golang-github-schollz-mnemonicode-1.0.1/000077500000000000000000000000001456652313700202235ustar00rootroot00000000000000golang-github-schollz-mnemonicode-1.0.1/LICENCE000066400000000000000000000024501456652313700212110ustar00rootroot00000000000000// From GitHub version/fork maintained by Stephen Paul Weber available at: // https://github.com/singpolyma/mnemonicode // // Originally from: // http://web.archive.org/web/20101031205747/http://www.tothink.com/mnemonic/ /* Copyright (c) 2000 Oren Tirosh Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */ golang-github-schollz-mnemonicode-1.0.1/README.md000066400000000000000000000121071456652313700215030ustar00rootroot00000000000000Mnemonicode
===========

Mnemonicode is a method for encoding binary data into a sequence of words which can be spoken over the phone, for example, and converted back to data on the other side.
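The underlying arithmetic is compact enough to sketch directly. The sketch below is illustrative only: it assumes the scheme's four-bytes-to-three-words grouping over a 1626-word list (as used by this package's internals), is not the package's API, and ignores partial trailing groups and word formatting.

```go
package main

import "fmt"

// base is the size of the mnemonic word list.
const base = 1626

// encodeGroup splits four bytes, read as a 32-bit little-endian value,
// into three base-1626 digits; each digit indexes the word list.
func encodeGroup(b [4]byte) (i0, i1, i2 uint32) {
	x := uint32(b[0]) | uint32(b[1])<<8 | uint32(b[2])<<16 | uint32(b[3])<<24
	return x % base, (x / base) % base, (x / base / base) % base
}

// decodeGroup reverses encodeGroup, recombining the three digits
// into the original 32-bit value and emitting its four bytes.
func decodeGroup(i0, i1, i2 uint32) (b [4]byte) {
	x := (i2*base+i1)*base + i0
	for i := range b {
		b[i] = byte(x >> (8 * uint(i)))
	}
	return b
}

func main() {
	in := [4]byte{0x01, 0x02, 0x03, 0x04}
	i0, i1, i2 := encodeGroup(in)
	fmt.Println(i0, i1, i2, decodeGroup(i0, i1, i2) == in) // → 967 743 25 true
}
```

Since 1626³ (4,298,942,376) exceeds 2³², three word indices always suffice for a four-byte group.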
[![GoDoc](https://godoc.org/bitbucket.org/dchapes/mnemonicode?status.png)](https://godoc.org/bitbucket.org/dchapes/mnemonicode)

Online package documentation is available via
[https://godoc.org/bitbucket.org/dchapes/mnemonicode](https://godoc.org/bitbucket.org/dchapes/mnemonicode).

To install the package:

    go get bitbucket.org/dchapes/mnemonicode

or the command line programs:

    go get bitbucket.org/dchapes/mnemonicode/cmd/...

or `go build` any Go code that imports it:

    import "bitbucket.org/dchapes/mnemonicode"

For more information see or From the README there:

There are some other somewhat similar systems that seem less satisfactory:

- OTP was designed for easy typing, and for minimizing length, but as a consequence the word list contains words that are similar ("AD" and "ADD") that are poor for dictating over the phone
- PGPfone has optimized "maximum phonetic distance" between words, which resolves the above problem but has some other drawbacks:
    - Low efficiency, as it encodes a little less than 1 bit per character;
    - Word quality issues, as some words are somewhat obscure to non-native speakers of English, or are awkward to use or type.

Mnemonic tries to do better by being more selective about its word list. Its criteria are thus:

Mandatory Criteria:

- The wordlist contains 1626 words.
- All words are between 4 and 7 letters long.
- No word in the list is a prefix of another word (e.g. visit, visitor).
- Five letter prefixes of words are sufficient to be unique.

Less Strict Criteria:

- The words should be usable by people all over the world. The list is far from perfect in that respect. It is heavily biased towards western culture and English in particular. The international vocabulary is simply not big enough. One can argue that even words like "hotel" or "radio" are not truly international.
You will find many English words in the list but I have tried to limit them to words that are part of a beginner's vocabulary or words that have close relatives in other European languages. In some cases a word has a different meaning in another language or is pronounced very differently but for the purpose of the encoding it is still ok - I assume that when the encoding is used for spoken communication both sides speak the same language.

- The words should have more than one syllable. This makes them easier to recognize when spoken, especially over a phone line. Again, you will find many exceptions. For one-syllable words I have tried to use words with 3 or more consonants or words with diphthongs, making for a longer and more distinct pronunciation. As a result of this requirement the average word length has increased. I do not consider this to be a problem since my goal in limiting the word length was not to reduce the average length of encoded data but to limit the maximum length to fit in fixed-size fields or a terminal line width.
- No two words on the list should sound too much alike. Soundalikes such as "sweet" and "suite" are ruled out. One of the two is chosen and the other should be accepted by the decoder's soundalike matching code or using explicit aliases for some words.
- No offensive words. The rule was to avoid words that I would not like to be printed on my business card. I have extended this to words that by themselves are not offensive but are too likely to create combinations that someone may find embarrassing or offensive. This includes words dealing with religion such as "church" or "jewish" and some words with negative meanings like "problem" or "fiasco". I am sure that a creative mind (or a random number generator) can find plenty of embarrassing or offensive word combinations using only words in the list but I have tried to avoid the more obvious ones.
One of my tools for this was simply a generator of random word combinations - the problematic ones stick out like a sore thumb.

- Avoid words with tricky spelling or pronunciation. Even if the receiver of the message can probably spell the word close enough for the soundalike matcher to recognize it correctly I prefer avoiding such words. I believe this will help users feel more comfortable using the system, increase the level of confidence and decrease the overall error rate. Most words in the list can be spelled more or less correctly from hearing, even without knowing the word.
- The word should feel right for the job. I know, this one is very subjective but some words would meet all the criteria and still not feel right for the purpose of mnemonic encoding. The word should feel like one of the words in the radio phonetic alphabets (alpha, bravo, charlie, delta etc).

golang-github-schollz-mnemonicode-1.0.1/cmd/000077500000000000000000000000001456652313700207665ustar00rootroot00000000000000golang-github-schollz-mnemonicode-1.0.1/cmd/mndecode/000077500000000000000000000000001456652313700225445ustar00rootroot00000000000000golang-github-schollz-mnemonicode-1.0.1/cmd/mndecode/hexout.go000066400000000000000000000011371456652313700244110ustar00rootroot00000000000000package main

import (
	"encoding/hex"
	"io"
)

const bufsize = 256

type hexdump struct {
	w   io.Writer
	buf [bufsize]byte
}

func hexoutput(w io.Writer) io.WriteCloser { return &hexdump{w: w} }

func (h *hexdump) Write(data []byte) (n int, err error) {
	for n < len(data) {
		amt := len(data) - n
		if hex.EncodedLen(amt) > bufsize {
			amt = hex.DecodedLen(bufsize)
		}
		nn := hex.Encode(h.buf[:], data[n:n+amt])
		_, err := h.w.Write(h.buf[:nn])
		n += amt
		if err != nil {
			return n, err
		}
	}
	return n, nil
}

func (h *hexdump) Close() error {
	_, err := h.w.Write([]byte{'\n'})
	return err
}
golang-github-schollz-mnemonicode-1.0.1/cmd/mndecode/mndecode.go000066400000000000000000000015041456652313700246510ustar00rootroot00000000000000package main import ( "flag" "io" "log" "os" "path" "github.com/schollz/mnemonicode" ) func main() { log.SetFlags(0) log.SetPrefix(path.Base(os.Args[0]) + ": ") hexFlag := flag.Bool("x", false, "hex output") verboseFlag := flag.Bool("v", false, "verbose") flag.Parse() if flag.NArg() > 0 { flag.Usage() os.Exit(2) } output := io.WriteCloser(os.Stdout) if *hexFlag { output = hexoutput(output) } var n int64 var err error if true { dec := mnemonicode.NewDecoder(os.Stdin) n, err = io.Copy(output, dec) } else { w := mnemonicode.NewDecodeWriter(output) n, err = io.Copy(w, os.Stdin) if err != nil { log.Fatal(err) } err = w.Close() } if err != nil { log.Fatal(err) } if *verboseFlag { log.Println("bytes decoded:", n) } if err = output.Close(); err != nil { log.Fatal(err) } } golang-github-schollz-mnemonicode-1.0.1/cmd/mnencode/000077500000000000000000000000001456652313700225565ustar00rootroot00000000000000golang-github-schollz-mnemonicode-1.0.1/cmd/mnencode/hexin.go000066400000000000000000000020511456652313700242160ustar00rootroot00000000000000package main import ( "encoding/hex" "unicode" "unicode/utf8" "golang.org/x/text/transform" ) type hexinput bool func (h *hexinput) Reset() { *h = false } func (h *hexinput) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) { for r, sz := rune(0), 0; len(src) > 0; src = src[sz:] { if r = rune(src[0]); r < utf8.RuneSelf { sz = 1 } else { r, sz = utf8.DecodeRune(src) if sz == 1 { // Invalid rune. 
if !atEOF && !utf8.FullRune(src) { err = transform.ErrShortSrc break } // Just ignore it nSrc++ continue } } if unicode.IsSpace(r) { nSrc += sz continue } if sz > 1 { err = hex.InvalidByteError(src[0]) // XXX break } if len(src) < 2 { err = transform.ErrShortSrc break } if nDst+1 > len(dst) { err = transform.ErrShortDst break } sz = 2 nSrc += 2 if !*h { *h = true if r == '0' && (src[1] == 'x' || src[1] == 'X') { continue } } if _, err = hex.Decode(dst[nDst:], src[:2]); err != nil { break } nDst++ } return } golang-github-schollz-mnemonicode-1.0.1/cmd/mnencode/mnencode.go000066400000000000000000000053661456652313700247070ustar00rootroot00000000000000package main import ( "flag" "fmt" "io" "io/ioutil" "log" "os" "path" "strconv" "github.com/schollz/mnemonicode" "golang.org/x/text/transform" ) type quoted string func (q quoted) Get() interface{} { return string(q) } func (q quoted) String() string { return strconv.Quote(string(q)) } func (q *quoted) Set(s string) (err error) { if s, err = strconv.Unquote(`"` + s + `"`); err == nil { *q = quoted(s) } return } type quotedRune rune func (qr quotedRune) Get() interface{} { return rune(qr) } func (qr quotedRune) String() string { return strconv.QuoteRune(rune(qr)) } func (qr *quotedRune) Set(s string) error { r, _, x, err := strconv.UnquoteChar(s, 0) if err != nil { return err } if x != "" { return fmt.Errorf("more than a single rune") } *qr = quotedRune(r) return nil } func main() { log.SetFlags(0) log.SetPrefix(path.Base(os.Args[0]) + ": ") vlog := log.New(os.Stderr, log.Prefix(), log.Flags()) config := mnemonicode.NewDefaultConfig() prefix := quoted(config.LinePrefix) suffix := quoted(config.LineSuffix) wordsep := quoted(config.WordSeparator) groupsep := quoted(config.GroupSeparator) pad := quotedRune(config.WordPadding) flag.Var(&prefix, "prefix", "prefix each line with `string`") flag.Var(&suffix, "suffix", "suffix each line with `string`") flag.Var(&wordsep, "word", "separate each word with `wsep`") 
flag.Var(&groupsep, "group", "separate each word group with `gsep`") words := flag.Uint("words", config.WordsPerGroup, "words per group") groups := flag.Uint("groups", config.GroupsPerLine, "groups per line") nopad := flag.Bool("nopad", false, "do not pad words") flag.Var(&pad, "pad", "pad shorter words with `rune`") hexin := flag.Bool("x", false, "hex input") verbose := flag.Bool("v", false, "verbose") flag.Parse() if flag.NArg() > 0 { flag.Usage() os.Exit(2) } if !*verbose { vlog.SetOutput(ioutil.Discard) } config.LinePrefix = prefix.Get().(string) config.LineSuffix = suffix.Get().(string) config.GroupSeparator = groupsep.Get().(string) config.WordSeparator = wordsep.Get().(string) config.WordPadding = pad.Get().(rune) if *words > 0 { config.WordsPerGroup = *words } if *groups > 0 { config.GroupsPerLine = *groups } if *nopad { config.WordPadding = 0 } vlog.Println("Wordlist ver", mnemonicode.WordListVersion) input := io.Reader(os.Stdin) if *hexin { input = transform.NewReader(input, new(hexinput)) } var n int64 var err error if true { enc := mnemonicode.NewEncoder(os.Stdout, config) n, err = io.Copy(enc, input) if err != nil { log.Fatal(err) } err = enc.Close() } else { r := mnemonicode.NewEncodeReader(input, config) n, err = io.Copy(os.Stdout, r) } if err != nil { log.Fatal(err) } fmt.Println() vlog.Println("bytes encoded:", n) } golang-github-schollz-mnemonicode-1.0.1/fuzz.go000066400000000000000000000027131456652313700215530ustar00rootroot00000000000000// For use with go-fuzz, "github.com/dvyukov/go-fuzz" // // +build gofuzz package mnemonicode import ( "bytes" "fmt" "golang.org/x/text/transform" ) var ( tenc = NewEncodeTransformer(nil) tdec = NewDecodeTransformer() tencdec = transform.Chain(tenc, tdec) ) //go:generate go-fuzz-build bitbucket.org/dchapes/mnemonicode // Then: // go-fuzz -bin=mnemonicode-fuzz.zip -workdir=fuzz // Fuzz is for use with go-fuzz, "github.com/dvyukov/go-fuzz" func Fuzz(data []byte) int { words := EncodeWordList(nil, data) if 
len(words) != WordsRequired(len(data)) { panic("bad WordsRequired result") } data2, err := DecodeWordList(nil, words) if err != nil { fmt.Println("words:", words) panic(err) } if !bytes.Equal(data, data2) { fmt.Println("words:", words) panic("data != data2") } data3, _, err := transform.Bytes(tencdec, data) if err != nil { panic(err) } if !bytes.Equal(data, data3) { fmt.Println("words:", words) panic("data != data3") } if len(data) == 0 { return 0 } return 1 } //go:generate go-fuzz-build -func Fuzz2 -o mnemonicode-fuzz2.zip bitbucket.org/dchapes/mnemonicode // Then: // go-fuzz -bin=mnemonicode-fuzz2.zip -workdir=fuzz2 // Fuzz2 is another fuzz tester, this time with words as input rather than binary data. func Fuzz2(data []byte) int { _, _, err := transform.Bytes(tdec, data) if err != nil { if _, ok := err.(WordError); !ok { return 0 } fmt.Println("Unexpected error") panic(err) } return 1 } golang-github-schollz-mnemonicode-1.0.1/go.mod000066400000000000000000000001101456652313700213210ustar00rootroot00000000000000module github.com/schollz/mnemonicode require golang.org/x/text v0.3.0 golang-github-schollz-mnemonicode-1.0.1/issue002_test.go000066400000000000000000000014541456652313700231670ustar00rootroot00000000000000package mnemonicode_test import ( "bytes" "io" "strings" "testing" "github.com/schollz/mnemonicode" ) func TestIssue002(t *testing.T) { buf := &bytes.Buffer{} // Code from: const issue = `https://bitbucket.org/dchapes/mnemonicode/issues/2` config := mnemonicode.NewDefaultConfig() config.GroupsPerLine = 1 config.LineSuffix = "\n" config.GroupSeparator = "\n" config.WordPadding = 0 config.WordsPerGroup = 1 config.WordSeparator = "\n" src := strings.NewReader("abcdefgh") r := mnemonicode.NewEncodeReader(src, config) //io.Copy(os.Stdout, r) io.Copy(buf, r) // Note, in the issue the expected trailing newline is missing. 
	const expected = `bogart
atlas
safari
airport
cabaret
shock`
	if s := buf.String(); s != expected {
		t.Errorf("%v\n\tgave %q\n\twant%q", issue, s, expected)
	}
}
golang-github-schollz-mnemonicode-1.0.1/mnemonicode.go000066400000000000000000000320731456652313700230540ustar00rootroot00000000000000// Package mnemonicode …
package mnemonicode

import (
	"fmt"
	"io"
	"strings"
	"unicode/utf8"

	"golang.org/x/text/transform"
)

// WordsRequired returns the number of words required to encode input
// data of length bytes using mnemonic encoding.
//
// Every four bytes of input is encoded into three words. If there
// is an extra one or two bytes they get an extra one or two words
// respectively. If there are three extra bytes, they will be encoded
// into three words with the last word being one of a small set of very
// short words (only needed to encode the last 3 bits).
func WordsRequired(length int) int {
	return ((length + 1) * 3) / 4
}

// A Config structure contains options for mnemonic encoding.
//
// {PREFIX}word{wsep}word{gsep}word{wsep}word{SUFFIX}
type Config struct {
	LinePrefix     string
	LineSuffix     string
	WordSeparator  string
	GroupSeparator string
	WordsPerGroup  uint
	GroupsPerLine  uint
	WordPadding    rune
}

var defaultConfig = Config{
	LinePrefix:     "",
	LineSuffix:     "\n",
	WordSeparator:  " ",
	GroupSeparator: " - ",
	WordsPerGroup:  3,
	GroupsPerLine:  3,
	WordPadding:    ' ',
}

// NewDefaultConfig returns a newly allocated Config initialised with default values.
func NewDefaultConfig() *Config {
	r := new(Config)
	*r = defaultConfig
	return r
}

// NewEncodeReader returns a new io.Reader that will return a
// formatted list of mnemonic words representing the bytes in r.
//
// The configuration of the word formatting is controlled
// by c, which can be nil for default formatting.
func NewEncodeReader(r io.Reader, c *Config) io.Reader { t := NewEncodeTransformer(c) return transform.NewReader(r, t) } // NewEncoder returns a new io.WriteCloser that will write a formatted // list of mnemonic words representing the bytes written to w. The user // needs to call Close to flush unwritten bytes that may be buffered. // // The configuration of the word formatting is controlled // by c, which can be nil for default formatting. func NewEncoder(w io.Writer, c *Config) io.WriteCloser { t := NewEncodeTransformer(c) return transform.NewWriter(w, t) } // NewEncodeTransformer returns a new transformer // that encodes bytes into mnemonic words. // // The configuration of the word formatting is controlled // by c, which can be nil for default formatting. func NewEncodeTransformer(c *Config) transform.Transformer { if c == nil { c = &defaultConfig } return &enctrans{ c: *c, state: needPrefix, } } type enctrans struct { c Config state encTransState wordCnt uint groupCnt uint wordidx [3]int wordidxcnt int // remaining indexes in wordidx; wordidx[3-wordidxcnt:] } func (t *enctrans) Reset() { t.state = needPrefix t.wordCnt = 0 t.groupCnt = 0 t.wordidxcnt = 0 } type encTransState uint8 const ( needNothing = iota needPrefix needWordSep needGroupSep needSuffix ) func (t *enctrans) strState() (str string, nextState encTransState) { switch t.state { case needPrefix: str = t.c.LinePrefix case needWordSep: str = t.c.WordSeparator case needGroupSep: str = t.c.GroupSeparator case needSuffix: str = t.c.LineSuffix nextState = needPrefix } return } func (t *enctrans) advState() { t.wordCnt++ if t.wordCnt < t.c.WordsPerGroup { t.state = needWordSep } else { t.wordCnt = 0 t.groupCnt++ if t.groupCnt < t.c.GroupsPerLine { t.state = needGroupSep } else { t.groupCnt = 0 t.state = needSuffix } } } // transformWords consumes words from wordidx copying the words with // formatting into dst. // On return, if err==nil, all words were consumed (wordidxcnt==0). 
func (t *enctrans) transformWords(dst []byte) (nDst int, err error) { //log.Println("transformWords: len(dst)=",len(dst),"wordidxcnt=",t.wordidxcnt) for t.wordidxcnt > 0 { for t.state != needNothing { str, nextState := t.strState() if len(dst) < len(str) { return nDst, transform.ErrShortDst } n := copy(dst, str) dst = dst[n:] nDst += n t.state = nextState } word := wordList[t.wordidx[3-t.wordidxcnt]] n := len(word) if n < longestWord { if rlen := utf8.RuneLen(t.c.WordPadding); rlen > 0 { n += (longestWord - n) * rlen } } if len(dst) < n { return nDst, transform.ErrShortDst } n = copy(dst, word) t.wordidxcnt-- dst = dst[n:] nDst += n if t.c.WordPadding != 0 { for i := n; i < longestWord; i++ { n = utf8.EncodeRune(dst, t.c.WordPadding) dst = dst[n:] nDst += n } } t.advState() } return nDst, nil } // Transform implements the transform.Transformer interface. func (t *enctrans) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) { //log.Printf("Transform(%d,%d,%t)\n", len(dst), len(src), atEOF) var n int for { if t.wordidxcnt > 0 { n, err = t.transformWords(dst) dst = dst[n:] nDst += n if err != nil { //log.Printf("\t\t\tRet1: (%d) %d, %d, %v\n", t.wordidxcnt, nDst, nSrc, err) return } } var x uint32 switch { case len(src) >= 4: x = uint32(src[0]) x |= uint32(src[1]) << 8 x |= uint32(src[2]) << 16 x |= uint32(src[3]) << 24 src = src[4:] nSrc += 4 t.wordidx[0] = int(x % base) t.wordidx[1] = int(x/base) % base t.wordidx[2] = int(x/base/base) % base t.wordidxcnt = 3 //log.Printf("\t\tConsumed 4 bytes (%d, %d)", nDst, nSrc) //continue case len(src) == 0: //log.Printf("\t\t\tRet2: (%d) %d, %d, %v\n", t.wordidxcnt, nDst, nSrc, err) return case !atEOF: //log.Printf("\t\t!atEOF (%d, %d)", nDst, nSrc) err = transform.ErrShortSrc return default: x = 0 n = len(src) for i := n - 1; i >= 0; i-- { x <<= 8 x |= uint32(src[i]) } t.wordidx[3-n] = int(x % base) if n >= 2 { t.wordidx[4-n] = int(x/base) % base } if n == 3 { t.wordidx[2] = base + int(x/base/base)%7 } src = 
src[n:] nSrc += n t.wordidxcnt = n //log.Printf("\t\tatEOF (%d) (%d, %d)", t.wordidxcnt, nDst, nSrc) //continue } } } // // NewDecoder returns a new io.Reader that will return the // decoded bytes from mnemonic words in r. Unrecognized // words in r will cause reads to return an error. func NewDecoder(r io.Reader) io.Reader { t := NewDecodeTransformer() return transform.NewReader(r, t) } // NewDecodeWriter returns a new io.WriteCloser that will // write decoded bytes from mnemonic words written to it. // Unrecognized words will cause a write error. The user needs // to call Close to flush unwritten bytes that may be buffered. func NewDecodeWriter(w io.Writer) io.WriteCloser { t := NewDecodeTransformer() return transform.NewWriter(w, t) } // NewDecodeTransformer returns a new transform // that decodes mnemonic words into the represented // bytes. Unrecognized words will trigger an error. func NewDecodeTransformer() transform.Transformer { return &dectrans{wordidx: make([]int, 0, 3)} } type dectrans struct { wordidx []int short bool // last word in wordidx is/was short } func (t *dectrans) Reset() { t.wordidx = nil t.short = false } func (t *dectrans) transformWords(dst []byte) (int, error) { //log.Println("transformWords: len(dst)=",len(dst),"len(t.wordidx)=", len(t.wordidx)) n := len(t.wordidx) if n == 3 && !t.short { n = 4 } if len(dst) < n { return 0, transform.ErrShortDst } for len(t.wordidx) < 3 { t.wordidx = append(t.wordidx, 0) } x := uint32(t.wordidx[2]) x *= base x += uint32(t.wordidx[1]) x *= base x += uint32(t.wordidx[0]) for i := 0; i < n; i++ { dst[i] = byte(x) x >>= 8 } t.wordidx = t.wordidx[:0] return n, nil } type WordError interface { error Word() string } type UnexpectedWordError string type UnexpectedEndWordError string type UnknownWordError string func (e UnexpectedWordError) Word() string { return string(e) } func (e UnexpectedEndWordError) Word() string { return string(e) } func (e UnknownWordError) Word() string { return string(e) } func (e 
UnexpectedWordError) Error() string { return fmt.Sprintf("mnemonicode: unexpected word after short word: %q", string(e)) } func (e UnexpectedEndWordError) Error() string { return fmt.Sprintf("mnemonicode: unexpected end word: %q", string(e)) } func (e UnknownWordError) Error() string { return fmt.Sprintf("mnemonicode: unknown word: %q", string(e)) } // Transform implements the transform.Transformer interface. func (t *dectrans) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) { //log.Printf("Transform(%d,%d,%t)\n", len(dst), len(src), atEOF) var n int for len(t.wordidx) > 0 || len(src) > 0 { for len(t.wordidx) < 3 { var word []byte var idx int //n, word, err = bufio.ScanWords(src, atEOF) n, word, err = scanWords(src, atEOF) src = src[n:] nSrc += n if err != nil { //log.Print("ScanWords error:", err) return } if word == nil { if atEOF { //log.Printf("atEOF (%d, %d) %d, %d", nDst, nSrc, n, len(src)) n = len(src) src = src[n:] nSrc += n break } //log.Printf("\t\t!atEOF (%d, %d)", nDst, nSrc) err = transform.ErrShortSrc return } if t.short { err = UnexpectedWordError(word) //log.Print("short error:", err) return } idx, _, t.short, err = closestWordIdx(string(word), len(t.wordidx) == 2) if err != nil { //log.Print("closestWordIdx error:", err) return } t.wordidx = append(t.wordidx, idx) } if len(t.wordidx) > 0 { n, err = t.transformWords(dst) dst = dst[n:] nDst += n if n != 4 { //log.Println("transformWords returned:", n, err) //log.Println("len(t.wordidx):", len(t.wordidx), len(src)) } if err != nil { //log.Printf("\t\t\tRet1: (%d) %d, %d, %v\n", len(t.wordidx), nDst, nSrc, err) return } } } return } // const base = 1626 // EncodeWordList encodes src into mnemonic words which are appended to dst. // The final wordlist is returned. // There will be WordsRequired(len(src)) words appended.
func EncodeWordList(dst []string, src []byte) (result []string) { if n := len(dst) + WordsRequired(len(src)); cap(dst) < n { result = make([]string, len(dst), n) copy(result, dst) } else { result = dst } var x uint32 for len(src) >= 4 { x = uint32(src[0]) x |= uint32(src[1]) << 8 x |= uint32(src[2]) << 16 x |= uint32(src[3]) << 24 src = src[4:] i0 := int(x % base) i1 := int(x/base) % base i2 := int(x/base/base) % base result = append(result, wordList[i0], wordList[i1], wordList[i2]) } if len(src) > 0 { x = 0 for i := len(src) - 1; i >= 0; i-- { x <<= 8 x |= uint32(src[i]) } i := int(x % base) result = append(result, wordList[i]) if len(src) >= 2 { i = int(x/base) % base result = append(result, wordList[i]) } if len(src) == 3 { i = base + int(x/base/base)%7 result = append(result, wordList[i]) } } return result } func closestWordIdx(word string, shortok bool) (idx int, exact, short bool, err error) { word = strings.ToLower(word) if idx, exact = wordMap[word]; !exact { // TODO(dchapes): normalize unicode, remove accents, etc // TODO(dchapes): phonetic algorithm or other closest match err = UnknownWordError(word) return } if short = (idx >= base); short { idx -= base if !shortok { err = UnexpectedEndWordError(word) } } return } // DecodeWordList decodes the mnemonic words in src into bytes which are // appended to dst. 
func DecodeWordList(dst []byte, src []string) (result []byte, err error) { if n := (len(src)+2)/3*4 + len(dst); cap(dst) < n { result = make([]byte, len(dst), n) copy(result, dst) } else { result = dst } var idx [3]int for len(src) > 3 { if idx[0], _, _, err = closestWordIdx(src[0], false); err != nil { return nil, err } if idx[1], _, _, err = closestWordIdx(src[1], false); err != nil { return nil, err } if idx[2], _, _, err = closestWordIdx(src[2], false); err != nil { return nil, err } src = src[3:] x := uint32(idx[2]) x *= base x += uint32(idx[1]) x *= base x += uint32(idx[0]) result = append(result, byte(x), byte(x>>8), byte(x>>16), byte(x>>24)) } if len(src) > 0 { var short bool idx[1] = 0 idx[2] = 0 n := len(src) for i := 0; i < n; i++ { idx[i], _, short, err = closestWordIdx(src[i], i == 2) if err != nil { return nil, err } } x := uint32(idx[2]) x *= base x += uint32(idx[1]) x *= base x += uint32(idx[0]) result = append(result, byte(x)) if n > 1 { result = append(result, byte(x>>8)) } if n > 2 { result = append(result, byte(x>>16)) if !short { result = append(result, byte(x>>24)) } } } /* for len(src) > 0 { short := false n := len(src) if n > 3 { n = 3 } for i := 0; i < n; i++ { idx[i], _, err = closestWordIdx(src[i]) if err != nil { return nil, err } if idx[i] >= base { if i != 2 || len(src) != 3 { return nil, UnexpectedEndWord(src[i]) } short = true idx[i] -= base } } for i := n; i < 3; i++ { idx[i] = 0 } src = src[n:] x := uint32(idx[2]) x *= base x += uint32(idx[1]) x *= base x += uint32(idx[0]) result = append(result, byte(x)) if n > 1 { result = append(result, byte(x>>8)) } if n > 2 { result = append(result, byte(x>>16)) if !short { result = append(result, byte(x>>24)) } } } */ return result, nil } golang-github-schollz-mnemonicode-1.0.1/mnemonicode_test.go000066400000000000000000000146421456652313700241150ustar00rootroot00000000000000package mnemonicode import ( "bytes" "encoding/hex" "fmt" "strings" "testing" "golang.org/x/text/transform" ) func 
TestWordsReq(t *testing.T) { for i, n := range []int{0, 1, 2, 3, 3, 4, 5, 6, 6, 7, 8, 9, 9, 10} { r := WordsRequired(i) if r != n { t.Errorf("WordsRequired(%d) returned %d, expected %d", i, r, n) } } } var testData = []struct { hex string words []string }{ {"01", []string{"acrobat"}}, {"0102", []string{"opera", "academy"}}, {"010203", []string{"kayak", "cement", "ego"}}, {"01020304", []string{"papa", "twist", "alpine"}}, {"0102030405", []string{"papa", "twist", "alpine", "admiral"}}, {"010203040506", []string{"papa", "twist", "alpine", "shine", "academy"}}, {"01020304050607", []string{"papa", "twist", "alpine", "chess", "flute", "ego"}}, {"0102030405060708", []string{"papa", "twist", "alpine", "content", "sailor", "athena"}}, {"00", []string{"academy"}}, {"5A06", []string{"academy", "acrobat"}}, {"FE5D28", []string{"academy", "acrobat", "fax"}}, {"A2B55000", []string{"academy", "acrobat", "active"}}, {"A2B5500003", []string{"academy", "acrobat", "active", "actor"}}, {"A2B550006B19", []string{"academy", "acrobat", "active", "actor", "adam"}}, {"A2B550000F7128", []string{"academy", "acrobat", "active", "actor", "adam", "fax"}}, {"A2B550009FCFC900", []string{"academy", "acrobat", "active", "actor", "adam", "admiral"}}, {"FF", []string{"exact"}}, {"FFFF", []string{"nevada", "archive"}}, {"FFFFFF", []string{"claudia", "photo", "yes"}}, {"FFFFFFFF", []string{"natural", "analyze", "verbal"}}, {"123456789ABCDEF123456789ABCDEF012345", []string{ "plastic", "roger", "vincent", "pilgrim", "flame", "secure", "apropos", "polka", "earth", "radio", "modern", "aladdin", "marion", "airline"}}, } func compareWordList(tb testing.TB, expected, got []string, args ...interface{}) { fail := false if len(expected) != len(got) { fail = true } for i := 0; !fail && i < len(expected); i++ { fail = expected[i] != got[i] } if fail { prefix := "" if len(args) > 0 { prefix += fmt.Sprintln(args...) 
			prefix = prefix[:len(prefix)-1] + ": "
		}
		tb.Errorf("%vexpected %v, got %v", prefix, expected, got)
	}
}

func TestEncodeWordList(t *testing.T) {
	var result []string
	for i, d := range testData {
		raw, err := hex.DecodeString(d.hex)
		if err != nil {
			t.Fatal("bad test data:", i, err)
		}
		result = EncodeWordList(result, raw)
		compareWordList(t, d.words, result, i, d.hex)
		result = result[:0]
	}
}

func TestDecodeWordList(t *testing.T) {
	var result []byte
	var err error
	for i, d := range testData {
		raw, _ := hex.DecodeString(d.hex)
		result, err = DecodeWordList(result, d.words)
		if err != nil {
			t.Errorf("%2d %v failed: %v", i, d.words, err)
			continue
		}
		if !bytes.Equal(raw, result) {
			t.Errorf("%2d %v expected %v got %v", i, d.words, raw, result)
		}
		result = result[:0]
	}
}

func TestEncodeTransformer(t *testing.T) {
	cfg := NewDefaultConfig()
	cfg.GroupSeparator = " "
	enc := NewEncodeTransformer(cfg)
	for i, d := range testData {
		raw, err := hex.DecodeString(d.hex)
		if err != nil {
			t.Fatal("bad test data:", i, err)
		}
		result, _, err := transform.Bytes(enc, raw)
		if err != nil {
			t.Errorf("%2d %v failed: %v", i, d.words, err)
			continue
		}
		//t.Logf("%q", result)
		words := strings.Fields(string(result))
		compareWordList(t, d.words, words, i, d.hex)
	}
}

func TestDecodeTransformer(t *testing.T) {
	dec := NewDecodeTransformer()
	for i, d := range testData {
		raw, _ := hex.DecodeString(d.hex)
		words := strings.Join(d.words, " ")
		result, _, err := transform.Bytes(dec, []byte(words))
		if err != nil {
			t.Errorf("%2d %v failed: %v", i, d.words, err)
			continue
		}
		if !bytes.Equal(raw, result) {
			t.Errorf("%2d %v expected %v got %v", i, d.words, raw, result)
		}
	}
}

func TestEncodeFormatting(t *testing.T) {
	raw, _ := hex.DecodeString(testData[20].hex)
	input := string(raw)
	//words := testData[20].words
	tests := []struct {
		cfg       *Config
		formatted string
	}{
		{nil, "plastic roger vincent - pilgrim flame secure - apropos polka earth \nradio modern aladdin - marion airline"},
		{&Config{
			LinePrefix:     "{P}",
			LineSuffix:     "{S}\n",
			WordSeparator:  "{w}",
			GroupSeparator: "{g}",
			WordsPerGroup:  2,
			GroupsPerLine:  2,
			WordPadding:    '·',
		},
			`{P}plastic{w}roger··{g}vincent{w}pilgrim{S}
{P}flame··{w}secure·{g}apropos{w}polka··{S}
{P}earth··{w}radio··{g}modern·{w}aladdin{S}
{P}marion·{w}airline`},
	}
	for i, d := range tests {
		enc := NewEncodeTransformer(d.cfg)
		result, _, err := transform.String(enc, input)
		if err != nil {
			t.Errorf("%2d transform failed: %v", i, err)
			continue
		}
		if result != d.formatted {
			t.Errorf("%2d expected:\n%q\ngot:\n%q", i, d.formatted, result)
		}
	}
}

func BenchmarkEncodeWordList(b *testing.B) {
	// the list of all known words (except the short end words)
	data, err := DecodeWordList(nil, wordList[:base])
	if err != nil {
		b.Fatal("DecodeWordList failed:", err)
	}
	b.SetBytes(int64(len(data)))
	b.ReportAllocs()
	b.ResetTimer()
	var words []string
	for i := 0; i < b.N; i++ {
		words = EncodeWordList(words[:0], data)
	}
}

func BenchmarkDecodeWordList(b *testing.B) {
	b.ReportAllocs()
	var buf []byte
	var err error
	// decode the list of all known words (except the short end words)
	for i := 0; i < b.N; i++ {
		buf, err = DecodeWordList(buf[:0], wordList[:base])
		if err != nil {
			b.Fatal("DecodeWordList failed:", err)
		}
	}
	b.SetBytes(int64(len(buf)))
}

func BenchmarkEncodeTransformer(b *testing.B) {
	// the list of all known words (except the short end words)
	data, err := DecodeWordList(nil, wordList[:base])
	if err != nil {
		b.Fatal("DecodeWordList failed:", err)
	}
	enc := NewEncodeTransformer(nil)
	b.SetBytes(int64(len(data)))
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_, _, err := transform.Bytes(enc, data)
		if err != nil {
			b.Fatal("encode transformer error:", err)
		}
	}
}

func BenchmarkDecodeTransformer(b *testing.B) {
	data, err := DecodeWordList(nil, wordList[:base])
	if err != nil {
		b.Fatal("DecodeWordList failed:", err)
	}
	enc := NewEncodeTransformer(nil)
	words, _, err := transform.Bytes(enc, data)
	if err != nil {
		b.Fatal("encode transformer error:", err)
	}
	b.SetBytes(int64(len(data)))
	dec := NewDecodeTransformer()
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_, _, err := transform.Bytes(dec, words)
		if err != nil {
			b.Fatal("decode transformer error:", err)
		}
	}
}
golang-github-schollz-mnemonicode-1.0.1/scan_words.go
package mnemonicode

import (
	"unicode"
	"unicode/utf8"
)

// modified version of bufio.ScanWords from bufio/scan.go

// scanWords is a split function for a Scanner that returns
// each non-letter-separated word of text, with surrounding
// non-letters deleted. It will never return an empty string.
// The definition of letter is set by unicode.IsLetter.
func scanWords(data []byte, atEOF bool) (advance int, token []byte, err error) {
	// Skip leading non-letters.
	start := 0
	for width := 0; start < len(data); start += width {
		var r rune
		r, width = utf8.DecodeRune(data[start:])
		if unicode.IsLetter(r) {
			break
		}
	}
	if atEOF && len(data) == 0 {
		return 0, nil, nil
	}
	// Scan until non-letter, marking end of word.
	for width, i := 0, start; i < len(data); i += width {
		var r rune
		r, width = utf8.DecodeRune(data[i:])
		if !unicode.IsLetter(r) {
			return i + width, data[start:i], nil
		}
	}
	// If we're at EOF, we have a final, non-empty, non-terminated word. Return it.
	if atEOF && len(data) > start {
		return len(data), data[start:], nil
	}
	// Request more data.
	return 0, nil, nil
}
golang-github-schollz-mnemonicode-1.0.1/word_list.go
package mnemonicode

// WordListVersion is the version of the compiled-in word list.
const WordListVersion = "0.7"

var wordMap = make(map[string]int, len(wordList))

func init() {
	for i, w := range wordList {
		wordMap[w] = i
	}
}

const longestWord = 7

var wordList = []string{
"academy", "acrobat", "active", "actor", "adam", "admiral", "adrian", "africa", "agenda", "agent", "airline", "airport", "aladdin", "alarm", "alaska", "albert", "albino", "album", "alcohol", "alex",
"algebra", "alibi", "alice", "alien", "alpha", "alpine", "amadeus", "amanda", "amazon", "amber", "america", "amigo", "analog", "anatomy", "angel", "animal", "antenna", "antonio", "apollo", "april",
"archive", "arctic", "arizona", "arnold", "aroma", "arthur", "artist", "asia", "aspect", "aspirin", "athena", "athlete", "atlas", "audio", "august", "austria", "axiom", "aztec", "balance", "ballad",
"banana", "bandit", "banjo", "barcode", "baron", "basic", "battery", "belgium", "berlin", "bermuda", "bernard", "bikini", "binary", "bingo", "biology", "block", "blonde", "bonus", "boris", "boston",
"boxer", "brandy", "bravo", "brazil", "bronze", "brown", "bruce", "bruno", "burger", "burma", "cabinet", "cactus", "cafe", "cairo", "cake", "calypso", "camel", "camera", "campus", "canada",
"canal", "cannon", "canoe", "cantina", "canvas", "canyon", "capital", "caramel", "caravan", "carbon", "cargo", "carlo", "carol", "carpet", "cartel", "casino", "castle", "castro", "catalog", "caviar",
"cecilia", "cement", "center", "century", "ceramic", "chamber", "chance", "change", "chaos", "charlie", "charm", "charter", "chef", "chemist", "cherry", "chess", "chicago", "chicken", "chief", "china",
"cigar", "cinema", "circus", "citizen", "city", "clara", "classic", "claudia", "clean", "client", "climax", "clinic", "clock", "club", "cobra", "coconut", "cola", "collect", "colombo", "colony",
"color", "combat", "comedy", "comet", "command", "compact", "company", "complex", "concept", "concert", "connect", "consul", "contact", "context", "contour", "control", "convert", "copy", "corner", "corona",
"correct", "cosmos",
"couple", "courage", "cowboy", "craft", "crash", "credit", "cricket", "critic", "crown", "crystal", "cuba", "culture", "dallas", "dance", "daniel", "david", "decade", "decimal", "deliver", "delta", "deluxe", "demand", "demo", "denmark", "derby", "design", "detect", "develop", "diagram", "dialog", "diamond", "diana", "diego", "diesel", "diet", "digital", "dilemma", "diploma", "direct", "disco", "disney", "distant", "doctor", "dollar", "dominic", "domino", "donald", "dragon", "drama", "dublin", "duet", "dynamic", "east", "ecology", "economy", "edgar", "egypt", "elastic", "elegant", "element", "elite", "elvis", "email", "energy", "engine", "english", "episode", "equator", "escort", "ethnic", "europe", "everest", "evident", "exact", "example", "exit", "exotic", "export", "express", "extra", "fabric", "factor", "falcon", "family", "fantasy", "fashion", "fiber", "fiction", "fidel", "fiesta", "figure", "film", "filter", "final", "finance", "finish", "finland", "flash", "florida", "flower", "fluid", "flute", "focus", "ford", "forest", "formal", "format", "formula", "fortune", "forum", "fragile", "france", "frank", "friend", "frozen", "future", "gabriel", "galaxy", "gallery", "gamma", "garage", "garden", "garlic", "gemini", "general", "genetic", "genius", "germany", "global", "gloria", "golf", "gondola", "gong", "good", "gordon", "gorilla", "grand", "granite", "graph", "green", "group", "guide", "guitar", "guru", "hand", "happy", "harbor", "harmony", "harvard", "havana", "hawaii", "helena", "hello", "henry", "hilton", "history", "horizon", "hotel", "human", "humor", "icon", "idea", "igloo", "igor", "image", "impact", "import", "index", "india", "indigo", "input", "insect", "instant", "iris", "italian", "jacket", "jacob", "jaguar", "janet", "japan", "jargon", "jazz", "jeep", "john", "joker", "jordan", "jumbo", "june", "jungle", "junior", "jupiter", "karate", "karma", "kayak", "kermit", "kilo", "king", "koala", "korea", "labor", "lady", "lagoon", "laptop", "laser", "latin", 
"lava", "lecture", "left", "legal", "lemon", "level", "lexicon", "liberal", "libra", "limbo", "limit", "linda", "linear", "lion", "liquid", "liter", "little", "llama", "lobby", "lobster", "local", "logic", "logo", "lola", "london", "lotus", "lucas", "lunar", "machine", "macro", "madam", "madonna", "madrid", "maestro", "magic", "magnet", "magnum", "major", "mama", "mambo", "manager", "mango", "manila", "marco", "marina", "market", "mars", "martin", "marvin", "master", "matrix", "maximum", "media", "medical", "mega", "melody", "melon", "memo", "mental", "mentor", "menu", "mercury", "message", "metal", "meteor", "meter", "method", "metro", "mexico", "miami", "micro", "million", "mineral", "minimum", "minus", "minute", "miracle", "mirage", "miranda", "mister", "mixer", "mobile", "model", "modem", "modern", "modular", "moment", "monaco", "monica", "monitor", "mono", "monster", "montana", "morgan", "motel", "motif", "motor", "mozart", "multi", "museum", "music", "mustang", "natural", "neon", "nepal", "neptune", "nerve", "neutral", "nevada", "news", "ninja", "nirvana", "normal", "nova", "novel", "nuclear", "numeric", "nylon", "oasis", "object", "observe", "ocean", "octopus", "olivia", "olympic", "omega", "opera", "optic", "optimal", "orange", "orbit", "organic", "orient", "origin", "orlando", "oscar", "oxford", "oxygen", "ozone", "pablo", "pacific", "pagoda", "palace", "pamela", "panama", "panda", "panel", "panic", "paradox", "pardon", "paris", "parker", "parking", "parody", "partner", "passage", "passive", "pasta", "pastel", "patent", "patriot", "patrol", "patron", "pegasus", "pelican", "penguin", "pepper", "percent", "perfect", "perfume", "period", "permit", "person", "peru", "phone", "photo", "piano", "picasso", "picnic", "picture", "pigment", "pilgrim", "pilot", "pirate", "pixel", "pizza", "planet", "plasma", "plaster", "plastic", "plaza", "pocket", "poem", "poetic", "poker", "polaris", "police", "politic", "polo", "polygon", "pony", "popcorn", "popular", "postage", 
"postal", "precise", "prefix", "premium", "present", "price", "prince", "printer", "prism", "private", "product", "profile", "program", "project", "protect", "proton", "public", "pulse", "puma", "pyramid", "queen", "radar", "radio", "random", "rapid", "rebel", "record", "recycle", "reflex", "reform", "regard", "regular", "relax", "report", "reptile", "reverse", "ricardo", "ringo", "ritual", "robert", "robot", "rocket", "rodeo", "romeo", "royal", "russian", "safari", "salad", "salami", "salmon", "salon", "salute", "samba", "sandra", "santana", "sardine", "school", "screen", "script", "second", "secret", "section", "segment", "select", "seminar", "senator", "senior", "sensor", "serial", "service", "sheriff", "shock", "sierra", "signal", "silicon", "silver", "similar", "simon", "single", "siren", "slogan", "social", "soda", "solar", "solid", "solo", "sonic", "soviet", "special", "speed", "spiral", "spirit", "sport", "static", "station", "status", "stereo", "stone", "stop", "street", "strong", "student", "studio", "style", "subject", "sultan", "super", "susan", "sushi", "suzuki", "switch", "symbol", "system", "tactic", "tahiti", "talent", "tango", "tarzan", "taxi", "telex", "tempo", "tennis", "texas", "textile", "theory", "thermos", "tiger", "titanic", "tokyo", "tomato", "topic", "tornado", "toronto", "torpedo", "total", "totem", "tourist", "tractor", "traffic", "transit", "trapeze", "travel", "tribal", "trick", "trident", "trilogy", "tripod", "tropic", "trumpet", "tulip", "tuna", "turbo", "twist", "ultra", "uniform", "union", "uranium", "vacuum", "valid", "vampire", "vanilla", "vatican", "velvet", "ventura", "venus", "vertigo", "veteran", "victor", "video", "vienna", "viking", "village", "vincent", "violet", "violin", "virtual", "virus", "visa", "vision", "visitor", "visual", "vitamin", "viva", "vocal", "vodka", "volcano", "voltage", "volume", "voyage", "water", "weekend", "welcome", "western", "window", "winter", "wizard", "wolf", "world", "xray", "yankee", "yoga", 
"yogurt", "yoyo", "zebra", "zero", "zigzag", "zipper", "zodiac", "zoom", "abraham", "action", "address", "alabama", "alfred", "almond", "ammonia", "analyze", "annual", "answer", "apple", "arena", "armada", "arsenal", "atlanta", "atomic", "avenue", "average", "bagel", "baker", "ballet", "bambino", "bamboo", "barbara", "basket", "bazaar", "benefit", "bicycle", "bishop", "blitz", "bonjour", "bottle", "bridge", "british", "brother", "brush", "budget", "cabaret", "cadet", "candle", "capitan", "capsule", "career", "cartoon", "channel", "chapter", "cheese", "circle", "cobalt", "cockpit", "college", "compass", "comrade", "condor", "crimson", "cyclone", "darwin", "declare", "degree", "delete", "delphi", "denver", "desert", "divide", "dolby", "domain", "domingo", "double", "drink", "driver", "eagle", "earth", "echo", "eclipse", "editor", "educate", "edward", "effect", "electra", "emerald", "emotion", "empire", "empty", "escape", "eternal", "evening", "exhibit", "expand", "explore", "extreme", "ferrari", "first", "flag", "folio", "forget", "forward", "freedom", "fresh", "friday", "fuji", "galileo", "garcia", "genesis", "gold", "gravity", "habitat", "hamlet", "harlem", "helium", "holiday", "house", "hunter", "ibiza", "iceberg", "imagine", "infant", "isotope", "jackson", "jamaica", "jasmine", "java", "jessica", "judo", "kitchen", "lazarus", "letter", "license", "lithium", "loyal", "lucky", "magenta", "mailbox", "manual", "marble", "mary", "maxwell", "mayor", "milk", "monarch", "monday", "money", "morning", "mother", "mystery", "native", "nectar", "nelson", "network", "next", "nikita", "nobel", "nobody", "nominal", "norway", "nothing", "number", "october", "office", "oliver", "opinion", "option", "order", "outside", "package", "pancake", "pandora", "panther", "papa", "patient", "pattern", "pedro", "pencil", "people", "phantom", "philips", "pioneer", "pluto", "podium", "portal", "potato", "prize", "process", "protein", "proxy", "pump", "pupil", "python", "quality", "quarter", 
"quiet", "rabbit", "radical", "radius", "rainbow", "ralph", "ramirez", "ravioli", "raymond", "respect", "respond", "result", "resume", "retro", "richard", "right", "risk", "river", "roger", "roman", "rondo", "sabrina", "salary", "salsa", "sample", "samuel", "saturn", "savage", "scarlet", "scoop", "scorpio", "scratch", "scroll", "sector", "serpent", "shadow", "shampoo", "sharon", "sharp", "short", "shrink", "silence", "silk", "simple", "slang", "smart", "smoke", "snake", "society", "sonar", "sonata", "soprano", "source", "sparta", "sphere", "spider", "sponsor", "spring", "acid", "adios", "agatha", "alamo", "alert", "almanac", "aloha", "andrea", "anita", "arcade", "aurora", "avalon", "baby", "baggage", "balloon", "bank", "basil", "begin", "biscuit", "blue", "bombay", "brain", "brenda", "brigade", "cable", "carmen", "cello", "celtic", "chariot", "chrome", "citrus", "civil", "cloud", "common", "compare", "cool", "copper", "coral", "crater", "cubic", "cupid", "cycle", "depend", "door", "dream", "dynasty", "edison", "edition", "enigma", "equal", "eric", "event", "evita", "exodus", "extend", "famous", "farmer", "food", "fossil", "frog", "fruit", "geneva", "gentle", "george", "giant", "gilbert", "gossip", "gram", "greek", "grille", "hammer", "harvest", "hazard", "heaven", "herbert", "heroic", "hexagon", "husband", "immune", "inca", "inch", "initial", "isabel", "ivory", "jason", "jerome", "joel", "joshua", "journal", "judge", "juliet", "jump", "justice", "kimono", "kinetic", "leonid", "lima", "maze", "medusa", "member", "memphis", "michael", "miguel", "milan", "mile", "miller", "mimic", "mimosa", "mission", "monkey", "moral", "moses", "mouse", "nancy", "natasha", "nebula", "nickel", "nina", "noise", "orchid", "oregano", "origami", "orinoco", "orion", "othello", "paper", "paprika", "prelude", "prepare", "pretend", "profit", "promise", "provide", "puzzle", "remote", "repair", "reply", "rival", "riviera", "robin", "rose", "rover", "rudolf", "saga", "sahara", "scholar", 
"shelter", "ship", "shoe", "sigma", "sister", "sleep", "smile", "spain", "spark", "split", "spray", "square", "stadium", "star", "storm", "story", "strange", "stretch", "stuart", "subway", "sugar", "sulfur", "summer", "survive", "sweet", "swim", "table", "taboo", "target", "teacher", "telecom", "temple", "tibet", "ticket", "tina", "today", "toga", "tommy", "tower", "trivial", "tunnel", "turtle", "twin", "uncle", "unicorn", "unique", "update", "valery", "vega", "version", "voodoo", "warning", "william", "wonder", "year", "yellow", "young", "absent", "absorb", "accent", "alfonso", "alias", "ambient", "andy", "anvil", "appear", "apropos", "archer", "ariel", "armor", "arrow", "austin", "avatar", "axis", "baboon", "bahama", "bali", "balsa", "bazooka", "beach", "beast", "beatles", "beauty", "before", "benny", "betty", "between", "beyond", "billy", "bison", "blast", "bless", "bogart", "bonanza", "book", "border", "brave", "bread", "break", "broken", "bucket", "buenos", "buffalo", "bundle", "button", "buzzer", "byte", "caesar", "camilla", "canary", "candid", "carrot", "cave", "chant", "child", "choice", "chris", "cipher", "clarion", "clark", "clever", "cliff", "clone", "conan", "conduct", "congo", "content", "costume", "cotton", "cover", "crack", "current", "danube", "data", "decide", "desire", "detail", "dexter", "dinner", "dispute", "donor", "druid", "drum", "easy", "eddie", "enjoy", "enrico", "epoxy", "erosion", "except", "exile", "explain", "fame", "fast", "father", "felix", "field", "fiona", "fire", "fish", "flame", "flex", "flipper", "float", "flood", "floor", "forbid", "forever", "fractal", "frame", "freddie", "front", "fuel", "gallop", "game", "garbo", "gate", "gibson", "ginger", "giraffe", "gizmo", "glass", "goblin", "gopher", "grace", "gray", "gregory", "grid", "griffin", "ground", "guest", "gustav", "gyro", "hair", "halt", "harris", "heart", "heavy", "herman", "hippie", "hobby", "honey", "hope", "horse", "hostel", "hydro", "imitate", "info", "ingrid", "inside", 
"invent", "invest", "invite", "iron", "ivan", "james", "jester", "jimmy", "join", "joseph", "juice", "julius", "july", "justin", "kansas", "karl", "kevin", "kiwi", "ladder", "lake", "laura", "learn", "legacy", "legend", "lesson", "life", "light", "list", "locate", "lopez", "lorenzo", "love", "lunch", "malta", "mammal", "margo", "marion", "mask", "match", "mayday", "meaning", "mercy", "middle", "mike", "mirror", "modest", "morph", "morris", "nadia", "nato", "navy", "needle", "neuron", "never", "newton", "nice", "night", "nissan", "nitro", "nixon", "north", "oberon", "octavia", "ohio", "olga", "open", "opus", "orca", "oval", "owner", "page", "paint", "palma", "parade", "parent", "parole", "paul", "peace", "pearl", "perform", "phoenix", "phrase", "pierre", "pinball", "place", "plate", "plato", "plume", "pogo", "point", "polite", "polka", "poncho", "powder", "prague", "press", "presto", "pretty", "prime", "promo", "quasi", "quest", "quick", "quiz", "quota", "race", "rachel", "raja", "ranger", "region", "remark", "rent", "reward", "rhino", "ribbon", "rider", "road", "rodent", "round", "rubber", "ruby", "rufus", "sabine", "saddle", "sailor", "saint", "salt", "satire", "scale", "scuba", "season", "secure", "shake", "shallow", "shannon", "shave", "shelf", "sherman", "shine", "shirt", "side", "sinatra", "sincere", "size", "slalom", "slow", "small", "snow", "sofia", "song", "sound", "south", "speech", "spell", "spend", "spoon", "stage", "stamp", "stand", "state", "stella", "stick", "sting", "stock", "store", "sunday", "sunset", "support", "sweden", "swing", "tape", "think", "thomas", "tictac", "time", "toast", "tobacco", "tonight", "torch", "torso", "touch", "toyota", "trade", "tribune", "trinity", "triton", "truck", "trust", "type", "under", "unit", "urban", "urgent", "user", "value", "vendor", "venice", "verona", "vibrate", "virgo", "visible", "vista", "vital", "voice", "vortex", "waiter", "watch", "wave", "weather", "wedding", "wheel", "whiskey", "wisdom", "deal", "null", 
"nurse", "quebec", "reserve", "reunion", "roof", "singer", "verbal",
"amen", "ego", "fax", "jet", "job", "rio", "ski", "yes",
}
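As the README notes, the main word list holds exactly 1626 words; the benchmarks above slice it as `wordList[:base]`. The reason for that size is arithmetic: 1626³ ≈ 4.30×10⁹ just exceeds 2³², so every 4-byte group of input can be written as three base-1626 word indices. Below is a minimal standalone sketch of that digit split; the little-endian byte order and the low-digit-first ordering are assumptions of this sketch, not necessarily the package's exact layout.

```go
package main

import "fmt"

const base = 1626 // size of the main word list; 1626^3 > 2^32

// indices splits one 32-bit group into three base-1626 digits.
// Byte order (little-endian) is an assumption for illustration.
func indices(b [4]byte) [3]uint32 {
	x := uint32(b[0]) | uint32(b[1])<<8 | uint32(b[2])<<16 | uint32(b[3])<<24
	var idx [3]uint32
	for i := range idx {
		idx[i] = x % base
		x /= base
	}
	return idx
}

func main() {
	idx := indices([4]byte{0xDE, 0xAD, 0xBE, 0xEF})
	fmt.Println(idx)
	// Recombining the digits recovers the original 32-bit value,
	// which is the property any decoder relies on.
	x := idx[0] + idx[1]*base + idx[2]*base*base
	fmt.Printf("%#x\n", x) // prints 0xefbeadde
}
```

Each digit is an index into `wordList`, and `wordMap` provides the inverse lookup when decoding.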
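The `scanWords` split function in scan_words.go is designed to plug into a `bufio.Scanner`, so that decoding tolerates arbitrary punctuation and separators between dictated words. The sketch below shows that usage in a self-contained program; the function body is copied from scan_words.go, and the `words` helper is added here for illustration.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
	"unicode"
	"unicode/utf8"
)

// scanWords (copied from scan_words.go) is a bufio.SplitFunc that
// returns each run of letters, discarding surrounding non-letters.
func scanWords(data []byte, atEOF bool) (advance int, token []byte, err error) {
	// Skip leading non-letters.
	start := 0
	for width := 0; start < len(data); start += width {
		var r rune
		r, width = utf8.DecodeRune(data[start:])
		if unicode.IsLetter(r) {
			break
		}
	}
	if atEOF && len(data) == 0 {
		return 0, nil, nil
	}
	// Scan until non-letter, marking end of word.
	for width, i := 0, start; i < len(data); i += width {
		var r rune
		r, width = utf8.DecodeRune(data[i:])
		if !unicode.IsLetter(r) {
			return i + width, data[start:i], nil
		}
	}
	// At EOF we may hold a final, non-empty, non-terminated word.
	if atEOF && len(data) > start {
		return len(data), data[start:], nil
	}
	// Request more data.
	return 0, nil, nil
}

// words collects the tokens scanWords produces for s.
func words(s string) []string {
	sc := bufio.NewScanner(strings.NewReader(s))
	sc.Split(scanWords)
	var out []string
	for sc.Scan() {
		out = append(out, sc.Text())
	}
	return out
}

func main() {
	fmt.Println(words("plastic, roger - vincent!")) // prints [plastic roger vincent]
}
```

Because only `unicode.IsLetter` runs are kept, group separators such as `" - "` produced by the encode transformer are stripped automatically on the way back in.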