BNFC-2.9.5/0000755000000000000000000000000007346545000010430 5ustar0000000000000000BNFC-2.9.5/BNFC.cabal0000644000000000000000000002152407346545000012130 0ustar0000000000000000Name: BNFC Version: 2.9.5 cabal-version: 2.0 -- >=2.0 for build-tool-depends: hspec-discover -- Andreas, 2022-12-16, issue #429: -- Putting cabal-version: x.y could mean putting extra-deps: [Cabal-x.y] -- into stack-a.b.c.yaml if that snapshot does not have Cabal at least x.y. -- This will then require a Setup.[l]hs file with stack even for build-type: Simple. -- Shunning this extra effort, we only support stack snapshot that have a sufficient Cabal. build-type: Simple category: Parsing Copyright: (c) Andreas Abel, Jonas Almström Duregård, Krasimir Angelov, Jean-Philippe Bernardy, Björn Bringert, Johan Broberg, Kyle Butt, Paul Callaghan, Grégoire Détrez, Markus Forsberg, Ola Frid, Peter Gammie, Thomas Hallgren, Pascal Hof, Simon Huber, Patrik Jansson, Kristofer Johannisson, Antti-Juhani Kaijanaho, Andreas Lööw, Justin Meiners, Kent Mein, Ulf Norell, Gabriele Paganelli, Michael Pellauer, Michał Radwański, Fabian Ruch, and Aarne Ranta 2002 - 2023. Free software under the BSD 3-clause license. License: BSD3 License-File: LICENSE Maintainer: bnfc-dev@googlegroups.com Homepage: https://bnfc.digitalgrammars.com/ bug-reports: https://github.com/BNFC/bnfc/issues Synopsis: A compiler front-end generator. Description: The BNF Converter is a compiler construction tool generating a compiler front-end from a Labelled BNF grammar. It was originally written to generate Haskell code, but can also be used for generating Agda, C, C++, Java, Ocaml and XML code. . Given a Labelled BNF grammar the tool produces: an abstract syntax as a Haskell, Agda, C, C++, Ocaml module or Java package, a case skeleton for the abstract syntax in the same language, an Alex, Flex, JLex, JFlex, or ocamllex lexer generator file, a Happy, CUP, Bison, Antlr, ocamlyacc or menhir parser generator file, a pretty-printer as a Haskell, Agda, C, C++, Java, or Ocaml module, an XML representation, a LaTeX file containing a readable specification of the language. -- Support range when build with cabal tested-with: GHC == 9.6.2 GHC == 9.4.5 GHC == 9.2.8 GHC == 9.0.2 GHC == 8.10.7 GHC == 8.8.4 GHC == 8.6.5 GHC == 8.4.4 GHC == 8.2.2 GHC == 8.0.2 GHC == 7.10.3 extra-doc-files: README.md CHANGELOG.md -- LICENSE is automatically included extra-source-files: Makefile src/BNFC.cf src/Makefile -- Support range when build with stack stack-9.6.2.yaml stack-9.4.5.yaml stack-9.2.8.yaml stack-9.0.2.yaml stack-8.10.7.yaml stack-8.8.4.yaml stack-8.6.5.yaml stack-8.4.4.yaml stack-8.2.2.yaml source-repository head type: git location: https://github.com/BNFC/bnfc.git subdir: source executable bnfc default-language: Haskell2010 main-is: Main.hs hs-source-dirs: main build-depends: BNFC , base other-modules: -- Generated by cabal Paths_BNFC default-extensions: -- Keep in alphabetical order. LambdaCase -- The library goal is there for internal reasons: -- It gives us a dependency BNFC which we can use to steer the build order. -- The bundling of the library saves us from compiling every module twice: -- once for the executable, and once for unit-tests. -- Now, both can depend on the library. 
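-- For illustration only (commented out, not part of this package description):
-- a downstream project can also reuse BNFC programmatically by depending on
-- this library component, e.g.
--
--   build-depends: BNFC >= 2.9.5
--
-- and importing exposed modules such as BNFC.CF, BNFC.GetCF, and
-- BNFC.Backend.Base.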
library default-language: Haskell2010 build-depends: base >= 4.8 && < 5 , array , containers , deepseq , directory , filepath , mtl >= 2.2.1 , pretty >= 1.1 && < 1.2 , process , string-qq , time , transformers -- for Control.Monad.IO.Class, which is in base >= 4.9 but not below if impl(ghc < 8.0) build-depends: semigroups build-tool-depends: alex:alex, happy:happy hs-source-dirs: src ghc-options: -Wall -fno-warn-dodgy-imports -fno-warn-name-shadowing -- for cabal repl -w doctest : -- -fno-warn-type-defaults if impl(ghc >= 8.0) ghc-options: -Wcompat if impl(ghc >= 9.2) ghc-options: -Wno-incomplete-uni-patterns default-extensions: -- Keep the list of language extensions in sync with its other occurrences. -- Keep in alphabetical order. -- No CPP since doctest doesn't like it. DefaultSignatures DoAndIfThenElse FlexibleContexts FlexibleInstances LambdaCase MultiWayIf NamedFieldPuns OverloadedStrings PatternGuards RecordWildCards ScopedTypeVariables TupleSections TypeOperators -- 2021-01-22 The following autogen section seems wrong, breaks Haskell CI: -- E.g. https://github.com/BNFC/bnfc/runs/1750769442?check_suite_focus=true -- autogen-modules: -- -- Generated by cabal -- Paths_BNFC -- -- Generated by alex -- BNFC.Lex -- -- Generated by happy -- BNFC.Par other-modules: -- Generated by cabal Paths_BNFC exposed-modules: -- Generated from LICENSE BNFC.License -- Generated by bnfc BNFC.Abs BNFC.Lex BNFC.Par BNFC.Print -- BNFC core BNFC.Utils BNFC.CF BNFC.Check.EmptyTypes BNFC.Regex BNFC.TypeChecker BNFC.GetCF BNFC.Lexing BNFC.Backend.Base BNFC.Backend.Common BNFC.Backend.Common.Makefile BNFC.Backend.Common.NamedVariables BNFC.Backend.Common.OOAbstract BNFC.Backend.Common.StrUtils BNFC.Options BNFC.PrettyPrint -- Documentation backends BNFC.Backend.Latex BNFC.Backend.Txt2Tag -- Haskell backend BNFC.Backend.Haskell BNFC.Backend.Haskell.CFtoTemplate BNFC.Backend.Haskell.CFtoAlex3 BNFC.Backend.Haskell.CFtoHappy BNFC.Backend.Haskell.CFtoPrinter BNFC.Backend.Haskell.CFtoAbstract BNFC.Backend.Haskell.CFtoLayout BNFC.Backend.Haskell.MkErrM BNFC.Backend.Haskell.HsOpts BNFC.Backend.Haskell.Utils -- Haskell GADT BNFC.Backend.HaskellGADT BNFC.Backend.HaskellGADT.HaskellGADTCommon BNFC.Backend.HaskellGADT.CFtoTemplateGADT BNFC.Backend.HaskellGADT.CFtoAbstractGADT -- O'Caml backend BNFC.Backend.OCaml BNFC.Backend.OCaml.OCamlUtil BNFC.Backend.OCaml.CFtoOCamlTest BNFC.Backend.OCaml.CFtoOCamlShow BNFC.Backend.OCaml.CFtoOCamlPrinter BNFC.Backend.OCaml.CFtoOCamlTemplate BNFC.Backend.OCaml.CFtoOCamlAbs BNFC.Backend.OCaml.CFtoOCamlYacc BNFC.Backend.OCaml.CFtoOCamlLex -- C backend BNFC.Backend.C BNFC.Backend.C.Common BNFC.Backend.C.CFtoCPrinter BNFC.Backend.C.CFtoCSkel BNFC.Backend.C.CFtoBisonC BNFC.Backend.C.CFtoFlexC BNFC.Backend.C.CFtoCAbs BNFC.Backend.C.RegToFlex -- C++ backend BNFC.Backend.CPP.Common BNFC.Backend.CPP.PrettyPrinter BNFC.Backend.CPP.Makefile BNFC.Backend.CPP.Naming BNFC.Backend.CPP.NoSTL BNFC.Backend.CPP.NoSTL.CFtoCPPAbs -- C++ STL backend BNFC.Backend.CPP.STL BNFC.Backend.CPP.STL.CFtoSTLAbs BNFC.Backend.CPP.STL.STLUtils BNFC.Backend.CPP.STL.CFtoCVisitSkelSTL -- Java backend BNFC.Backend.Java BNFC.Backend.Java.CFtoAntlr4Lexer BNFC.Backend.Java.CFtoAntlr4Parser BNFC.Backend.Java.CFtoJavaAbs15 BNFC.Backend.Java.CFtoAllVisitor BNFC.Backend.Java.CFtoFoldVisitor BNFC.Backend.Java.CFtoAbstractVisitor BNFC.Backend.Java.CFtoComposVisitor BNFC.Backend.Java.CFtoVisitSkel15 BNFC.Backend.Java.CFtoJavaPrinter15 BNFC.Backend.Java.CFtoJLex15 BNFC.Backend.Java.CFtoCup15 BNFC.Backend.Java.RegToJLex 
BNFC.Backend.Java.RegToAntlrLexer BNFC.Backend.Java.Utils -- XML backend BNFC.Backend.XML -- Pygments backend BNFC.Backend.Pygments -- Agda backend BNFC.Backend.Agda ----- Testing -------------------------------------------------------------- test-suite unit-tests default-language: Haskell2010 type: exitcode-stdio-1.0 build-depends: BNFC, -- base-4.9 would be needed for Show1 needed for Show WriterT base, mtl, directory, array, process, filepath, pretty, hspec, QuickCheck >= 2.5, HUnit, temporary, containers, deepseq, string-qq, time if impl(ghc < 8.0) build-depends: semigroups build-tool-depends: alex:alex, happy:happy, hspec-discover:hspec-discover main-is: unit-tests.hs hs-source-dirs: test ghc-options: -W -- TODO: -Wall -fno-warn-name-shadowing if impl(ghc >= 8.0) ghc-options: -Wcompat default-extensions: -- Keep in alphabetical order. LambdaCase OverloadedStrings other-modules: BNFC.CFSpec BNFC.GetCFSpec BNFC.OptionsSpec BNFC.Hspec BNFC.Backend.BaseSpec BNFC.Backend.Common.MakefileSpec BNFC.Backend.Common.NamedVariablesSpec BNFC.Backend.CPP.NoSTLSpec BNFC.Backend.CPP.STLSpec BNFC.Backend.CSpec BNFC.Backend.HaskellSpec BNFC.Backend.HaskellGADTSpec BNFC.Backend.Haskell.CFtoHappySpec BNFC.Backend.JavaSpec BNFC.Backend.LatexSpec BNFC.Backend.OCamlSpec -- Generated by cabal Paths_BNFC autogen-modules: -- Generated by cabal Paths_BNFC BNFC-2.9.5/CHANGELOG.md0000644000000000000000000002563007346545000012247 0ustar0000000000000000# 2.9.5 Andreas Abel , July 2023 * C/C++/Java: escape newline etc. when printing `String` and `Char` literals [[#449](https://github.com/BNFC/bnfc/issues/449)] * Java/ANTLR: unescape `String` and `Char` literals in parser (needs Java ≥ 15) [[#451](https://github.com/BNFC/bnfc/issues/451)] * Java/ANTLR: fix case problem with language names like `C` [[#455](https://github.com/BNFC/bnfc/issues/455)] * Java with line numbers: compatibility with `jflex` ≥ 1.8 [[#453](https://github.com/BNFC/bnfc/issues/453)] * Haskell/GADT: generated `ComposOp.hs` no longer needs `mtl` library [[#438](https://github.com/BNFC/bnfc/pull/438)] * Ocaml: fixed a crash in printer with unbalanced `}` [[#439](https://github.com/BNFC/bnfc/issues/439)] * Ocaml: lex escape sequences in `Char` [[#452](https://github.com/BNFC/bnfc/issues/452)] * Ocaml: removed superfluous `let rec` in the printers for token categories Tested GHC versions: * with `cabal`, GHC 7.10 - 9.6 * with `stack`, GHC 8.2 - 9.6 # 2.9.4.1 Andreas Abel , December 2022 * C/C++: allow space characters in token definitions again [[#431](https://github.com/BNFC/bnfc/issues/431)] (regression in 2.9.0) * installation: get rid of `cabal-doctest` dependency [[#429](https://github.com/BNFC/bnfc/issues/429)] Tested GHC versions: * with `cabal`, GHC 7.10 - 9.4 * with `stack`, GHC 8.2 - 9.4 # 2.9.4 Andreas Abel , February 2022 * LBNF: empty token types are now forbidden [#388] * Agda: support position information via `--functor` [#405] * C/C++: use `size_t` and `-Wsign-conversion` [#391] * C++: repair broken `--line-numbers` [#390], regression in 2.9.2 by [#349] * Haskell: fix a problem with layout stop words and top-level layout [#399,#413] * Haskell: generated test parser can parse several files now [#400] * Java: use _L_`.valueOf()` instead of deprecated `new `_L_`()` for literal classes _L_ [#402] * Ocaml: non-terminals in generated parser are now type-annotated [#407] * Ocaml: sanitize bound variables in `define` * Ocaml/Menhir: update parse error mechanism to Menhir 2021/12/30 [#414] Contributors: * Michał Radwański [#390,#391] * Beatrice 
Vergani [#399,#400] # 2.9.3 Andreas Abel , September 2021 * BNFC now uniformly signs generated files with its version number [#373] * C/C++: include `stdio.h` in parser header files [#381] * C++: fixed parser regression in 2.9.2: missing `#include ` [#377] * Ocaml: lex CR as whitespace [see also #376] * Ocaml: correct position in parse errors [#380] * Ocaml/Haskell: make printer for lists categories total [#383] # 2.9.2 Andreas Abel , June 2021 ## Major improvements * Haskell: layout keywords can now be stacked on the same line [#354], see https://bnfc.readthedocs.io/en/latest/lbnf.html#stacking-layout-keywords * C: new methods `free_*` and `clone_*` to deallocate and clone syntax trees [#348] * C/C++ backends now create reentrant parsers [#349] ## Bug fixes and small improvements * Haskell-GADT: generated code is warning free [#346] * Haskell: fixes in layout preprocessor [#343,#344,#345,#352,#353] * Haskell: print `[Char]` correctly, removed method `prtList` [#359] * Haskell: added missing import [#368], regression introduced in 2.9.1 by [#331] * C: fixed a space leak when parsing from a string in memory [#347] * C: removed errorneous `define`d constructors from `Skeleton.c` * C++: `define`d constructors now reside in `Absyn` [#287] * Java: `define`d constructor now reside in `AbsynDef.java` [#287] * Ocaml: fixed translation of nested `define`d constructors * C/C++/Java: Pre/post/mixfix lists are now printed correctly [#358] * all: `define`d constructors involving list expressions work now [#363] * all: printers render braces on their own line [#366] ## Cosmetical changes * C/C++: instead of `_SYMB_nnn`, more readable token names in lexer & parser # 2.9.1 Andreas Abel , March 2021 ## Main new feature * Haskell: the `--functor` option now produces position-annotated ASTs [#176,#327]. Thanks @Commelina! ## Bug fixes and small improvements * Haskell: fix generated `Makefile` and test parser for `--glr` mode [#340] * Haskell(/GADT): generated modules import `Prelude` explicitly, compatible with `{-# LANGUAGE NoImplicitPrelude #-}` * Haskell: generated code is warning free [#331] * Haskell: generated printer more robust wrt. 
identifier clashes [#337] * Haskell/C: handle token constructors in `define` expressions [#338] * Java/ANTLR: removed more superfluous quotation in lexer character sets [#329] * Ocaml: fix syntax error in generated printer [#330] * LBNF: more sanity checks [#339] * Tested with GHC 9.0 # 2.9.0 Andreas Abel , December 2020 ## Major changes * New license: BSD 3-clause [#308] * LBNF: removed `delimiters` pragma [#308] * Haskell: removed options `--alex1`, `--alex2`, `--sharestrings`, `--profile`, and `--cnf` [#265] * C#: backend removed [#265] ## Bug fixes * LBNF: allow list categories in `entrypoints` pragma [#306] * LBNF: report clashes between token and ordinary categories [#323] * C: `strdup` is not part of C89 standard, `_POSIX_C_SOURCE` required [#318] * C/C++: fixed buffer overrun in `String` literal lexer [#316] * C++: fixed regressions (one of them #310) introduced by #288 * C/C++/OCaml: allow unicode characters in token definitions [#324] * C/OCaml: sanitize grammar file names [#325] * Java/ANTLR: removed superfluous quotation in lexer character sets [#319] # 2.8.4 Andreas Abel , October 2020 * GHC versions 7.10 - 8.10 supported, dropped GHC 7.6 and 7.8 * LBNF: whitespace list separators are now accepted; treated like "" [#70] * `define` pragma now implemented by all maintained backends [#266, #285] * BNFC performs more sanity checks on the grammar, reports errors with source locations [#186, #213, #214] * option `--check` to only perform sanity checks [#286] * Backends now more resilient against keyword and name clashes (e.g. via qualified imports) [#278, #289] * Haskell: new option `--text-token` to use `Data.Text` instead of `String` in the lexer [#167] * Haskell: allow block comment delimiters of any length [#169, #202] * Haskell: define `Err` monad as `Either String` [#273], migration guide at https://bnfc.readthedocs.io/en/v2.8.4/user_guide.html#haskell-backend * Haskell: `IsString` instances for `Ident`-like token types [#192] * C/C++/Java: support multiple block comment forms per grammar [#202] * C++(STL): parser now throws exception on parse error [#288] * C++: fixed quadratic behavior in C++-generated printer (regression in 2.8.2) * Java: escape JFlex special characters [#299] * Java/ANTLR: emit start rules to work around ANTLR issue #2689 [#272] * Ocaml: new flag `--menhir` to generate parser with menhir instead of ocamlyacc * Bug fixes: #163, #169, #196, #212, #235, #256, #264, #269, #274, #276, #277, #279, #282, #283, #290 * Fact-checking and revising LBNF documentation at https://bnfc.readthedocs.io/en/v2.8.4/lbnf.html # 2.8.3 Andreas Abel , August 2019 * GHC 8.8 compatibility * Stack installation supported by provided .yaml files [#198] * Unicode support in lexer, excl. 
C, C++ and Ocaml [#249] * LBNF: support \r (carriage return) and \f (form feed) in token declaration [#257] * LBNF: allow numbered categories (like Foo2) independent of coercions [#210] * Agda: new (experimental) backend, providing bindings to AST/parser/printer of Haskell backend * C: supports now the -l option to give error locations [#238] * C: correct function names in generated skeletons [#239] * C, C++: handle regular expression difference (subtraction) correctly [#237] * Haskell: generates now more fine-grained Makefile that invokes bnfc on changed .cf file * Haskell: use qualified import of AST in generated printer [#128,#228] * Haskell: printer code no longer produces deprecation warning concerning OverlappingInstances [#233] * Haskell/CNF: fixed problem with Any type in generated code [#216] * Haskell/CNF: generated test program now same name (Test) as w/o --cnf * Haskell/GLR: correct module header in .y file [#252] # 2.8.2 Andreas Abel , November 2018 * GHC 8.4 compatibility [#227,#236] * bnfc now puts current timestamp on all created files, even unchanged ones [#219] * bnfc is now more accepting about border-line terminals [#149] * Improved error messages [#144] in particular on undefined categories [#151] * C: Emit correct function prototypes [#185] * C++: Fix buffer overrun in pretty printer [#242] * C++: Fix regression introduced in 2.8 in Skeleton.H [#164] * C++: Replace `%name-prefix` with `%define api.prefix` in bison files [#181] * C++: Fix a bug that appeared if you had a category named "List" * C, C++: Add usage output to the test program [#141] * C, C++: Fix a bug in the parser file when using -p [#172] * C, C++, OCaml, Java: Accept ' or " as comment delimiters [#146] * Haskell: Generated code mostly hlint-warning free * Haskell: Small fixes [#166,#170,#222] * Java: Add an experimental ANTLR4 backend [#155] * Java: Add support for passing line numbers to parser [#217,#224,#226] * OCaml: Reserved symbols now have a higher precedence than predefined tokens as in Haskell * Some updates of the documentation [#211,#223] * And various small fixes [#139,#159,#195,#201,#215] # 2.8.1 Grégoire Détrez , February 2016 * Fix compatibility with GHC 7.10.2 and Alex 3.14 * Fixed #160 # 2.8 Grégoire Détrez , May 2015 * Builds with ghc 7.10.1 * Add support for JFlex (java) * Add an option to generate files in an other directory * Add an experimental option that turns the AST into a parametrized functor (in Haskell only) * New pygment backend to generate syntax highlighters * Bug fixes # 2.7.1 Grégoire Détrez , October 2014 * Generated haskell code is now warning free * Removed unused terminal in happy * Correctly escape backslashes in symbols * Fix problem that was preventing custom tokens to work in OCaml if they conflict with the build-in Ident * BNFC build is also warning free (ghc 7.4.2) * Test programs return non-zerro exit code on parse error # 2.7.0.0 Grégoire Détrez , September 2014 * Add token support for Ocaml * New option parser * Adds an optional argument to change Makefile name * Add a --ghc option to derive Data, Typeable, Generic in Haskell * New online documentation (https://bnfc.readthedocs.org) * Derive ``Read`` for newtype decls in Haskell * New option to get the version number --version * Remove the F# backend * Remove the Java4 backend * New Applicative and Alternative instances to ``Err`` * Remove the coupling between building the parser and the pdf from latex * Improvement to the CNF Backend * Bug fixes #92, #21, #34, #33, #90, #30, #60 # 2.5.0 Grégoire Détrez, 
April 2013 # 2.6.0.3 Grégoire Détrez, January 2013 # 2.4.2.1 Andreas Abel , July 2012 # 2.4.2.0 Thomas Hallgren, September 2010 # 2.4.1.1 Markus Forsberg, September 2010 BNFC-2.9.5/LICENSE0000644000000000000000000000352607346545000011443 0ustar0000000000000000Copyright 2002-2021 Andreas Abel, Jonas Almström Duregård, Krasimir Angelov, Jean-Philippe Bernardy, Björn Bringert, Johan Broberg, Kyle Butt, Paul Callaghan, Grégoire Détrez, Markus Forsberg, Ola Frid, Peter Gammie, Thomas Hallgren, Pascal Hof, Simon Huber, Patrik Jansson, Kristofer Johannisson, Antti-Juhani Kaijanaho, Andreas Lööw, Justin Meiners, Kent Mein, Ulf Norell, Gabriele Paganelli, Michael Pellauer, Fabian Ruch, and Aarne Ranta. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.BNFC-2.9.5/Makefile0000644000000000000000000000450307346545000012072 0ustar0000000000000000# GHC major.minor GHC_VERSION := $(shell ghc --numeric-version | cut -d. -f1-2) BNFC_VERSION=$(shell sed -ne "s/^[Vv]ersion: *\([0-9.]*\).*/\1/p" BNFC.cabal) # Cabal options (to be overwritten from the command line) CABAL_OPTS = CABAL_BUILDDIR_SUFFIX= CABAL_BUILD_OPTS = --enable-tests # --builddir=dist-ghc-$(GHC_VERSION)$(CABAL_BUILDDIR_SUFFIX) CABAL_CONFIGURE_OPTS = --enable-tests CABAL_INSTALL_OPTS = $(CABAL_CONFIGURE_OPTS) $(CABAL_BUILD_OPTS) --overwrite-policy=always CABAL_TEST_OPTS = $(CABAL_BUILD_OPTS) CABAL = cabal $(CABAL_OPTS) CABAL_CONFIGURE = $(CABAL) configure $(CABAL_CONFIGURE_OPTS) CABAL_BUILD = $(CABAL) build $(CABAL_BUILD_OPTS) CABAL_INSTALL = $(CABAL) install $(CABAL_INSTALL_OPTS) CABAL_TEST = $(CABAL) test $(CABAL_TEST_OPTS) # Name for binary distribution (e.g. 
bnfc-2.4.5-linux32) BDIST_TAG=bnfc-${BNFC_VERSION}-$(shell uname -s)-$(shell uname -m) .PHONY: default build install doc test bdist show-version debug weed TAGS default: build cabal-test doctest-quick build: $(CABAL_BUILD) install: $(CABAL_INSTALL) test: build cabal-test doctest cabal-test: $(CABAL_TEST) doctest: build doctest-install doctest-quick doctest-install: cabal install doctest --program-suffix=-${GHC_VERSION} doctest-quick: cabal repl -w doctest-${GHC_VERSION} --repl-options=-Wno-type-defaults # --ghc-options=-Wno-type-defaults needed due to OverloadedStrings. # But it does not get used here, needs to go into cabal file. # see: https://github.com/sol/doctest/issues/390 # --repl-options seems to work, though haddock: $(CABAL) haddock # See https://hackage.haskell.org/package/weeder # weeder can find dead code starting from the .hie files weed: $(CABAL) build --project-file=cabal.project.local weeder TAGS : hasktags --etags . # Binary package (tgz, for linux) bdist: dist/${BDIST_TAG}.tar.gz dist/%.tar.gz: cabal v1-clean cabal v1-install ${CABAL_OPTS} --only-dependencies cabal v1-configure ${CABAL_OPTS} --prefix=/ cabal v1-build ${CABAL_OPTS} cabal v1-copy --destdir=dist/install mkdir dist/$* cp dist/install/bin/bnfc dist/$* cp LICENSE dist/$* tar -cvz -C dist $* > $@ # Print the bnfc version from the cabal file show-version: @echo ${BNFC_VERSION} debug: @echo GHC_VERSION = $(GHC_VERSION) @echo BNFC_VERSION = $(BNFC_VERSION) # EOF BNFC-2.9.5/README.md0000644000000000000000000001375007346545000011715 0ustar0000000000000000[![Hackage version](https://img.shields.io/hackage/v/BNFC.svg?label=Hackage)](http://hackage.haskell.org/package/BNFC) [![BNFC on Stackage Nightly](https://stackage.org/package/BNFC/badge/nightly)](https://stackage.org/nightly/package/BNFC) [![Stackage LTS version](https://www.stackage.org/package/BNFC/badge/lts?label=Stackage)](https://www.stackage.org/package/BNFC) [![Build status](https://github.com/BNFC/bnfc/workflows/Haskell-CI/badge.svg)](https://github.com/BNFC/bnfc/actions) [![Documentation status](https://readthedocs.org/projects/bnfc/badge/?version=latest)](http://bnfc.readthedocs.io/en/latest/?badge=latest) The BNF Converter ================= What is the BNF Converter? -------------------------- The BNF Converter (bnfc) is a compiler construction tool generating a compiler front-end from a Labelled BNF (LBNF) grammar. It is currently able to generate Haskell, Agda, C, C++, Java, and OCaml, as well as XML representations. Given a LBNF grammar the tool produces: - an abstract syntax implementation - a case skeleton for the abstract syntax in the same language - an Alex, Ocamllex, JLex, or Flex lexer generator file - a Happy, Ocamlyacc, Menhir, ANTLR, CUP, or Bison parser generator file - a pretty-printer as a Haskell/Agda/C/C++/Java/Ocaml module - a Latex file containing a readable specification of the language *More information*: http://bnfc.digitalgrammars.com/ Installation ------------ Some binaries are available at https://github.com/BNFC/bnfc/releases. Installation from the Haskell sources is possible via `stack` or `cabal`. ### Installation via stack (recommended) You need a running installation of [stack](https://docs.haskellstack.org/en/stable/install_and_upgrade/). 
To install and run the latest version of [bnfc from stackage](https://www.stackage.org/package/BNFC), enter at the command line: ``` stack install BNFC bnfc --help ``` ### Installation via cabal You need a running installation of a recent version of [GHC](https://www.haskell.org/ghc/) and [Cabal](https://www.haskell.org/cabal/), most easily available via the [GHCup](https://www.haskell.org/ghcup/). To install [bnfc from hackage](https://hackage.haskell.org/package/BNFC), enter at the command line: ``` cabal install BNFC bnfc --help ``` ### Installing the development version To install the [development version of bnfc](https://github.com/BNFC/bnfc) with the latest bugfixes (and regressions ;-)): ``` git clone https://github.com/BNFC/bnfc.git cd bnfc/source ``` and then either ``` cabal install ``` or ``` stack install --stack-yaml stack-8.10.7.yaml ``` (replace `8.10.7` with your GHC version, and if you want to build with your installed GHC then add flag `--system-ghc`). Mini tutorial ------------- - Build a first parser in 5 min (Haskell backend): 1. In a fresh directory, prepare a grammar file `Sum.cf` with the following content: ``` EInt. Exp ::= Integer; EPlus. Exp ::= Exp "+" Integer; ``` 2. Build a parser (in Haskell) with bnfc: ``` bnfc -d -m Sum.cf && make ``` The `make` step needs the Haskell compiler [GHC](https://www.haskell.org/ghc/), the lexer generator [alex](https://www.haskell.org/alex/) and the parser generator [happy](https://www.haskell.org/happy/) (all included in the GHC installation). 3. Inspect the generated files in directory `Sum`. 4. Test the parser. ``` echo "1 + 2 + 3" | Sum/Test ``` - Try the C-family backends. (The prerequisites, GNU C(++) compiler (`gcc` / `g++`), lexer generator `flex` and parser generator `bison`, are usually present): ``` bnfc --c -m -o sum-c Sum.cf && make -C sum-c && echo "1 + 2 + 3" | sum-c/TestSum bnfc --cpp -m -o sum-cpp Sum.cf && make -C sum-cpp && echo "1 + 2 + 3" | sum-cpp/TestSum ``` - Try the other backends: | Option | Backend | | --- | --- | | `--java` | Requires Java, [JLex](https://www.cs.princeton.edu/~appel/modern/java/JLex/) or [JFlex](https://jflex.de/), and [CUP](http://www2.cs.tum.edu/projects/cup/).| | `--java-antlr` | Requires [ANTLR](https://www.antlr.org/).| | `--ocaml` | Requires [OCaml](https://ocaml.org/), `ocamllex` and `ocamlyacc`.| | `--ocaml-menhir` | Uses [menhir](http://gallium.inria.fr/~fpottier/menhir/) instead of `ocamlyacc`.| | `--agda` | Produces [Agda](https://agda-lang.org) bindings to the parser generated for Haskell.| | `--pygments` | Produces a lexer definition for the Python highlighting suite [Pygments](https://pygments.org/).| Documentation ------------- https://bnfc.readthedocs.org/en/latest/ Support ------- You can discuss with us issues around bnfc on our mailing list bnfc-dev@googlegroups.com. For current limitations of bnfc, or to report a new bug, please consult our [issue tracker](https://github.com/BNFC/bnfc/issues). Contribute ---------- - Issue Tracker: https://github.com/BNFC/bnfc/issues - Source Code: https://github.com/BNFC/bnfc - Haskell coding style guide: https://github.com/andreasabel/haskell-style-guide/ - Some pull request etiquette: * Document, document, document! (See style guide) * Include test cases that cover your feature. * Include changelog entry. * More etiquette: E.g. https://gist.github.com/mikepea/863f63d6e37281e329f8 License ------- The project is licensed under the [BSD 3-clause license](https://raw.githubusercontent.com/BNFC/bnfc/master/source/LICENSE). 
BNFC versions until 2.8.4 released under the [GNU General Public License](https://www.gnu.org/licenses/gpl-2.0.html). Example uses of the BNF Converter --------------------------------- In research: - NASA's [OGMA](https://github.com/nasa/ogma) tool uses LBNF for its grammars, e.g. for a [subset of C 99](https://github.com/nasa/ogma/blob/49e78e4d6fa7558d09d36a284648087df48714e4/ogma-language-c/grammar/C.cf). In teaching: - Course [Programming Language Technology](http://www.cse.chalmers.se/edu/course/DAT151/) at Chalmers / Gothenburg University. BNFC-2.9.5/main/0000755000000000000000000000000007346545000011354 5ustar0000000000000000BNFC-2.9.5/main/Main.hs0000644000000000000000000000471707346545000012605 0ustar0000000000000000{- BNF Converter: Main file Copyright (C) 2002-2013 Authors: Jonas Almström Duregård, Krasimir Angelov, Björn Bringert, Johan Broberg, Paul Callaghan, Grégoire Détrez, Markus Forsberg, Ola Frid, Peter Gammie, Thomas Hallgren, Patrik Jansson, Kristofer Johannisson, Antti-Juhani Kaijanaho, Ulf Norell, Michael Pellauer and Aarne Ranta 2002 - 2013. Björn Bringert, Johan Broberg, Markus Forsberg, Peter Gammie, Patrik Jansson, Antti-Juhani Kaijanaho, Ulf Norell, Michael Pellauer, Aarne Ranta -} module Main where import BNFC.Backend.Base import BNFC.Backend.C import BNFC.Backend.CPP.NoSTL import BNFC.Backend.CPP.STL import BNFC.Backend.Haskell import BNFC.Backend.HaskellGADT import BNFC.Backend.Java import BNFC.Backend.Latex import BNFC.Backend.OCaml import BNFC.Backend.Pygments import BNFC.CF (CF) import BNFC.GetCF import BNFC.Options hiding (make, Backend) import BNFC.License ( license ) import Paths_BNFC ( version ) import Data.Version ( showVersion ) import System.Environment (getArgs) import System.Exit (exitFailure, exitSuccess) import System.IO (stderr, hPutStrLn) -- Print an error message and a (short) usage help and exit printUsageErrors :: [String] -> IO () printUsageErrors msg = do mapM_ (hPutStrLn stderr) msg hPutStrLn stderr usage exitFailure main :: IO () main = do args <- getArgs let (mode, warnings) = parseMode args -- Print command-line argument warnings (if any). mapM_ (hPutStrLn stderr) warnings case mode of UsageError e -> printUsageErrors [e] Help -> putStrLn help >> exitSuccess Version -> putStrLn (showVersion version) >> exitSuccess License -> putStr license >> exitSuccess Target options file | target options == TargetCheck -> readFile file >>= parseCF options TargetCheck >> return () | otherwise -> readFile file >>= parseCF options (target options) >>= writeFiles (outDir options) . maketarget (target options) options maketarget :: Target -> SharedOptions -> CF -> Backend maketarget = \case TargetC -> makeC TargetCpp -> makeCppStl TargetCppNoStl -> makeCppNoStl TargetHaskell -> makeHaskell TargetHaskellGadt -> makeHaskellGadt TargetLatex -> makeLatex TargetJava -> makeJava TargetOCaml -> makeOCaml TargetPygments -> makePygments TargetCheck -> error "impossible" BNFC-2.9.5/src/0000755000000000000000000000000007346545000011217 5ustar0000000000000000BNFC-2.9.5/src/BNFC.cf0000644000000000000000000000757607346545000012260 0ustar0000000000000000{- BNF Converter: Language definition Copyright (C) 2004 Author: Markus Forsberg, Michael Pellauer, Aarne Ranta -} -- A Grammar is a sequence of definitions Grammar . Grammar ::= [Def] ; -- separator Def ";" ; -- Note: this still permits a final semicolon. []. [Def] ::= ; (:[]). [Def] ::= Def ; (:). [Def] ::= Def ";" [Def] ; -- extra semicolons allowed _. [Def] ::= ";" [Def] ; -- The rules of the grammar Rule . 
Def ::= Label "." Cat "::=" [Item] ; -- Items Terminal . Item ::= String ; NTerminal . Item ::= Cat ; terminator Item "" ; -- Categories (non-terminals) ListCat . Cat ::= "[" Cat "]" ; IdCat . Cat ::= Identifier ; separator Cat "," ; -- for "entrypoints" -- Labels Id . Label ::= Identifier ; -- AST constructor Wild . Label ::= "_" ; -- No AST constructor (embedding) ListE . Label ::= "[" "]" ; -- Empty list ListCons . Label ::= "(" ":" ")" ; -- Cons ListOne . Label ::= "(" ":" "[" "]" ")" ; -- Singleton list -- Pragmas Comment . Def ::= "comment" String ; -- Line comment Comments . Def ::= "comment" String String ; -- Block comment Internal . Def ::= "internal" Label "." Cat "::=" [Item] ; -- No parsing, AST and printing only Token. Def ::= "token" Identifier Reg ; -- Lexer token PosToken. Def ::= "position" "token" Identifier Reg ; -- Lexer token with position info Entryp. Def ::= "entrypoints" [Cat] ; -- Names of parsers Separator. Def ::= "separator" MinimumSize Cat String ; -- List Terminator. Def ::= "terminator" MinimumSize Cat String ; -- List Delimiters. Def ::= "delimiters" Cat String String Separation MinimumSize; Coercions. Def ::= "coercions" Identifier Integer ; -- Embeddings and parenthesized exprs. Rules. Def ::= "rules" Identifier "::=" [RHS] ; -- Automatically generated lables (e.g. enums) Function. Def ::= "define" Identifier [Arg] "=" Exp; Arg. Arg ::= Identifier ; separator Arg "" ; -- Lists SepNone. Separation ::= ; SepTerm. Separation ::= "terminator" String; SepSepar. Separation ::= "separator" String; -- Layout Layout. Def ::= "layout" [String] ; -- Layout start keywords LayoutStop. Def ::= "layout" "stop" [String] ; -- Layout stop keywords LayoutTop. Def ::= "layout" "toplevel" ; -- Should the toplevel be a block? separator nonempty String "," ; -- Expressions for "define" pragma Cons. Exp ::= Exp1 ":" Exp ; App. Exp1 ::= Identifier [Exp2] ; Var. Exp2 ::= Identifier ; LitInt. Exp2 ::= Integer ; LitChar. Exp2 ::= Char ; LitString. Exp2 ::= String ; LitDouble. Exp2 ::= Double ; List. Exp2 ::= "[" [Exp] "]" ; coercions Exp 2; separator Exp "," ; -- list list separator nonempty Exp2 "" ; -- argument list RHS. RHS ::= [Item] ; separator nonempty RHS "|" ; -- List size condition MNonempty. MinimumSize ::= "nonempty" ; MEmpty. MinimumSize ::= ; -- Regular expressions RAlt. Reg ::= Reg "|" Reg1 ; -- left-associative RMinus. Reg1 ::= Reg1 "-" Reg2 ; -- left-associative RSeq. Reg2 ::= Reg2 Reg3 ; -- left-associative RStar. Reg3 ::= Reg3 "*" ; RPlus. Reg3 ::= Reg3 "+" ; ROpt. Reg3 ::= Reg3 "?" ; REps. Reg3 ::= "eps" ; -- empty string, same as {""} RChar. Reg3 ::= Char ; -- single character RAlts. Reg3 ::= "[" String "]" ; -- list of alternative characters RSeqs. Reg3 ::= "{" String "}" ; -- character sequence RDigit. Reg3 ::= "digit" ; RLetter. Reg3 ::= "letter" ; RUpper. Reg3 ::= "upper" ; RLower. Reg3 ::= "lower" ; RAny. Reg3 ::= "char" ; coercions Reg 3; -- LBNF identifiers position token Identifier letter (letter | digit | '_')* ; -- Comments in BNF source comment "--" ; comment "{-" "-}" ; BNFC-2.9.5/src/BNFC/0000755000000000000000000000000007346545000011727 5ustar0000000000000000BNFC-2.9.5/src/BNFC/Abs.hs0000644000000000000000000000477707346545000013007 0ustar0000000000000000-- File generated by the BNF Converter (bnfc 2.9.3). {-# LANGUAGE PatternSynonyms #-} -- | The abstract syntax of language BNFC. module BNFC.Abs where import Prelude (Char, Double, Integer, String) import qualified Prelude as C ( Eq, Ord, Show, Read , Int, Maybe(..) 
) data Grammar = Grammar [Def] deriving (C.Eq, C.Ord, C.Show, C.Read) data Def = Rule Label Cat [Item] | Comment String | Comments String String | Internal Label Cat [Item] | Token Identifier Reg | PosToken Identifier Reg | Entryp [Cat] | Separator MinimumSize Cat String | Terminator MinimumSize Cat String | Delimiters Cat String String Separation MinimumSize | Coercions Identifier Integer | Rules Identifier [RHS] | Function Identifier [Arg] Exp | Layout [String] | LayoutStop [String] | LayoutTop deriving (C.Eq, C.Ord, C.Show, C.Read) data Item = Terminal String | NTerminal Cat deriving (C.Eq, C.Ord, C.Show, C.Read) data Cat = ListCat Cat | IdCat Identifier deriving (C.Eq, C.Ord, C.Show, C.Read) data Label = Id Identifier | Wild | ListE | ListCons | ListOne deriving (C.Eq, C.Ord, C.Show, C.Read) data Arg = Arg Identifier deriving (C.Eq, C.Ord, C.Show, C.Read) data Separation = SepNone | SepTerm String | SepSepar String deriving (C.Eq, C.Ord, C.Show, C.Read) data Exp = Cons Exp Exp | App Identifier [Exp] | Var Identifier | LitInt Integer | LitChar Char | LitString String | LitDouble Double | List [Exp] deriving (C.Eq, C.Ord, C.Show, C.Read) data RHS = RHS [Item] deriving (C.Eq, C.Ord, C.Show, C.Read) data MinimumSize = MNonempty | MEmpty deriving (C.Eq, C.Ord, C.Show, C.Read) data Reg = RAlt Reg Reg | RMinus Reg Reg | RSeq Reg Reg | RStar Reg | RPlus Reg | ROpt Reg | REps | RChar Char | RAlts String | RSeqs String | RDigit | RLetter | RUpper | RLower | RAny deriving (C.Eq, C.Ord, C.Show, C.Read) newtype Identifier = Identifier ((C.Int, C.Int), String) deriving (C.Eq, C.Ord, C.Show, C.Read) -- | Start position (line, column) of something. type BNFC'Position = C.Maybe (C.Int, C.Int) pattern BNFC'NoPosition :: BNFC'Position pattern BNFC'NoPosition = C.Nothing pattern BNFC'Position :: C.Int -> C.Int -> BNFC'Position pattern BNFC'Position line col = C.Just (line, col) -- | Get the start position of something. class HasPosition a where hasPosition :: a -> BNFC'Position instance HasPosition Identifier where hasPosition (Identifier (p, _)) = C.Just p BNFC-2.9.5/src/BNFC/Backend/0000755000000000000000000000000007346545000013256 5ustar0000000000000000BNFC-2.9.5/src/BNFC/Backend/Agda.hs0000644000000000000000000010362107346545000014451 0ustar0000000000000000{-# LANGUAGE CPP #-} {-# LANGUAGE LambdaCase #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE NoImplicitPrelude #-} {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TupleSections #-} -- #if __GLASGOW_HASKELL__ >= 800 -- {-# OPTIONS_GHC -fno-warn-unused-top-binds #-} -- #endif -- | Agda backend. -- -- Generate bindings to Haskell data types for use in Agda. 
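--
-- The backend produces four Agda files: AST bindings, parser bindings, a
-- small I/O library, and a test-parser main module (see 'makeAgda' below).
-- A minimal driver sketch (hypothetical: 'runAgdaBackend' is not part of
-- BNFC, and the empty string stands in for the time-stamp argument; in
-- practice the Agda backend is selected together with the Haskell backend
-- via the @--agda@ option):
--
-- > import BNFC.Backend.Base (writeFiles)
-- > import BNFC.CF (CF)
-- > import BNFC.Options (SharedOptions, outDir)
-- >
-- > -- Write the generated Agda modules to the configured output directory.
-- > runAgdaBackend :: SharedOptions -> CF -> IO ()
-- > runAgdaBackend opts cf = writeFiles (outDir opts) (makeAgda "" opts cf)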
-- -- Example for abstract syntax generated in Haskell backend: -- -- > module CPP.Abs where -- > -- > import Prelude (Char, Double, Integer, String) -- > import qualified Prelude as C (Eq, Ord, Show, Read) -- > -- > import qualified Data.Text -- > -- > newtype Ident = Ident Data.Text.Text -- > deriving (C.Eq, C.Ord, C.Show, C.Read) -- > -- > data Def = DFun Type Ident [Arg] [Stm] -- > deriving (C.Eq, C.Ord, C.Show, C.Read) -- > -- > data Arg = ADecl Type Ident -- > deriving (C.Eq, C.Ord, C.Show, C.Read) -- > -- > data Stm -- > = SExp Exp -- > | SInit Type Ident Exp -- > | SBlock [Stm] -- > | SIfElse Exp Stm Stm -- > deriving (C.Eq, C.Ord, C.Show, C.Read) -- > -- > data Exp -- > -- > data Type = Type_bool | Type_int | Type_double | Type_void -- > deriving (C.Eq, C.Ord, C.Show, C.Read) -- -- This should be accompanied by the following Agda code: -- -- > module CPP.AST where -- > -- > open import Agda.Builtin.Char using () renaming (Char to Char) -- > open import Agda.Builtin.Float public using () renaming (Float to Double) -- > open import Agda.Builtin.Int public using () renaming (Int to Integer) -- > open import Agda.Builtin.List using () renaming (List to #List) -- > open import Agda.Builtin.String using () renaming -- > ( String to #String -- > ; primStringFromList to #stringFromList -- > ) -- > -- > {-# FOREIGN GHC import Prelude (Char, Double, Integer, String) #-} -- > {-# FOREIGN GHC import qualified Data.Text #-} -- > {-# FOREIGN GHC import qualified CPP.Abs #-} -- > {-# FOREIGN GHC import CPP.Print (printTree) #-} -- > -- > data Ident : Set where -- > ident : #String → Ident -- > -- > {-# COMPILE GHC Ident = data CPP.Abs.Ident (CPP.Abs.Ident) #-} -- > -- > data Def : Set where -- > dFun : (t : Type) (x : Ident) (as : List Arg) (ss : List Stm) → Def -- > -- > {-# COMPILE GHC Def = data CPP.Abs.Def (CPP.Abs.DFun) #-} -- > -- > data Arg : Set where -- > aDecl : (t : Type) (x : Ident) → Arg -- > -- > {-# COMPILE GHC Arg = data CPP.Abs.Arg (CPP.Abs.ADecl) #-} -- > -- > data Stm : Set where -- > sExp : (e : Exp) → Stm -- > sInit : (t : Type) (x : Ident) (e : Exp) → Stm -- > sBlock : (ss : List Stm) → Stm -- > sIfElse : (e : Exp) (s s' : Stm) → Stm -- > -- > {-# COMPILE GHC Stm = data CPP.Abs.Stm -- > ( CPP.Abs.SExp -- > | CPP.Abs.SInit -- > | CPP.Abs.SBlock -- > | CPP.Abs.SIfElse -- > ) #-} -- > -- > data Type : Set where -- > typeBool typeInt typeDouble typeVoid : Type -- > -- > {-# COMPILE GHC Type = data CPP.Abs.Type -- > ( CPP.Abs.Type_bool -- > | CPP.Abs.Type_int -- > | CPP.Abs.Type_double -- > | CPP.Abs.Type_void -- > ) #-} -- > -- > -- Binding the BNFC pretty printers. 
-- > -- > printIdent : Ident → String -- > printIdent (ident s) = String.fromList s -- > -- > postulate -- > printType : Type → String -- > printExp : Exp → String -- > printStm : Stm → String -- > printArg : Arg → String -- > printDef : Def → String -- > printProgram : Program → String -- > -- > {-# COMPILE GHC printType = \ t -> Data.Text.pack (printTree (t :: CPP.Abs.Type)) #-} -- > {-# COMPILE GHC printExp = \ e -> Data.Text.pack (printTree (e :: CPP.Abs.Exp)) #-} -- > {-# COMPILE GHC printStm = \ s -> Data.Text.pack (printTree (s :: CPP.Abs.Stm)) #-} -- > {-# COMPILE GHC printArg = \ a -> Data.Text.pack (printTree (a :: CPP.Abs.Arg)) #-} -- > {-# COMPILE GHC printDef = \ d -> Data.Text.pack (printTree (d :: CPP.Abs.Def)) #-} -- > {-# COMPILE GHC printProgram = \ p -> Data.Text.pack (printTree (p :: CPP.Abs.Program)) #-} module BNFC.Backend.Agda (makeAgda) where import Prelude hiding ((<>)) import Control.Monad.State hiding (when) import Data.Bifunctor (second) import Data.Char import Data.Function (on) import qualified Data.List as List import Data.List.NonEmpty (NonEmpty((:|))) import qualified Data.List.NonEmpty as List1 import Data.Map (Map) import qualified Data.Map as Map import Data.Maybe import Data.Set (Set) import qualified Data.Set as Set import Data.String (IsString) import BNFC.CF import BNFC.Backend.Base (Backend, mkfile) import BNFC.Backend.Haskell.HsOpts import BNFC.Backend.Haskell.CFtoAbstract (DefCfg(..), definedRules') import BNFC.Backend.Haskell.Utils (parserName, catToType, comment) import BNFC.Options (SharedOptions, TokenText(..), tokenText, functor) import BNFC.PrettyPrint import BNFC.Utils (ModuleName, replace, when, table) -- | How to print the types of constructors in Agda? data ConstructorStyle = UnnamedArg -- ^ Simply typed, like @E → S → S → S@. | NamedArg -- ^ Dependently typed, like @(e : E) (s₁ s₂ : S) → S@. -- | Import the builtin numeric types (content of some token categories)? data ImportNumeric = YesImportNumeric -- ^ Import the numeric types. | NoImportNumeric -- ^ Don't import the numeric types. deriving (Eq) -- | Entry-point for Agda backend. makeAgda :: String -- ^ Current time. -> SharedOptions -- ^ Options. -> CF -- ^ Grammar. -> Backend makeAgda time opts cf = do -- Generate AST bindings. mkfile (agdaASTFile opts) comment $ cf2AgdaAST time (functor opts) (tokenText opts) (agdaASTFileM opts) (absFileM opts) (printerFileM opts) cf -- Generate parser bindings. mkfile (agdaParserFile opts) comment $ cf2AgdaParser time (tokenText opts) (agdaParserFileM opts) (agdaASTFileM opts) (errFileM opts) (happyFileM opts) layoutMod parserCats -- Generate an I/O library for the test parser. mkfile (agdaLibFile opts) comment $ agdaLibContents (agdaLibFileM opts) -- Generate test parser. mkfile (agdaMainFile opts) comment $ agdaMainContents (agdaMainFileM opts) (agdaLibFileM opts) (agdaASTFileM opts) (agdaParserFileM opts) (hasLayout cf) (firstEntry cf) where -- | Generate parsers for the following non-terminals. -- This includes parsers for 'CoercCat' and 'ListCat'. parserCats :: [Cat] parserCats = List1.toList $ allEntryPoints cf -- | In case the grammar makes use of layout, pass also the generated layout Haskell module. layoutMod :: Maybe String layoutMod = when (hasLayout cf) $ Just (layoutFileM opts) -- | Generate AST bindings for Agda. -- cf2AgdaAST :: String -- ^ Current time. -> Bool -- ^ Include positions information in the AST? (`--functor`) -> TokenText -> String -- ^ Module name. -> String -- ^ Haskell Abs module name. 
-> String -- ^ Haskell Print module name. -> CF -- ^ Grammar. -> Doc cf2AgdaAST time havePos tokenText mod amod pmod cf = vsep $ [ preamble time "abstract syntax data types" , hsep [ "module", text mod, "where" ] , imports YesImportNumeric False usesPos havePos , when usesString $ hsep [ "String", equals, listT, charT ] , importPragmas tokenText usesPos [ unwords [ "qualified", amod ] , unwords [ pmod, "(printTree)" ] ] , when usesPos defineIntAndPair , when havePos defineBNFCPosition , vsep $ map (uncurry $ prToken amod tokenText) tcats , absyn amod havePos NamedArg dats , definedRules havePos cf -- , allTokenCats printToken tcats -- seem to be included in printerCats , printers amod printerCats , empty -- Make sure we terminate the file with a new line. ] where -- The grammar categories (excluding list, coerce, and token categories): dats :: [Data] dats = cf2data cf -- getAbstractSyntax also includes list categories, which isn't what we need -- The user-defined token categories (including Ident). tcats :: [(TokenCat, Bool)] tcats = (if hasIdent cf then ((catIdent, False) :) else id) [ (wpThing name, b) | TokenReg name b _ <- cfgPragmas cf ] -- Bind printers for the following categories (involves lists and literals). printerCats :: [Cat] printerCats = concat [ map fst (getAbstractSyntax cf) , map TokenCat $ List.nub $ cfgLiterals cf ++ map fst tcats ] usesString = "String" `elem` cfgLiterals cf usesPos = havePos || hasPositionTokens cf defineIntAndPair = vsep [ vcat $ concat [ [ "postulate" ] , map (nest 2 . text) $ table " " [ [ intT, ":", "Set" ] , [ intToNatT, ":", intT, uArrow , natT ] , [ natToIntT, ":", natT, uArrow , intT ] ] ] , vcat $ map (\ s -> hsep [ "{-#", "COMPILE", "GHC", text s, "#-}" ]) $ table " = " [ [ intT, "type Prelude.Int" ] , [ intToNatT, "Prelude.toInteger" ] , [ natToIntT, "Prelude.fromInteger" ] ] , vcat [ "data #Pair (A B : Set) : Set where" , nest 2 $ "#pair : A → B → #Pair A B" ] , "{-# COMPILE GHC #Pair = data (,) ((,)) #-}" ] defineBNFCPosition = hsep [ posT, equals, maybeT, parens intPairT ] -- | Generate parser bindings for Agda. -- cf2AgdaParser :: String -- ^ Current time. -> TokenText -> String -- ^ Module name. -> String -- ^ Agda AST module name. -> String -- ^ Haskell ErrM module name. -> String -- ^ Haskell Par module name. -> Maybe String -- ^ Does the grammar use layout? If yes, Haskell Layout module name. -> [Cat] -- ^ Bind parsers for these non-terminals. -> Doc cf2AgdaParser time tokenText mod astmod emod pmod layoutMod cats = vsep $ [ preamble time "parsers" , hsep [ "module", text mod, "where" ] , imports NoImportNumeric (isJust layoutMod) False False , importCats astmod (List.nub cs) , importPragmas tokenText False $ [ qual emod, pmod] ++ maybeToList (qual <$> layoutMod) , "-- Error monad of BNFC" , prErrM emod , "-- Happy parsers" , parsers tokenText layoutMod cats , empty -- Make sure we terminate the file with a new line. ] where cs :: [String] cs = mapMaybe baseCat cats baseCat :: Cat -> Maybe String baseCat = \case Cat s -> Just s CoercCat s _ -> Just s TokenCat "Char" -> Nothing TokenCat s -> Just s ListCat c -> baseCat c qual m = "qualified " ++ m -- We prefix the Agda types with "#" to not conflict with user-provided nonterminals. uArrow, charT, integerT, doubleT, boolT, listT, maybeT, nothingT, justT, intT, natT, intToNatT, natToIntT, pairT, posT, stringT, stringFromListT :: IsString a => a uArrow = "→" charT = "Char" -- This is the BNFC name for token type Char! integerT = "Integer" -- This is the BNFC name for token type Integer! 
doubleT = "Double" -- This is the BNFC name for token type Double! boolT = "#Bool" listT = "#List" maybeT = "#Maybe" nothingT = "#nothing" justT = "#just" intT = "#Int" -- Int is the type used by the Haskell backend for line and column positions. natT = "#Nat" intToNatT = "#intToNat" natToIntT = "#natToInt" pairT = "#Pair" posT = "#BNFCPosition" stringT = "#String" stringFromListT = "#stringFromList" intPairT :: Doc intPairT = hsep [ pairT, intT, intT ] -- | Preamble: introductory comments. preamble :: String -- ^ Time stamp. -> String -- ^ Brief characterization of file content. -> Doc preamble _time what = vcat $ [ hcat [ "-- Agda bindings for the Haskell ", text what, "." ] -- -- Time stamp does not work well with BNFC's mkfile logic. -- , hcat [ "-- Generated by BNFC at " , text time, "." ] ] -- | Import statements. imports :: ImportNumeric -- ^ Import also numeric types? -> Bool -- ^ If have layout, import booleans. -> Bool -- ^ If have position information, import natural numbers. -> Bool -- ^ Do we need @Maybe@? -> Doc imports numeric layout pos needMaybe = vcat . map prettyImport . concat $ [ when layout [ ("Agda.Builtin.Bool", [], [("Bool", boolT)]) ] , [ ("Agda.Builtin.Char", [charT], [] ) ] , when (numeric == YesImportNumeric) importNumeric , [ ("Agda.Builtin.List", ["[]", "_∷_"], [("List", listT)]) ] , when needMaybe [ ("Agda.Builtin.Maybe", [], [("Maybe", maybeT), ("nothing", nothingT), ("just", justT)]) ] , when pos [ ("Agda.Builtin.Nat", [], [("Nat" , natT )]) ] , [ ("Agda.Builtin.String", [], [("String", stringT), ("primStringFromList", stringFromListT) ]) ] ] where importNumeric :: [(String, [Doc], [(String, Doc)])] importNumeric = [ ("Agda.Builtin.Float public", [], [("Float", doubleT)]) , ("Agda.Builtin.Int public", [], [("Int", integerT)]) , ("Agda.Builtin.Int" , [], [("pos", "#pos")]) ] prettyImport :: (String, [Doc], [(String, Doc)]) -> Doc prettyImport (m, use, ren) | null ren = pre | otherwise = prettyList 2 pre lparen rparen semi $ map (\ (x, d) -> hsep [text x, "to", d ]) ren where pre = hsep $ concat [ [ "open", "import", text m ] , [ "using", parens $ hcat $ punctuate "; " use ] , [ "renaming" | not (null ren) ] ] -- | Import Agda AST. -- importCats :: String -- ^ Module for Agda AST. -> [String] -- ^ Agda data types to import. -> Doc importCats m cs = prettyList 2 pre lparen rparen semi $ map text cs where pre = hsep [ "open", "import", text m, "using" ] -- | Import pragmas. -- -- >>> importPragmas ByteStringToken False ["qualified Foo.Abs", "Foo.Print (printTree)", "qualified Foo.Layout"] -- {-# FOREIGN GHC import Prelude (Bool, Char, Double, Integer, String, (.)) #-} -- {-# FOREIGN GHC import qualified Data.ByteString.Char8 as BS #-} -- {-# FOREIGN GHC import qualified Data.Text #-} -- {-# FOREIGN GHC import qualified Foo.Abs #-} -- {-# FOREIGN GHC import Foo.Print (printTree) #-} -- {-# FOREIGN GHC import qualified Foo.Layout #-} -- importPragmas :: TokenText -> Bool -- ^ Do we use position information? -> [String] -- ^ Haskell modules to import. 
-> Doc importPragmas tokenText pos mods = vcat $ map imp $ base ++ mods where imp s = hsep [ "{-#", "FOREIGN", "GHC", "import", text s, "#-}" ] base = concat [ [ "Prelude (" ++ preludeImports ++ ")" ] , when pos [ "qualified Prelude" ] , case tokenText of TextToken -> [] StringToken -> [] ByteStringToken -> [ "qualified Data.ByteString.Char8 as BS" ] , [ "qualified Data.Text" ] ] preludeImports = List.intercalate ", " $ concat [ [ "Bool", "Char", "Double", "Integer", "String", "(.)" ] , when pos [ "error" ] -- used unqualified by the GHC backend for postulates ] -- * Bindings for the AST. -- | Pretty-print types for token types similar to @Ident@. prToken :: ModuleName -> TokenText -> String -> Bool -> Doc prToken amod tokenText t pos = vsep [ if pos then vcat -- can't use prettyData as it excepts a Cat for the type [ hsep [ "data", text t, ":", "Set", "where" ] , nest 2 $ hsep [ text $ agdaLower t , ":" , pairT , parens intPairT , prettyCat typ , uArrow, text t ] ] else prettyData UnnamedArg t [(agdaLower t, [ typ ])] , pragmaData amod t t [(t, [])] ] where typ = case tokenText of TextToken -> Cat "#String" ByteStringToken -> Cat "#String" StringToken -> ListCat (Cat "Char") -- | Pretty-print abstract syntax definition in Agda syntax. -- -- We print this as one big mutual block rather than doing a -- strongly-connected component analysis and topological -- sort by dependency order. -- absyn :: ModuleName -> Bool -> ConstructorStyle -> [Data] -> Doc absyn _amod _havePos _style [] = empty absyn amod havePos style ds = vsep . ("mutual" :) . concatMap (map (nest 2) . prData amod havePos style) $ ds -- | Pretty-print Agda data types and pragmas for AST. -- -- >>> vsep $ prData "Foo" False UnnamedArg (Cat "Nat", [ ("Zero", []), ("Suc", [Cat "Nat"]) ]) -- data Nat : Set where -- zero : Nat -- suc : Nat → Nat -- -- {-# COMPILE GHC Nat = data Foo.Nat -- ( Foo.Zero -- | Foo.Suc -- ) #-} -- -- >>> vsep $ prData "Foo" True UnnamedArg (Cat "Nat", [ ("Zero", []), ("Suc", [Cat "Nat"]) ]) -- Nat = Nat' #BNFCPosition -- -- data Nat' Pos# : Set where -- zero : Pos# → Nat' Pos# -- suc : Pos# → Nat' Pos# → Nat' Pos# -- -- {-# COMPILE GHC Nat' = data Foo.Nat' -- ( Foo.Zero -- | Foo.Suc -- ) #-} -- -- >>> vsep $ prData "Bar" False UnnamedArg (Cat "C", [ ("C1", []), ("C2", [Cat "C"]) ]) -- data C : Set where -- c1 : C -- c2 : C → C -- -- {-# COMPILE GHC C = data Bar.C -- ( Bar.C1 -- | Bar.C2 -- ) #-} -- -- We return a list of 'Doc' rather than a single 'Doc' since want -- to intersperse empty lines and indent it later. -- If we intersperse the empty line(s) here to get a single 'Doc', -- we will produce whitespace lines after applying 'nest'. -- This is a bit of a design problem of the pretty print library: -- there is no native concept of a blank line; @text ""@ is a bad hack. -- prData :: ModuleName -> Bool -> ConstructorStyle -> Data -> [Doc] prData amod True style (Cat d, cs) = concat [ [ hsep [ text d, equals, text (sanitize primed), posT ] ] , prData' amod style (addP d) primed cs' ] where -- Replace _ by - in Agda names to avoid illegal names like Foo_'. sanitize = replace '_' '-' primed = d ++ "'" param = "Pos#" addP c = concat [sanitize c, "' ", param] cs' = map (second $ \ cats -> Cat param : map addParam cats) cs addParam :: Cat -> Cat addParam = \case Cat c -> Cat $ addP c ListCat c -> ListCat $ addParam c c -> c prData amod False style (Cat d, cs) = prData' amod style d d cs prData _ _ _ (c , _ ) = error $ "prData: unexpected category " ++ prettyShow c -- | Pretty-print Agda data types and pragmas. 
-- -- >>> vsep $ prData' "ErrM" UnnamedArg "Err A" "Err_" [ ("Ok", [Cat "A"]), ("Bad", [ListCat $ Cat "Char"]) ] -- data Err A : Set where -- ok : A → Err A -- bad : #List Char → Err A -- -- {-# COMPILE GHC Err = data ErrM.Err_ -- ( ErrM.Ok -- | ErrM.Bad -- ) #-} -- prData' :: ModuleName -> ConstructorStyle -> String -> String -> [(Fun, [Cat])] -> [Doc] prData' amod style d haskellDataName cs = [ prettyData style d cs , pragmaData amod (head $ words d) haskellDataName cs ] -- | Pretty-print Agda binding for the BNFC Err monad. -- -- Note: we use "Err" here since a category "Err" would also conflict -- with BNFC's error monad in the Haskell backend. prErrM :: ModuleName -> Doc prErrM emod = vsep $ prData' emod UnnamedArg "Err A" "Err" [ ("Ok" , [Cat "A"]) , ("Bad", [ListCat $ Cat "Char"]) ] -- | Pretty-print AST definition in Agda syntax. -- -- >>> prettyData UnnamedArg "Nat" [ ("zero", []), ("suc", [Cat "Nat"]) ] -- data Nat : Set where -- zero : Nat -- suc : Nat → Nat -- -- >>> prettyData UnnamedArg "C" [ ("C1", []), ("C2", [Cat "C"]) ] -- data C : Set where -- c1 : C -- c2 : C → C -- -- >>> :{ -- prettyData UnnamedArg "Stm" -- [ ("block", [ListCat $ Cat "Stm"]) -- , ("while", [Cat "Exp", Cat "Stm"]) -- ] -- :} -- data Stm : Set where -- block : #List Stm → Stm -- while : Exp → Stm → Stm -- -- >>> :{ -- prettyData NamedArg "Stm" -- [ ("block", [ListCat $ Cat "Stm"]) -- , ("if", [Cat "Exp", Cat "Stm", Cat "Stm"]) -- ] -- :} -- data Stm : Set where -- block : (ss : #List Stm) → Stm -- if : (e : Exp) (s₁ s₂ : Stm) → Stm -- prettyData :: ConstructorStyle -> String -> [(Fun, [Cat])] -> Doc prettyData style d cs = vcat $ concat [ [ hsep [ "data", text d, colon, "Set", "where" ] ] , mkTSTable $ map (prettyConstructor style d) cs ] where mkTSTable :: [(Doc,Doc)] -> [Doc] mkTSTable = map (nest 2 . text) . table " : " . map mkRow where mkRow (c,t) = [ render c, render t ] -- | Generate pragmas to bind Haskell AST to Agda. -- -- >>> pragmaData "Foo" "Empty" "Bar" [] -- {-# COMPILE GHC Empty = data Foo.Bar () #-} -- -- >>> pragmaData "Foo" "Nat" "Natty" [ ("zero", []), ("suc", [Cat "Nat"]) ] -- {-# COMPILE GHC Nat = data Foo.Natty -- ( Foo.zero -- | Foo.suc -- ) #-} -- pragmaData :: ModuleName -> String -> String -> [(Fun, [Cat])] -> Doc pragmaData amod d haskellDataName cs = prettyList 2 pre lparen (rparen <+> "#-}") "|" $ map (prettyFun amod . fst) cs where pre = hsep [ "{-#", "COMPILE", "GHC", text d, equals, "data" , text $ concat [ amod, ".", haskellDataName ] ] -- | Pretty-print since rule as Agda constructor declaration. -- -- >>> prettyConstructor UnnamedArg "D" ("c", [Cat "A", Cat "B", Cat "C"]) -- (c,A → B → C → D) -- >>> prettyConstructor undefined "D" ("c1", []) -- (c1,D) -- >>> prettyConstructor NamedArg "Stm" ("SIf", map Cat ["Exp", "Stm", "Stm"]) -- (sIf,(e : Exp) (s₁ s₂ : Stm) → Stm) -- prettyConstructor :: ConstructorStyle -> String -> (Fun,[Cat]) -> (Doc,Doc) prettyConstructor style d (c, as) = (prettyCon c,) $ if null as then text d else hsep $ [ prettyConstructorArgs style as , uArrow , text d ] -- | Print the constructor argument telescope. 
-- -- >>> prettyConstructorArgs UnnamedArg [Cat "A", Cat "B", Cat "C"] -- A → B → C -- -- >>> prettyConstructorArgs NamedArg (map Cat ["Exp", "Stm", "Stm"]) -- (e : Exp) (s₁ s₂ : Stm) -- prettyConstructorArgs :: ConstructorStyle -> [Cat] -> Doc prettyConstructorArgs style as = case style of UnnamedArg -> hsep $ List.intersperse uArrow ts NamedArg -> hsep $ map (\ (x :| xs, t) -> parens (hsep [x, hsep xs, colon, t])) tel where ts = map prettyCat as ns = map (text . subscript) $ numberUniquely $ map nameSuggestion as tel = aggregateOn (render . snd) $ zip ns ts deltaSubscript = ord '₀' - ord '0' -- exploiting that '0' comes before '₀' in character table subscript (m, s) = maybe s (\ n -> s ++ map (chr . (deltaSubscript +) . ord) (show n)) m -- Aggregate consecutive arguments of the same type. aggregateOn :: Eq c => ((a,b) -> c) -> [(a,b)] -> [(List1 a,b)] aggregateOn f = map (\ p -> (List1.map fst p, snd (List1.head p))) . List1.groupBy ((==) `on` f) -- . List1.groupWith f -- Too recent, fails stack-7.8 install -- | Suggest the name of a bound variable of the given category. -- -- >>> map nameSuggestion [ ListCat (Cat "Stm"), TokenCat "Var", Cat "Exp" ] -- ["ss","x","e"] -- nameSuggestion :: Cat -> String nameSuggestion = \case ListCat c -> nameSuggestion c ++ "s" CoercCat d _ -> nameFor d Cat d -> nameFor d TokenCat{} -> "x" -- | Suggest the name of a bound variable of the given base category. -- -- >>> map nameFor ["Stm","ABC","#String"] -- ["s","a","s"] -- nameFor :: String -> String nameFor d = [ toLower $ head $ dropWhile (== '#') d ] -- | Number duplicate elements in a list consecutively, starting with 1. -- -- >>> numberUniquely ["a", "b", "a", "a", "c", "b"] -- [(Just 1,"a"),(Just 1,"b"),(Just 2,"a"),(Just 3,"a"),(Nothing,"c"),(Just 2,"b")] -- numberUniquely :: forall a. Ord a => [a] -> [(Maybe Int, a)] numberUniquely as = mapM step as `evalState` Map.empty where -- First pass: determine frequency of each element. counts :: Frequency a counts = foldl (flip incr) Map.empty as -- Second pass: consecutively number elements with frequency > 1. step :: a -> State (Frequency a) (Maybe Int, a) step a = do -- If the element has a unique occurrence, we do not need to number it. let n = Map.findWithDefault (error "numberUniquelyWith") a counts if n == 1 then return (Nothing, a) else do -- Otherwise, increase the counter for that element and number it -- with the new value. modify $ incr a (,a) . Map.lookup a <$> get -- | A frequency map. -- -- NB: this type synonym should be local to 'numberUniquely', but -- Haskell lacks local type synonyms. -- https://gitlab.haskell.org/ghc/ghc/issues/4020 type Frequency a = Map a Int -- | Increase the frequency of the given key. incr :: Ord a => a -> Frequency a -> Frequency a incr = Map.alter $ maybe (Just 1) (Just . succ) -- * Generate the defined constructors. agdaDefCfg :: DefCfg agdaDefCfg = DefCfg { sanitizeName = agdaLower , hasType = ":" , arrow = uArrow , lambda = "λ" , cons = "_∷_" , convTok = agdaLower , convLitInt = \ e -> App "#pos" dummyType [e] , polymorphism = (BaseT "{a : Set}" :) } -- | Generate Haskell code for the @define@d constructors. definedRules :: Bool -> CF -> Doc definedRules havePos = vsep . definedRules' agdaDefCfg havePos -- * Generate bindings for the pretty printers -- UNUSED -- -- | Generate Agda code to print tokens. 
-- -- -- -- >>> printToken "Ident" -- -- printIdent : Ident → #String -- -- printIdent (ident s) = #stringFromList s -- -- -- printToken :: String -> Doc -- printToken t = vcat -- [ hsep [ f, colon, text t, uArrow, stringT ] -- , hsep [ f, lparen <> c <+> "s" <> rparen, equals, stringFromListT, "s" ] -- ] -- where -- f = text $ "print" ++ t -- c = text $ agdaLower t -- | Generate Agda bindings to printers for AST. -- -- >>> printers "Foo" $ map Cat [ "Exp", "Stm" ] -- -- Binding the pretty printers. -- -- postulate -- printExp : Exp → #String -- printStm : Stm → #String -- -- {-# COMPILE GHC printExp = \ e -> Data.Text.pack (printTree (e :: Foo.Exp)) #-} -- {-# COMPILE GHC printStm = \ s -> Data.Text.pack (printTree (s :: Foo.Stm)) #-} -- printers :: ModuleName -> [Cat] -> Doc printers _amod [] = empty printers amod cats = vsep [ "-- Binding the pretty printers." , vcat $ "postulate" : mkTSTable (map (prettyTySig) cats) , vcat $ map pragmaBind cats ] where prettyTySig c = (agdaPrinterName c, hsep [ prettyCat c, uArrow, stringT ]) pragmaBind c = hsep [ "{-#", "COMPILE", "GHC", agdaPrinterName c, equals, "\\", y, "->" , "Data.Text.pack", parens ("printTree" <+> parens (y <+> "::" <+> t)), "#-}" ] where y = text $ nameSuggestion c t = catToType ((text amod <> text ".") <>) empty c -- Removes CoercCat. -- | Bind happy parsers. -- -- >>> parsers StringToken Nothing [ListCat (CoercCat "Exp" 2)] -- postulate -- parseListExp2 : #String → Err (#List Exp) -- -- {-# COMPILE GHC parseListExp2 = pListExp2 . myLexer . Data.Text.unpack #-} -- -- >>> parsers TextToken Nothing [ListCat (CoercCat "Exp" 2)] -- postulate -- parseListExp2 : #String → Err (#List Exp) -- -- {-# COMPILE GHC parseListExp2 = pListExp2 . myLexer #-} -- parsers :: TokenText -> Maybe String -- ^ Grammar uses layout? If yes, Haskell layout module name. -> [Cat] -- ^ Bind parsers for these non-terminals. -> Doc parsers tokenText layoutMod cats = vcat ("postulate" : map (nest 2 . prettyTySig) cats) $++$ vcat (map pragmaBind cats) where -- When grammar uses layout, we parametrize the parser by a boolean @tl@ -- that indicates whether top layout should be used for this parser. -- Also, we add @resolveLayout tl@ to the pipeline after lexing. prettyTySig :: Cat -> Doc prettyTySig c = hsep . concat $ [ [ agdaParserName c, colon ] , when layout [ boolT, uArrow ] , [ stringT, uArrow, "Err", prettyCatParens c ] ] pragmaBind :: Cat -> Doc pragmaBind c = hsep . concat $ [ [ "{-#", "COMPILE", "GHC", agdaParserName c, equals ] , when layout [ "\\", "tl", "->" ] , [ parserName c, "." ] , when layout [ hcat [ text lmod, ".", "resolveLayout" ], "tl", "." ] , [ "myLexer" ] , case tokenText of -- Agda's String is Haskell's Data.Text TextToken -> [] StringToken -> [ ".", "Data.Text.unpack" ] ByteStringToken -> [ ".", "BS.pack", ".", "Data.Text.unpack" ] , [ "#-}" ] ] layout :: Bool layout = isJust layoutMod lmod :: String lmod = fromJust layoutMod -- * Auxiliary functions -- UNUSED -- -- | Concatenate documents created from token categories, -- -- separated by blank lines. -- -- -- -- >>> allTokenCats text ["T", "U"] -- -- T -- -- -- -- U -- allTokenCats :: (TokenCat -> Doc) -> [TokenCat] -> Doc -- allTokenCats f = vsep . map f -- | Pretty-print a rule name for Haskell. prettyFun :: ModuleName -> Fun -> Doc prettyFun amod c = text $ concat [ amod, ".", c ] -- | Pretty-print a rule name for Agda. prettyCon :: Fun -> Doc prettyCon = text . agdaLower -- | Turn identifier to non-capital identifier. 
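-- (It also replaces @_@ by @-@ and appends a prime to Agda keywords.)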
-- Needed, since in Agda a constructor cannot overload a data type -- with the same name. -- -- >>> map agdaLower ["SFun","foo","ABC","HelloWorld","module","Type_int","C1"] -- ["sFun","foo","aBC","helloWorld","module'","type-int","c1"] -- agdaLower :: String -> String agdaLower = avoidKeywords . updateHead toLower . replace '_' '-' -- WAS: replace '_' '\'' . BNFC.Utils.mkName agdaKeywords BNFC.Utils.MixedCase where updateHead _f [] = [] updateHead f (x:xs) = f x : xs avoidKeywords s | s `Set.member` agdaKeywords = s ++ "\'" | otherwise = s -- | A list of Agda keywords that would clash with generated names. agdaKeywords :: Set String agdaKeywords = Set.fromList $ words "abstract codata coinductive constructor data do eta-equality field forall hiding import in inductive infix infixl infixr instance let macro module mutual no-eta-equality open overlap pattern postulate primitive private public quote quoteContext quoteGoal quoteTerm record renaming rewrite Set syntax tactic unquote unquoteDecl unquoteDef using variable where with" -- | Name of Agda parser binding (mentions precedence). -- -- >>> agdaParserName $ ListCat $ CoercCat "Exp" 2 -- parseListExp2 -- agdaParserName :: Cat -> Doc agdaParserName c = text $ "parse" ++ identCat c -- | Name of Agda printer binding (does not mention precedence). -- -- >>> agdaPrinterName $ ListCat $ CoercCat "Exp" 2 -- printListExp -- agdaPrinterName :: Cat -> Doc agdaPrinterName c = text $ "print" ++ identCat (normCat c) -- | Pretty-print a category as Agda type. -- Ignores precedence. -- -- >>> prettyCat $ ListCat (CoercCat "Exp" 2) -- #List Exp -- prettyCat :: Cat -> Doc prettyCat = \case Cat s -> text s TokenCat s -> text s CoercCat s _ -> text s ListCat c -> listT <+> prettyCatParens c -- | Pretty-print category in parentheses, if 'compositeCat'. prettyCatParens :: Cat -> Doc prettyCatParens c = parensIf (compositeCat c) (prettyCat c) -- | Is the Agda type corresponding to 'Cat' composite (or atomic)? compositeCat :: Cat -> Bool compositeCat = \case Cat c -> any isSpace c ListCat{} -> True _ -> False -- * Agda stub to test parser -- | Write a simple IO library with fixed contents. agdaLibContents :: String -- ^ Name of Agda library module. -> Doc -- ^ Contents of Agda library module. agdaLibContents mod = vcat [ "-- Basic I/O library." , "" , "module" <+> text mod <+> "where" , "" , "open import Agda.Builtin.IO public using (IO)" , "open import Agda.Builtin.List public using (List; []; _∷_)" , "open import Agda.Builtin.String public using (String)" , " renaming (primStringFromList to stringFromList)" , "open import Agda.Builtin.Unit public using (⊤)" , "" , "-- I/O monad." , "" , "postulate" , " return : ∀ {a} {A : Set a} → A → IO A" , " _>>=_ : ∀ {a b} {A : Set a} {B : Set b} → IO A → (A → IO B) → IO B" , "" , "{-# COMPILE GHC return = \\ _ _ -> return #-}" , "{-# COMPILE GHC _>>=_ = \\ _ _ _ _ -> (>>=) #-}" , "" , "infixl 1 _>>=_ _>>_" , "" , "_>>_ : ∀ {b} {B : Set b} → IO ⊤ → IO B → IO B" , "_>>_ = λ m m' → m >>= λ _ → m'" , "" , "-- Co-bind and functoriality." , "" , "infixr 1 _=<<_ _<$>_" , "" , "_=<<_ : ∀ {a b} {A : Set a} {B : Set b} → (A → IO B) → IO A → IO B" , "k =<< m = m >>= k" , "" , "_<$>_ : ∀ {a b} {A : Set a} {B : Set b} → (A → B) → IO A → IO B" , "f <$> m = do" , " a ← m" , " return (f a)" , "" , "-- Binding basic I/O functionality." 
, "" , "{-# FOREIGN GHC import qualified Data.Text #-}" , "{-# FOREIGN GHC import qualified Data.Text.IO #-}" , "{-# FOREIGN GHC import qualified System.Exit #-}" , "{-# FOREIGN GHC import qualified System.Environment #-}" , "{-# FOREIGN GHC import qualified System.IO #-}" , "" , "postulate" , " exitFailure : ∀{a} {A : Set a} → IO A" , " getArgs : IO (List String)" , " putStrLn : String → IO ⊤" , " readFiniteFile : String → IO String" , "" , "{-# COMPILE GHC exitFailure = \\ _ _ -> System.Exit.exitFailure #-}" , "{-# COMPILE GHC getArgs = fmap (map Data.Text.pack) System.Environment.getArgs #-}" , "{-# COMPILE GHC putStrLn = System.IO.putStrLn . Data.Text.unpack #-}" , "{-# COMPILE GHC readFiniteFile = Data.Text.IO.readFile . Data.Text.unpack #-}" ] agdaMainContents :: String -- ^ Name of Agda main module. -> String -- ^ Name of Agda library module. -> String -- ^ Name of Agda AST module. -> String -- ^ Name of Agda parser module. -> Bool -- ^ Is the grammar using layout? -> Cat -- ^ Category to parse. -> Doc -- ^ Contents of Agda main module. agdaMainContents mod lmod amod pmod layout c = vcat [ "-- Test for Agda binding of parser. Requires Agda >= 2.5.4." , "" , "module" <+> text mod <+> "where" , when layout "\nopen import Agda.Builtin.Bool using (true)" , "open import" <+> text lmod , "open import" <+> text amod <+> "using" <+> parens (printer) , "open import" <+> text pmod <+> "using" <+> parens ("Err;" <+> parser) , "" , "main : IO ⊤" , "main = do" , " file ∷ [] ← getArgs where" , " _ → do" , " putStrLn \"usage: Main \"" , " exitFailure" , " Err.ok result ←" <+> parseFun <+> "<$> readFiniteFile file where" , " Err.bad msg → do" , " putStrLn \"PARSE FAILED\\n\"" , " putStrLn (stringFromList msg)" , " exitFailure" , " putStrLn \"PARSE SUCCESSFUL\\n\"" , " putStrLn" <+> parens (printer <+> "result") ] where printer = agdaPrinterName c parser = agdaParserName c parseFun = hsep . concat $ [ [parser], when layout ["true"] ] -- Permit use of top-level layout, if any. BNFC-2.9.5/src/BNFC/Backend/Base.hs0000644000000000000000000001056607346545000014474 0ustar0000000000000000{-# LANGUAGE FlexibleInstances #-} -- | Backend base module. -- -- Defines the type of the backend and some useful functions. module BNFC.Backend.Base ( Backend , MkFiles , GeneratedFile(..) , MakeComment , execBackend , mkfile , liftIO , writeFiles ) where import Control.Arrow ( (&&&) ) import Control.Monad.IO.Class ( liftIO ) import Control.Monad.Writer ( WriterT, execWriterT, tell ) import Data.Char ( isSpace ) import Data.Foldable ( forM_ ) import Data.Function ( on ) import qualified Data.List as List import System.Directory ( createDirectoryIfMissing ) import System.FilePath ( dropFileName, takeExtension, () ) import BNFC.Options ( versionString ) import BNFC.PrettyPrint import BNFC.Utils ( writeFileRep ) -- | Stamp BNFC puts in the header of each generated file. msgGenerated :: String msgGenerated = "File generated by the BNF Converter (bnfc " ++ versionString ++ ")." -- | Define the type of the backend functions. For more purity, instead of -- having each backend writing the generated files to disk, they return a list -- of pairs containing the (relative) file path and the file content. This -- allow for 1) easier testing, 2) implement common options like changing the -- output dir or providing a diff instead of overwritting the files on a -- highter level and 3) more purity. -- -- The writer monad provides a more convenient API to generate the list. 
Note -- that we still use the `IO` monad for now because some backends insist on -- printing stuff to the screen while generating the files. type MkFiles a = WriterT [GeneratedFile] IO a type Backend = MkFiles () -- | A result file of a backend. data GeneratedFile = GeneratedFile { fileName :: FilePath -- ^ Name of the file to write. , makeComment :: MakeComment -- ^ Function to generate a comment. -- Used to prefix the file with a stamp ("Generated by BNFC"). , fileContent :: String -- ^ Content of the file to write. } -- quick-and-dirty instances for HSpec test-suite instance Show GeneratedFile where show (GeneratedFile x _ y) = unwords [ "GeneratedFile", show x, "_", show y ] instance Eq GeneratedFile where (==) = (==) `on` fileName &&& fileContent -- | Type of comment-generating functions. type MakeComment = String -> String -- | Named after execWriter, this function execute the given backend -- and returns the generated file paths and contents. execBackend :: MkFiles () -> IO [GeneratedFile] execBackend = execWriterT -- | A specialized version of `tell` that adds a file and its content to the -- list of generated files. mkfile :: FileContent c => FilePath -> MakeComment -> c -> MkFiles () mkfile path f content = tell [GeneratedFile path f (fileContentToString content)] -- | While we are moving to generating `Text.PrettyPrint.Doc` instead of `String`, -- it is nice to be able to use both as argument to 'mkfile'. -- So we do some typeclass magic. class FileContent a where fileContentToString :: a -> String instance FileContent [Char] where fileContentToString = deleteTrailingWhiteSpace instance FileContent Doc where fileContentToString = deleteTrailingWhiteSpace . render deleteTrailingWhiteSpace :: String -> String deleteTrailingWhiteSpace = unlines . map (List.dropWhileEnd isSpace) . lines -- | Write a set of files to disk. the first argument is the root directory -- inside which all the generated files will be written. This root directory -- and sub-directories will be created as needed (ex: if the files contains a -- @a\/b\/file.txt@, `writeFiles` will create the directories @$ROOT\/a@ and -- @$ROOT\/a\/b@) writeFiles :: FilePath -> MkFiles () -> IO () writeFiles root fw = do -- First we check that the directory exists. fb <- execBackend fw createDirectoryIfMissing True root forM_ fb $ \ (GeneratedFile path mkComment content) -> do createDirectoryIfMissing True (root dropFileName path) -- Then we write the files, adding the BNFC stamp as header. writeFileRep (root path) $ -- TODO: the following is a hack, make this more systematic: if takeExtension path == ".txt" then -- Sign at the end since e.g. txt2tags cannot handle comments at beginning of file. unlines [ content, mkComment msgGenerated ] else -- Sign at the beginning (JFlex cannot handle comments in general, only at beginning). 
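          -- For instance, with the C backend's block-comment function the
          -- generated file starts with something like
          --   /* File generated by the BNF Converter (bnfc 2.9.5). */
          -- followed by a blank line and then the actual content.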
mkComment msgGenerated ++ "\n\n" ++ content BNFC-2.9.5/src/BNFC/Backend/C.hs0000644000000000000000000003430407346545000014000 0ustar0000000000000000{-# LANGUAGE NoImplicitPrelude #-} {-# LANGUAGE OverloadedStrings #-} {- BNF Converter: C Main file Copyright (C) 2004 Author: Michael Pellauer Copyright (C) 2020 Andreas Abel -} module BNFC.Backend.C (makeC, bufferC, bufferH, comment, testfileHeader) where import Prelude hiding ((<>)) import Data.Foldable (toList) import qualified Data.Map as Map import BNFC.Utils import BNFC.CF import BNFC.Options import BNFC.Backend.Base import BNFC.Backend.C.CFtoCAbs import BNFC.Backend.C.CFtoFlexC import BNFC.Backend.C.CFtoBisonC import BNFC.Backend.C.CFtoCSkel import BNFC.Backend.C.CFtoCPrinter import BNFC.PrettyPrint import qualified BNFC.Backend.Common.Makefile as Makefile makeC :: SharedOptions -> CF -> MkFiles () makeC opts cf = do let (hfile, cfile) = cf2CAbs (linenumbers opts) prefix cf mkCFile "Absyn.h" hfile mkCFile "Absyn.c" cfile mkCFile "Buffer.h" bufferH mkCFile "Buffer.c" $ bufferC "Buffer.h" let (flex, env) = cf2flex parserMode cf mkfile (name ++ ".l") commentWithEmacsModeHint flex mkfile (name ++ ".y") commentWithEmacsModeHint $ cf2Bison (linenumbers opts) parserMode cf env mkCFile "Parser.h" $ mkHeaderFile (linenumbers opts) cf (Map.elems env) let (skelH, skelC) = cf2CSkel cf mkCFile "Skeleton.h" skelH mkCFile "Skeleton.c" skelC let (prinH, prinC) = cf2CPrinter cf mkCFile "Printer.h" prinH mkCFile "Printer.c" prinC mkCFile "Test.c" (ctest cf) Makefile.mkMakefile (optMake opts) $ makefile name prefix where name :: String name = lang opts -- The prefix is a string used by flex and bison -- that is prepended to generated function names. -- It should be a valid C identifier. prefix :: String prefix = snakeCase_ name ++ "_" parserMode :: ParserMode parserMode = CParser False prefix mkCFile x = mkfile x comment makefile :: String -> String -> String -> Doc makefile name prefix basename = vcat [ "CC = gcc -g" , "CCFLAGS = --ansi -W -Wall -Wsign-conversion -Wno-unused-parameter -Wno-unused-function -Wno-unneeded-internal-declaration ${CC_OPTS}" -- The @#define _POSIX_C_SOURCE 200809L@ is now placed locally in -- the generated lexer. -- , "CCFLAGS = --ansi -W -Wall -Wno-unused-parameter -Wno-unused-function -Wno-unneeded-internal-declaration -D_POSIX_C_SOURCE=200809L ${CC_OPTS}" -- , "# Setting _POSIX_C_SOURCE to 200809L activates strdup in string.h." -- , "# strdup was not in the ISO C standard before 6/2019 (C2x), yet in POSIX 1003.1." -- , "# See https://en.cppreference.com/w/c/experimental/dynamic/strdup" , "" , "FLEX = flex" , "FLEX_OPTS = -P" <> text prefix , "" , "BISON = bison" , "BISON_OPTS = -t -p" <> text prefix , "" , "OBJS = Absyn.o Buffer.o Lexer.o Parser.o Printer.o" , "" , Makefile.mkRule ".PHONY" ["clean", "distclean"] [] , Makefile.mkRule "all" [testName] [] , Makefile.mkRule "clean" [] -- peteg: don't nuke what we generated - move that to the "vclean" target. 
[ "rm -f *.o " ++ testName ++ " " ++ unwords [ name ++ e | e <- [".aux", ".log", ".pdf",".dvi", ".ps", ""]] ] , Makefile.mkRule "distclean" ["clean"] [ "rm -f " ++ unwords [ "Absyn.h", "Absyn.c" , "Bison.h" , "Buffer.h", "Buffer.c" , name ++ ".l", "Lexer.c" , name ++ ".y", "Parser.h", "Parser.c" , "Printer.c", "Printer.h" , "Skeleton.c", "Skeleton.h" , "Test.c" , basename, name ++ ".tex" ] ] , Makefile.mkRule testName ["${OBJS}", "Test.o"] [ "@echo \"Linking " ++ testName ++ "...\"" , "${CC} ${OBJS} Test.o -o " ++ testName ] , Makefile.mkRule "Absyn.o" [ "Absyn.c", "Absyn.h"] [ "${CC} ${CCFLAGS} -c Absyn.c" ] , Makefile.mkRule "Buffer.o" [ "Buffer.c", "Buffer.h"] [ "${CC} ${CCFLAGS} -c Buffer.c" ] , Makefile.mkRule "Lexer.c" [ name ++ ".l" ] [ "${FLEX} ${FLEX_OPTS} -oLexer.c " ++ name ++ ".l" ] , Makefile.mkRule "Parser.c Bison.h" [ name ++ ".y" ] [ "${BISON} ${BISON_OPTS} " ++ name ++ ".y -o Parser.c" ] , Makefile.mkRule "Lexer.o" [ "CCFLAGS+=-Wno-sign-conversion" ] [] , Makefile.mkRule "Lexer.o" [ "Lexer.c", "Bison.h" ] [ "${CC} ${CCFLAGS} -c Lexer.c " ] , Makefile.mkRule "Parser.o" ["Parser.c", "Absyn.h", "Bison.h" ] [ "${CC} ${CCFLAGS} -c Parser.c" ] , Makefile.mkRule "Printer.o" [ "Printer.c", "Printer.h", "Absyn.h" ] [ "${CC} ${CCFLAGS} -c Printer.c" ] , Makefile.mkRule "Test.o" [ "Test.c", "Parser.h", "Printer.h", "Absyn.h" ] [ "${CC} ${CCFLAGS} -c Test.c" ] ] where testName = "Test" ++ name -- | Put string into a block comment. comment :: String -> String comment x = unwords ["/*", x, "*/"] -- | C line comment including mode hint for emacs. commentWithEmacsModeHint :: String -> String commentWithEmacsModeHint = comment . ("-*- c -*- " ++) -- | A heading comment for the generated parser test. testfileHeader :: [String] testfileHeader = [ "/************************* Compiler Front-End Test *************************/" , "/* */" , "/* This test will parse a file, print the abstract syntax tree, and then */" , "/* pretty-print the result. */" , "/* */" , "/***************************************************************************/" ] -- | Generate a test program that parses stdin and prints the AST and it's -- linearization ctest :: CF -> String ctest cf = unlines $ testfileHeader ++ [ "", "#include ", "#include ", "#include ", "", "#include \"Parser.h\"", "#include \"Printer.h\"", "#include \"Absyn.h\"", "", "void usage(void) {", " printf(\"usage: Call with one of the following argument " ++ "combinations:\\n\");", " printf(\"\\t--help\\t\\tDisplay this help message.\\n\");", " printf(\"\\t(no arguments)\\tParse stdin verbosely.\\n\");", " printf(\"\\t(files)\\t\\tParse content of files verbosely.\\n\");", " printf(\"\\t-s (files)\\tSilent mode. Parse content of files " ++ "silently.\\n\");", "}", "", "int main(int argc, char ** argv)", "{", " FILE *input;", " " ++ dat ++ " parse_tree;", " int quiet = 0;", " char *filename = NULL;", "", " if (argc > 1) {", " if (strcmp(argv[1], \"-s\") == 0) {", " quiet = 1;", " if (argc > 2) {", " filename = argv[2];", " } else {", " input = stdin;", " }", " } else {", " filename = argv[1];", " }", " }", "", " if (filename) {", " input = fopen(filename, \"r\");", " if (!input) {", " usage();", " exit(1);", " }", " }", " else input = stdin;", " /* The default entry point is used. 
For other options see Parser.h */", " parse_tree = p" ++ def ++ "(input);", " if (parse_tree)", " {", " printf(\"\\nParse Successful!\\n\");", " if (!quiet) {", " printf(\"\\n[Abstract Syntax]\\n\");", " printf(\"%s\\n\\n\", show" ++ dat ++ "(parse_tree));", " printf(\"[Linearized Tree]\\n\");", " printf(\"%s\\n\\n\", print" ++ dat ++ "(parse_tree));", " }", " free_" ++ dat ++ "(parse_tree);", " return 0;", " }", " return 1;", "}", "" ] where cat :: Cat cat = firstEntry cf def :: String def = identCat cat dat :: String dat = identCat . normCat $ cat mkHeaderFile :: RecordPositions -> CF -> [String] -> String mkHeaderFile _ cf _env = unlines $ concat [ [ "#ifndef PARSER_HEADER_FILE" , "#define PARSER_HEADER_FILE" , "" , "#include " , "#include \"Absyn.h\"" , "" ] -- Andreas, 2021-03-24 -- Removed stuff that is now generated in Bison.h using the %defines pragma in the .y file. , concatMap mkFunc $ toList $ allEntryPoints cf , [ "" , "#endif" ] ] where -- Andreas, 2019-04-29, issue #210: generate parsers also for coercions mkFunc c = [ identCat (normCat c) ++ " p" ++ identCat c ++ "(FILE *inp);" , identCat (normCat c) ++ " ps" ++ identCat c ++ "(const char *str);" ] -- | A tiny buffer library for string buffers in the lexer. bufferH :: String bufferH = unlines [ "/* A dynamically allocated character buffer that grows as it is appended. */" , "" , "#ifndef BUFFER_HEADER" , "#define BUFFER_HEADER" , "" , "typedef struct buffer {" , " char* chars; /* Pointer to start of the buffer. */" , " unsigned int size; /* Buffer size (>= 1). */" , " unsigned int current; /* Next free character position (< size). */" , "} * Buffer;" , "" , "/* External interface. */" , "/************************************************************************/" , "" , "/* Create a new buffer of the given size. */" , "Buffer newBuffer (const unsigned int size);" , "" , "/* Deallocate the buffer. */" , "void freeBuffer (Buffer buffer);" , "" , "/* Deallocate the buffer, but return its content as string. */" , "char* releaseBuffer (Buffer buffer);" , "" , "/* Clear contents of buffer. */" , "void resetBuffer (Buffer buffer);" , "" , "/* Append string at the end of the buffer. */" , "void bufferAppendString (Buffer buffer, const char *s);" , "" , "/* Append single character at the end of the buffer. */" , "void bufferAppendChar (Buffer buffer, const char c);" , "" , "/* Give read-only access to the buffer content. */" , "const char* bufferContent (Buffer buffer);" , "" , "#endif" ] -- | A tiny buffer library for string buffers in the lexer. bufferC :: String -> String bufferC bufferH = unlines [ "/* A dynamically allocated character buffer that grows as it is appended. */" , "" , "#include /* assert */" , "#include /* free, malloc */" , "#include /* fprintf */" , "#include /* size_t, strncpy */" , "#include \"" ++ bufferH ++ "\"" , "" , "/* Internal functions. */" , "/************************************************************************/" , "" , "/* Make sure the buffer can hold `n` more characters. */" , "static void bufferAllocateChars (Buffer buffer, const unsigned int n);" , "" , "/* Increase the buffer size to the new `buffer->size`. */" , "static void resizeBuffer(Buffer buffer);" , "" , "/* External interface. */" , "/************************************************************************/" , "" , "/* Create a new buffer of the given size. */" , "" , "Buffer newBuffer (const unsigned int size) {" , "" , " /* The buffer cannot be of size 0. 
*/" , " assert (size >= 1);" , "" , " /* Allocate and initialize a new Buffer structure. */" , " Buffer buffer = (Buffer) malloc(sizeof(struct buffer));" , " buffer->size = size;" , " buffer->current = 0;" , " buffer->chars = NULL;" , " resizeBuffer(buffer);" , " buffer->chars[0] = 0;" , " return buffer;" , "}" , "" , "/* Deallocate the buffer and its content. */" , "" , "void freeBuffer (Buffer buffer) {" , " free(buffer->chars);" , " free(buffer);" , "}" , "" , "/* Deallocate the buffer, but return its content as string. */" , "" , "char* releaseBuffer (Buffer buffer) {" , " char* content = (char*) realloc (buffer->chars, buffer->current + 1);" , " free(buffer);" , " return content;" , "}" , "" , "/* Clear contents of buffer. */" , "" , "void resetBuffer (Buffer buffer) {" , " buffer->current = 0;" , " buffer->chars[buffer->current] = 0;" , "}" , "" , "/* Append string at the end of the buffer. */" , "" , "void bufferAppendString (Buffer buffer, const char *s)" , "{" , " /* Nothing to do if s is the empty string. */" , " size_t len = strlen(s);" , " if (len) {" , "" , " /* Make sure the buffer can hold all of s. */" , " bufferAllocateChars(buffer, len);" , "" , " /* Append s at the end of the buffer, including terminating 0. */" , " strncpy(buffer->chars + buffer->current, s, len + 1);" , " buffer->current += len;" , " }" , "}" , "" , "/* Append single character at the end of the buffer. */" , "" , "void bufferAppendChar (Buffer buffer, const char c)" , "{" , " /* Make sure the buffer can hold one more character and append it. */" , " bufferAllocateChars(buffer, 1);" , " buffer->chars[buffer->current] = c;" , "" , " /* Terminate with 0. */" , " buffer->current++;" , " buffer->chars[buffer->current] = 0;" , "}" , "" , "/* Give read-only access to the buffer content." , " Does not survive the destruction of the buffer object. */" , "" , "const char* bufferContent (Buffer buffer) {" , " return buffer->chars;" , "}" , "" , "/* Internal functions. */" , "/************************************************************************/" , "" , "/* Make sure the buffer can hold `n` more characters. */" , "" , "static void bufferAllocateChars (Buffer buffer, const unsigned int n) {" , " /* 1 extra char for terminating 0. */" , " unsigned int requiredSize = buffer->current + 1 + n;" , " if (buffer->size < requiredSize)" , " {" , " do buffer->size *= 2; /* Double the buffer size */" , " while (buffer->size < requiredSize);" , " resizeBuffer(buffer);" , " }" , "}" , "" , "/* Increase the buffer size to the new `size`. */" , "" , "static void resizeBuffer(Buffer buffer)" , "{" , " /* The new size needs to be strictly greater than the currently" , " * used part, otherwise writing to position buffer->current will" , " * be out of bounds." , " */" , " assert(buffer->size > buffer->current);" , "" , " /* Resize (or, the first time allocate) the buffer. */" , " buffer->chars = (char*) realloc(buffer->chars, buffer->size);" , "" , " /* Crash if out-of-memory. */" , " if (! buffer->chars)" , " {" , " fprintf(stderr, \"Buffer.c: Error: Out of memory while attempting to grow buffer!\\n\");" , " exit(1); /* This seems to be the right exit code for out-of-memory. 137 is only when the OS kills us. 
*/" , " }" , "}" ] BNFC-2.9.5/src/BNFC/Backend/C/0000755000000000000000000000000007346545000013440 5ustar0000000000000000BNFC-2.9.5/src/BNFC/Backend/C/CFtoBisonC.hs0000644000000000000000000004442507346545000015736 0ustar0000000000000000{-# LANGUAGE LambdaCase #-} {-# LANGUAGE MultiWayIf #-} {-# LANGUAGE PatternGuards #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE TupleSections #-} {- BNF Converter: C Bison generator Copyright (C) 2004 Author: Michael Pellauer Description : This module generates the Bison input file. Note that because of the way bison stores results the programmer can increase performance by limiting the number of entry points in their grammar. Author : Michael Pellauer Created : 6 August, 2003 -} module BNFC.Backend.C.CFtoBisonC ( cf2Bison , resultName, typeName, varName , specialToks, startSymbol , unionBuiltinTokens ) where import Prelude hiding ((<>)) import Data.Char ( toLower, isUpper ) import Data.Foldable ( toList ) import Data.List ( intercalate, nub ) import qualified Data.Map as Map import System.FilePath ( (<.>) ) import BNFC.CF import BNFC.Backend.Common.NamedVariables hiding (varName) import BNFC.Backend.C.CFtoFlexC (ParserMode(..), cParser, stlParser, parserHExt, parserName, parserPackage) import BNFC.Backend.CPP.Naming import BNFC.Backend.CPP.STL.STLUtils import BNFC.Options (RecordPositions(..), InPackage) import BNFC.PrettyPrint import BNFC.Utils ((+++), table, applyWhen, for, unless, when, whenJust) --This follows the basic structure of CFtoHappy. -- Type declarations type Rules = [(NonTerminal,[(Pattern,Action)])] type Pattern = String type Action = String type MetaVar = String --The environment comes from the CFtoFlex cf2Bison :: RecordPositions -> ParserMode -> CF -> SymMap -> String cf2Bison rp mode cf env = unlines [ header mode cf , render $ union mode $ posCats ++ allParserCatsNorm cf , "" , unionDependentCode mode , unlines $ table " " $ concat [ [ ["%token", "_ERROR_" ] ] , tokens (map fst $ tokenPragmas cf) env , specialToks cf ] , declarations mode cf , startSymbol cf , "" , "%%" , "" , prRules $ rulesForBison rp mode cf env , "%%" , "" , nsStart inPackage , entryCode mode cf , nsEnd inPackage ] where inPackage = parserPackage mode posCats | stlParser mode = map TokenCat $ positionCats cf | otherwise = [] positionCats :: CF -> [String] positionCats cf = [ wpThing name | TokenReg name True _ <- cfgPragmas cf ] header :: ParserMode -> CF -> String header mode cf = unlines $ concat [ [ "/* Parser definition to be used with Bison. */" , "" , "/* Generate header file for lexer. */" , "%defines \"" ++ ("Bison" <.> h) ++ "\"" ] , whenJust (parserPackage mode) $ \ ns -> [ "%name-prefix = \"" ++ ns ++ "\"" , " /* From Bison 2.6: %define api.prefix {" ++ ns ++ "} */" ] , [ "" , "/* Reentrant parser */" , "%pure_parser" , " /* From Bison 2.3b (2008): %define api.pure full */" -- The flag %pure_parser is deprecated with a warning since Bison 3.4, -- but older Bisons like 2.3 (2006, shipped with macOS) don't recognize -- %define api.pure full , "%lex-param { yyscan_t scanner }" , "%parse-param { yyscan_t scanner }" , "" , concat [ "/* Turn on line/column tracking in the ", name, "lloc structure: */" ] , "%locations" , "" , "/* Argument to the parser to be filled with the parsed tree. */" , "%parse-param { YYSTYPE *result }" , "" , "%{" , "/* Begin C preamble code */" , "" ] -- Andreas, 2021-08-26, issue #377: Some C++ compilers want "algorithm". -- Fixing regression introduced in 2.9.2. 
, when (stlParser mode) [ "#include /* for std::reverse */" -- mandatory e.g. with GNU C++ 11 ] , [ "#include " , "#include " , "#include " , "#include \"" ++ ("Absyn" <.> h) ++ "\"" , "" , "#define YYMAXDEPTH 10000000" -- default maximum stack size is 10000, but right-recursion needs O(n) stack , "" , "/* The type yyscan_t is defined by flex, but we need it in the parser already. */" , "#ifndef YY_TYPEDEF_YY_SCANNER_T" , "#define YY_TYPEDEF_YY_SCANNER_T" , "typedef void* yyscan_t;" , "#endif" , "" -- , "typedef struct " ++ name ++ "_buffer_state *YY_BUFFER_STATE;" , "typedef struct yy_buffer_state *YY_BUFFER_STATE;" , "extern YY_BUFFER_STATE " ++ name ++ "_scan_string(const char *str, yyscan_t scanner);" , "extern void " ++ name ++ "_delete_buffer(YY_BUFFER_STATE buf, yyscan_t scanner);" , "" , "extern void " ++ name ++ "lex_destroy(yyscan_t scanner);" , "extern char* " ++ name ++ "get_text(yyscan_t scanner);" , "" , "extern yyscan_t " ++ name ++ "_initialize_lexer(FILE * inp);" , "" ] , unless (stlParser mode) [ "/* List reversal functions. */" , concatMap (reverseList mode) $ filter isList $ allParserCatsNorm cf ] , [ "/* End C preamble code */" , "%}" ] ] where h = parserHExt mode name = parserName mode -- | Code that needs the @YYSTYPE@ defined by the @%union@ pragma. -- unionDependentCode :: ParserMode -> String unionDependentCode mode = unlines [ "%{" , errorHandler name , "int yyparse(yyscan_t scanner, YYSTYPE *result);" , "" , "extern int yylex(YYSTYPE *lvalp, YYLTYPE *llocp, yyscan_t scanner);" , "%}" ] where name = parserName mode errorHandler :: String -> String errorHandler name = unlines [ "void yyerror(YYLTYPE *loc, yyscan_t scanner, YYSTYPE *result, const char *msg)" , "{" , " fprintf(stderr, \"error: %d,%d: %s at %s\\n\"," , " loc->first_line, loc->first_column, msg, " ++ name ++ "get_text(scanner));" , "}" ] -- | Parser entry point code. -- entryCode :: ParserMode -> CF -> String entryCode mode cf = unlines $ map (parseMethod mode cf) eps where eps = toList (allEntryPoints cf) --This generates a parser method for each entry point. parseMethod :: ParserMode -> CF -> Cat -> String parseMethod mode cf cat = unlines $ concat [ [ unwords [ "/* Entrypoint: parse", dat, "from file. */" ] , dat ++ " p" ++ parser ++ "(FILE *inp)" ] , body False , [ "" , unwords [ "/* Entrypoint: parse", dat, "from string. 
*/" ] , dat ++ " ps" ++ parser ++ "(const char *str)" ] , body True ] where name = parserName mode body stringParser = concat [ [ "{" , " YYSTYPE result;" , " yyscan_t scanner = " ++ name ++ "_initialize_lexer(" ++ file ++ ");" , " if (!scanner) {" , " fprintf(stderr, \"Failed to initialize lexer.\\n\");" , " return 0;" , " }" ] , [ " YY_BUFFER_STATE buf = " ++ name ++ "_scan_string(str, scanner);" | stringParser ] , [ " int error = yyparse(scanner, &result);" ] , [ " " ++ name ++ "_delete_buffer(buf, scanner);" | stringParser ] , [ " " ++ name ++ "lex_destroy(scanner);" , " if (error)" , " { /* Failure */" , " return 0;" , " }" , " else" , " { /* Success */" ] , revOpt , [ " return" +++ res ++ ";" , " }" , "}" ] ] where file | stringParser = "0" | otherwise = "inp" stl = stlParser mode ncat = normCat cat dat0 = identCat ncat dat = if cParser mode then dat0 else dat0 ++ "*" parser = identCat cat res0 = concat [ "result.", varName ncat ] -- Reversing the result isReversible = cat `elem` cfgReversibleCats cf -- C and NoSTL res | not stl, isReversible = "reverse" ++ dat0 ++ "(" ++ res0 ++ ")" | otherwise = res0 -- STL: Vectors are snoc lists revOpt = when (stl && isList cat && not isReversible) [ "std::reverse(" ++ res ++ "->begin(), " ++ res ++"->end());" ] --This method generates list reversal functions for each list type. reverseList :: ParserMode -> Cat -> String reverseList mode c0 = unlines [ c' ++ " reverse" ++ c ++ "(" ++ c' +++ "l)" , "{" , " " ++ c' +++"prev = 0;" , " " ++ c' +++"tmp = 0;" , " while (l)" , " {" , " tmp = l->" ++ v ++ ";" , " l->" ++ v +++ "= prev;" , " prev = l;" , " l = tmp;" , " }" , " return prev;" , "}" ] where c = identCat (normCat c0) c' = c ++ star v = map toLower c ++ "_" star = if cParser mode then "" else "*" -- | The union declaration is special to Bison/Yacc and gives the type of -- yylval. For efficiency, we may want to only include used categories here. -- -- >>> let foo = Cat "Foo" -- >>> union (CParser True "") [foo, ListCat foo] -- %union -- { -- int _int; -- char _char; -- double _double; -- char* _string; -- Foo* foo_; -- ListFoo* listfoo_; -- } -- -- If the given list of categories is contains coerced categories, those should -- be normalized and duplicate removed -- E.g. if there is both [Foo] and [Foo2] we should only print one pointer: -- ListFoo* listfoo_; -- -- >>> let foo2 = CoercCat "Foo" 2 -- >>> union (CppParser Nothing "") [foo, ListCat foo, foo2, ListCat foo2] -- %union -- { -- int _int; -- char _char; -- double _double; -- char* _string; -- Foo* foo_; -- ListFoo* listfoo_; -- } union :: ParserMode -> [Cat] -> Doc union mode cats = vcat [ "%union" , codeblock 2 $ map text unionBuiltinTokens ++ map mkPointer normCats ] where normCats = nub (map normCat cats) mkPointer s = scope <> text (identCat s) <> star <+> text (varName s) <> ";" scope = text $ nsScope $ parserPackage mode star = if cParser mode then empty else text "*" unionBuiltinTokens :: [String] unionBuiltinTokens = [ "int _int;" , "char _char;" , "double _double;" , "char* _string;" ] -- | @%type@ declarations for non-terminal types. declarations :: ParserMode -> CF -> String declarations mode cf = unlines $ map typeNT $ posCats ++ filter (not . null . rulesForCat cf) (allParserCats cf) -- don't define internal rules where typeNT nt = "%type <" ++ varName nt ++ "> " ++ identCat nt posCats | stlParser mode = map TokenCat $ positionCats cf | otherwise = [] --declares terminal types. 
-- token name "literal" -- "Syntax error messages passed to yyerror from the parser will reference the literal string instead of the token name." -- https://www.gnu.org/software/bison/manual/html_node/Token-Decl.html tokens :: [UserDef] -> SymMap -> [[String]] tokens user env = map declTok $ Map.toList env where declTok (Keyword s, r) = tok "" s r declTok (Tokentype s, r) = tok (if s `elem` user then "<_string>" else "") s r tok t s r = [ "%token" ++ t, r, " /* " ++ cStringEscape s ++ " */" ] -- | Escape characters inside a C string. cStringEscape :: String -> String cStringEscape = concatMap escChar where escChar c | c `elem` ("\"\\" :: String) = '\\':[c] | otherwise = [c] -- | Produces a table with the built-in token types. specialToks :: CF -> [[String]] specialToks cf = concat [ ifC catString [ "%token<_string>", "_STRING_" ] , ifC catChar [ "%token<_char> ", "_CHAR_" ] , ifC catInteger [ "%token<_int> ", "_INTEGER_" ] , ifC catDouble [ "%token<_double>", "_DOUBLE_" ] , ifC catIdent [ "%token<_string>", "_IDENT_" ] ] where ifC cat s = if isUsedCat cf (TokenCat cat) then [s] else [] -- | Bison only supports a single entrypoint. startSymbol :: CF -> String startSymbol cf = "%start" +++ identCat (firstEntry cf) --The following functions are a (relatively) straightforward translation --of the ones in CFtoHappy.hs rulesForBison :: RecordPositions -> ParserMode -> CF -> SymMap -> Rules rulesForBison rp mode cf env = map mkOne (ruleGroups cf) ++ posRules where mkOne (cat,rules) = constructRule rp mode cf env rules cat posRules :: Rules posRules | CppParser inPackage _ <- mode = for (positionCats cf) $ \ n -> (TokenCat n, [( Map.findWithDefault n (Tokentype n) env , addResult cf (TokenCat n) $ concat [ "$$ = new ", nsScope inPackage, n, "($1, @$.first_line);" ] )]) | otherwise = [] -- For every non-terminal, we construct a set of rules. constructRule :: RecordPositions -> ParserMode -> CF -> SymMap -> [Rule] -- ^ List of alternatives for parsing ... -> NonTerminal -- ^ ... this non-terminal. -> (NonTerminal,[(Pattern,Action)]) constructRule rp mode cf env rules nt = (nt,) $ [ (p,) $ addResult cf nt $ generateAction rp mode (identCat (normCat nt)) (funRule r) b m | r0 <- rules , let (b,r) = if isConsFun (funRule r0) && valCat r0 `elem` cfgReversibleCats cf then (True, revSepListRule r0) else (False, r0) , let (p,m) = generatePatterns mode cf env r ] -- | Add action if we parse an entrypoint non-terminal: -- Set field in result record to current parse. addResult :: CF -> NonTerminal -> Action -> Action addResult cf nt a = if nt `elem` toList (allEntryPoints cf) -- Note: Bison has only a single entrypoint, -- but BNFC works around this by adding dedicated parse methods for all entrypoints. -- Andreas, 2021-03-24: But see #350: bison still uses only the @%start@ non-terminal. then concat [ a, " result->", varName (normCat nt), " = $$;" ] else a -- | Switch between STL or not. generateAction :: IsFun a => RecordPositions -- ^ Remember position information? -> ParserMode -- ^ For C or C++? -> String -- ^ List type. -> a -- ^ Rule name. -> Bool -- ^ Reverse list? -> [(MetaVar, Bool)] -- ^ Meta-vars; should the list referenced by the var be reversed? -> Action generateAction rp = \case CppParser ns _ -> generateActionSTL rp ns CParser b _ -> \ nt f r -> generateActionC rp (not b) nt f r . map fst -- | Generates a string containing the semantic action. 
-- >>> generateActionC NoRecordPositions False "Foo" "Bar" False ["$1"] -- "$$ = new Bar($1);" -- >>> generateActionC NoRecordPositions True "Foo" "Bar" False ["$1"] -- "$$ = make_Bar($1);" -- >>> generateActionC NoRecordPositions True "Foo" "_" False ["$1"] -- "$$ = $1;" -- >>> generateActionC NoRecordPositions True "ListFoo" "[]" False [] -- "$$ = 0;" -- >>> generateActionC NoRecordPositions True "ListFoo" "(:[])" False ["$1"] -- "$$ = make_ListFoo($1, 0);" -- >>> generateActionC NoRecordPositions True "ListFoo" "(:)" False ["$1","$2"] -- "$$ = make_ListFoo($1, $2);" -- >>> generateActionC NoRecordPositions True "ListFoo" "(:)" True ["$1","$2"] -- "$$ = make_ListFoo($2, $1);" generateActionC :: IsFun a => RecordPositions -> Bool -> String -> a -> Bool -> [MetaVar] -> Action generateActionC rp cParser nt f b ms | isCoercion f = "$$ = " ++ unwords ms ++ ";" ++ loc | isNilFun f = "$$ = 0;" | isOneFun f = concat ["$$ = ", new nt, "(", intercalate ", " ms', ", 0);"] | isConsFun f = concat ["$$ = ", new nt, "(", intercalate ", " ms', ");"] | otherwise = concat ["$$ = ", new (funName f), "(", intercalate ", " ms', ");", loc] where ms' = if b then reverse ms else ms loc | RecordPositions <- rp = " $$->line_number = @$.first_line; $$->char_number = @$.first_column;" | otherwise = "" new :: String -> String new | cParser = ("make_" ++) | otherwise = \ s -> if isUpper (head s) then "new " ++ s else sanitizeCpp s generateActionSTL :: IsFun a => RecordPositions -> InPackage -> String -> a -> Bool -> [(MetaVar,Bool)] -> Action generateActionSTL rp inPackage nt f b mbs = reverses ++ if | isCoercion f -> concat ["$$ = ", unwords ms, ";", loc] | isNilFun f -> concat ["$$ = ", "new ", scope, nt, "();"] | isOneFun f -> concat ["$$ = ", "new ", scope, nt, "(); $$->push_back(", head ms, ");"] | isConsFun f -> concat [lst, "->push_back(", el, "); $$ = ", lst, ";"] | isDefinedRule f -> concat ["$$ = ", scope, sanitizeCpp (funName f), "(", intercalate ", " ms, ");" ] | otherwise -> concat ["$$ = ", "new ", scope, funName f, "(", intercalate ", " ms, ");", loc] where ms = map fst mbs -- The following match only happens in the cons case: [el, lst] = applyWhen b reverse ms -- b: left-recursion transformed? loc | RecordPositions <- rp = " $$->line_number = @$.first_line; $$->char_number = @$.first_column;" | otherwise = "" reverses = unwords ["std::reverse(" ++ m ++"->begin(),"++m++"->end()) ;" | (m, True) <- mbs] scope = nsScope inPackage -- Generate patterns and a set of metavariables indicating -- where in the pattern the non-terminal generatePatterns :: ParserMode -> CF -> SymMap -> Rule -> (Pattern,[(MetaVar,Bool)]) generatePatterns mode cf env r = case rhsRule r of [] -> ("/* empty */",[]) its -> (unwords (map mkIt its), metas its) where stl = stlParser mode mkIt = \case Left (TokenCat s) | stl && isPositionCat cf s -> typeName s | otherwise -> Map.findWithDefault (typeName s) (Tokentype s) env Left c -> identCat c Right s -> Map.findWithDefault s (Keyword s) env metas its = [(revIf c ('$': show i), revert c) | (i, Left c) <- zip [1 :: Int ..] its] -- C and C++/NoSTL: call reverse function revIf c m = if not stl && isntCons && elem c revs then "reverse" ++ identCat (normCat c) ++ "(" ++ m ++ ")" else m -- no reversal in the left-recursive Cons rule itself -- C++/STL: flag if reversal is necessary -- notice: reversibility with push_back vectors is the opposite -- of right-recursive lists! 
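    -- E.g. in C and C++/NoSTL a reversible list metavariable is wrapped as
    --   reverseListFoo($1)
    -- (illustrative name), whereas the STL backend leaves the pattern alone and
    -- instead calls std::reverse on the vector in the semantic action.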
revert c = isntCons && isList c && notElem c revs revs = cfgReversibleCats cf isntCons = not $ isConsFun $ funRule r -- We have now constructed the patterns and actions, -- so the only thing left is to merge them into one string. prRules :: Rules -> String prRules [] = [] prRules ((_, []):rs) = prRules rs --internal rule prRules ((nt, (p,a) : ls):rs) = unwords [nt', ":" , p, "{", a, "}", '\n' : pr ls] ++ ";\n" ++ prRules rs where nt' = identCat nt pr [] = [] pr ((p,a):ls) = unlines [unwords [" |", p, "{", a , "}"]] ++ pr ls --Some helper functions. resultName :: String -> String resultName s = "YY_RESULT_" ++ s ++ "_" -- | slightly stronger than the NamedVariable version. -- >>> varName (Cat "Abc") -- "abc_" varName :: Cat -> String varName = \case TokenCat s -> "_" ++ map toLower s c -> (++ "_") . map toLower . identCat . normCat $ c typeName :: String -> String typeName "Ident" = "_IDENT_" typeName "String" = "_STRING_" typeName "Char" = "_CHAR_" typeName "Integer" = "_INTEGER_" typeName "Double" = "_DOUBLE_" typeName x = x BNFC-2.9.5/src/BNFC/Backend/C/CFtoCAbs.hs0000644000000000000000000004427007346545000015367 0ustar0000000000000000{-# LANGUAGE LambdaCase #-} {-# LANGUAGE PatternGuards #-} {-# LANGUAGE OverloadedStrings #-} {- BNF Converter: C Abstract syntax Copyright (C) 2004 Author: Michael Pellauer Description : This module generates the C Abstract Syntax tree classes. It generates both a Header file and an Implementation file, and Appel's C method. Author : Michael Pellauer Created : 15 September, 2003 -} module BNFC.Backend.C.CFtoCAbs (cf2CAbs) where import Prelude hiding ((<>)) import Control.Monad.State (State, gets, modify, evalState) import Data.Char ( toLower ) import Data.Either ( lefts ) import Data.Function ( on ) import Data.List ( groupBy, intercalate, intersperse, nub, sort ) import Data.Maybe ( mapMaybe ) import Data.Set ( Set ) import qualified Data.Set as Set import BNFC.CF import BNFC.PrettyPrint import BNFC.Options ( RecordPositions(..) ) import BNFC.Utils ( (+++), unless ) import BNFC.Backend.Common.NamedVariables import BNFC.Backend.C.Common ( posixC ) -- | The result is two files (.H file, .C file) cf2CAbs :: RecordPositions -> String -- ^ Ignored. -> CF -- ^ Grammar. -> (String, String) -- ^ @.H@ file, @.C@ file. cf2CAbs rp _ cf = (mkHFile rp classes datas cf, mkCFile datas cf) where datas :: [Data] datas = getAbstractSyntax cf classes :: [String] classes = nub $ map (identCat . fst) datas {- **** Header (.H) File Functions **** -} -- | Makes the Header file. 
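-- The header contains a typedef section for the built-in and user token types,
-- forward struct/typedef declarations for every class, the abstract syntax
-- structs themselves, prototypes for the cloning and freeing functions, and
-- @#define@ macros for the @define@d constructors.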
mkHFile :: RecordPositions -> [String] -> [Data] -> CF -> String mkHFile rp classes datas cf = unlines $ concat [ [ "#ifndef ABSYN_HEADER" , "#define ABSYN_HEADER" , "" ] , posixC , [ "" , "#include /* NULL */" , "#include /* strdup */" , "" , "/* C++ Abstract Syntax Interface.*/" , "" , prTypeDefs user , "/******************** Forward Declarations ***********************/" ] , map prForward classes , [ "/******************** Abstract Syntax Classes ********************/" , "" ] , map (prDataH rp) datas -- Cloning , unless (null classes) $ concat [ cloneComment , map prCloneH classes , [ "" ] ] -- Freeing , unless (null classes) $ concat [ destructorComment , map prFreeH classes , [ "" ] ] , unless (null definedConstructors) [ "/******************** Defined Constructors ***********************/" , "" ] , intersperse "" $ map (prDefH user) definedConstructors , [ "" , "#endif" ] ] where user :: [TokenCat] user = tokenNames cf prForward :: String -> String prForward s = unlines [ "struct " ++ s ++ "_;" , "typedef struct " ++ s ++ "_ *" ++ s ++ ";" ] prCloneH :: String -> String prCloneH s = s ++ " clone_" ++ s ++ "(" ++ s ++ " p);" prFreeH :: String -> String prFreeH s = "void free_" ++ s ++ "(" ++ s ++ " p);" definedConstructors = definitions cf cloneComment :: [String] cloneComment = [ "/*************************** Cloning ******************************/" , "" ] destructorComment :: [String] destructorComment = [ "/******************** Recursive Destructors **********************/" , "" , "/* These free an entire abstract syntax tree" , " * including all subtrees and strings." , " *" , " * Will not work properly if there is sharing in the tree," , " * i.e., when some pointers are aliased. In this case" , " * it will attempt to free the same memory twice." , " */" , "" ] -- | For @define@d constructors, make a CPP definition. -- -- >>> prDefH [] (Define "iSg" [("i",undefined)] (App "ICons" undefined [Var "i", App "INil" undefined []]) undefined) -- "#define make_iSg(i) \\\n make_ICons (i, make_INil())" -- -- >>> prDefH [] (Define "snoc" (map (,undefined) ["xs","x"]) (App "Cons" undefined [Var "x", Var "xs"]) undefined) -- "#define make_snoc(xs,x) \\\n make_Cons (x, xs)" -- prDefH :: [TokenCat] -- ^ Names of the token constructors (silent in C backend). -> Define -> String prDefH tokenCats (Define fun args e _t) = concat [ "#define make_", f, "(", intercalate "," xs, ") \\\n ", prExp e `evalState` mempty ] where f = funName fun xs = map fst args toCat :: Base -> Cat toCat = catOfType $ specialCatsP ++ tokenCats -- Issue #363, #348. -- Duplicate occurrences of variables in expression need to be cloned, -- because deallocation assumes that the AST is in fact a tree. -- Duplicate occurrences introduce sharing and thus turn it into a DAG -- (directed acyclic graph). -- We maintain a set of variables we have already encountered. prExp :: Exp -> State (Set String) String prExp = \case Var x -> gets (Set.member x) >>= \case -- The first use is not cloned. False -> x <$ modify (Set.insert x) -- Subsequent uses are cloned. True -> case lookup x args of Just t -> return $ cloner (toCat t) x Nothing -> undefined -- impossible -- Andreas, 2021-02-13, issue #338 -- Token categories are just @typedef@s in C, so no constructor needed. 
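      -- A token constructor application is therefore unwrapped and its single
      -- argument is rendered directly.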
App g _ [e] | g `elem` tokenCats -> prExp e App "[]" _ [] -> return "NULL" App g t es -> do es' <- mapM prExp es return $ concat [ "make_", con g t, lparen es, intercalate ", " es', ")" ] LitInt i -> return $ show i LitDouble d -> return $ show d LitChar c -> return $ show c LitString s -> return $ concat [ "strdup(", show s, ")" ] -- so that free() does not crash! con g ~(FunT _ts t) | isConsFun g = identType t | otherwise = g -- If more than one argument, or complex argument, put space before opening parenthesis. lparen = \case _:_:_ -> " (" [App _ _ (_:_)] -> " (" _ -> "(" -- | Prints struct definitions for all categories. prDataH :: RecordPositions -> Data -> String prDataH rp (cat, rules) | isList cat = unlines [ "struct " ++ c' ++ "_" , "{" , " " ++ mem +++ varName mem ++ ";" , " " ++ c' +++ varName c' ++ ";" , "};" , "" , c' ++ " make_" ++ c' ++ "(" ++ mem ++ " p1, " ++ c' ++ " p2);" ] | otherwise = unlines $ concat [ [ "struct " ++ identCat cat ++ "_" , "{" ] , [ " int line_number, char_number;" | rp == RecordPositions ] , [ " enum { " ++ intercalate ", " (map prKind rules) ++ " } kind;" , " union" , " {" , concatMap prUnion rules ++ " } u;" , "};" , "" ] , concatMap (prRuleH cat) rules ] where c' = identCat (normCat cat) mem = identCat (normCatOfList cat) prKind (fun, _) = "is_" ++ fun prUnion (_, []) = "" prUnion (fun, cats) = " struct { " ++ (render $ prInstVars (getVars cats)) ++ " } " ++ (memName fun) ++ ";\n" -- | Interface definitions for rules vary on the type of rule. prRuleH :: Cat -> (Fun, [Cat]) -> [String] prRuleH c (fun, cats) | isNilFun fun || isOneFun fun || isConsFun fun = [] -- these are not represented in the AbSyn | otherwise = return $ concat [ catToStr c, " make_", fun, "(", prParamsH (getVars cats), ");" ] where prParamsH :: [(String, a)] -> String prParamsH [] = "void" prParamsH ps = intercalate ", " $ zipWith par ps [0::Int ..] where par (t, _) n = t ++ " p" ++ show n -- typedefs in the Header make generation much nicer. prTypeDefs :: [String] -> String prTypeDefs user = unlines $ concat [ [ "/******************** TypeDef Section ********************/" , "" , "typedef int Integer;" , "typedef char Char;" , "typedef double Double;" , "typedef char* String;" , "typedef char* Ident;" ] , map prUserDef user ] where prUserDef s = "typedef char* " ++ s ++ ";" -- | A class's instance variables. Print the variables declaration by grouping -- together the variables of the same type. -- >>> prInstVars [("A", 1)] -- A a_1; -- >>> prInstVars [("A",1),("A",2),("B",1)] -- A a_1, a_2; B b_1; prInstVars :: [IVar] -> Doc prInstVars = hsep . map prInstVarsOneType . groupBy ((==) `on` fst) . sort where prInstVarsOneType ivars = text (fst (head ivars)) <+> hsep (punctuate comma (map prIVar ivars)) <> semi prIVar (s, i) = text (varName s) <> text (showNum i) {- **** Implementation (.C) File Functions **** -} -- | Makes the .C file mkCFile :: [Data] -> CF -> String mkCFile datas _cf = concat [ header , render $ vsep $ concatMap prDataC datas , unlines [ "", "" ] -- Cloning , unlines cloneComment , unlines $ concatMap prCloneC datas -- Freeing , unlines destructorComment , unlines $ concatMap prDestructorC datas ] where header = unlines [ "/* C Abstract Syntax Implementation. 
*/" , "" , "#include " , "#include " , "#include \"Absyn.h\"" , "" ] -- | -- >>> text $ unlines $ prCloneC (Cat "Exp", [("EInt", [TokenCat "Integer"]), ("EAdd", [Cat "Exp", Cat "Exp"])]) -- Exp clone_Exp(Exp p) -- { -- switch(p->kind) -- { -- case is_EInt: -- return make_EInt (p->u.eint_.integer_); -- -- case is_EAdd: -- return make_EAdd -- ( clone_Exp(p->u.eadd_.exp_1) -- , clone_Exp(p->u.eadd_.exp_2) -- ); -- -- default: -- fprintf(stderr, "Error: bad kind field when cloning Exp!\n"); -- exit(1); -- } -- } -- -- prCloneC :: Data -> [String] prCloneC (cat, rules) | isList cat = [ cl ++ " clone_" ++ cl ++ "("++ cl +++ vname ++ ")" , "{" , " if (" ++ vname ++ ")" , " {" , " /* clone of non-empty list */" , render $ prettyList 6 (text $ " return make_" ++ cl) "(" ");" "," [ text $ visitMember , text $ "clone_" ++ cl ++ "(" ++ vname ++ "->" ++ vname ++ "_)" ] , " }" , " else return NULL; /* clone of empty list */" , "}" , "" ] | otherwise = concat [ [ cl ++ " clone_" ++ cl ++ "(" ++ cl ++ " p)" , "{" , " switch(p->kind)" , " {" ] , concatMap prCloneRule rules , [ " default:" , " fprintf(stderr, \"Error: bad kind field when cloning " ++ cl ++ "!\\n\");" , " exit(1);" , " }" , "}" , "" ] ] where cl = identCat cat vname = map toLower cl visitMember :: String visitMember = cloner el $ vname ++ "->" ++ member ++ "_" where el = normCatOfList cat member = map toLower $ identCat el prCloneRule :: (String, [Cat]) -> [String] prCloneRule (fun, cats) | not (isCoercion fun) = [ " case is_" ++ fnm ++ ":" , render $ prettyList 6 (text $ " return make_" ++ fnm) "(" ");\n" "," $ map (text . prCloneCat fnm) $ lefts $ numVars $ map Left cats ] where fnm = funName fun prCloneRule _ = [] -- | This goes on to recurse to the instance variables. prCloneCat :: String -> (Cat, Doc) -> String prCloneCat fnm (cat, nt) = cloner cat member where member = concat [ "p->u.", map toLower fnm, "_.", render nt ] -- | Clone or not depending on the category. -- Only pointers need to be cloned. -- cloner :: Cat -> String -> String cloner cat x = case cat of TokenCat c | c `elem` ["Char", "Double", "Integer"] -> x | otherwise -> "strdup" ++ parens x _ -> "clone_" ++ identCat (normCat cat) ++ parens x where parens = ("(" ++) . 
(++ ")") -- | -- >>> text $ unlines $ prDestructorC (Cat "Exp", [("EInt", [TokenCat "Integer"]), ("EAdd", [Cat "Exp", Cat "Exp"])]) -- void free_Exp(Exp p) -- { -- switch(p->kind) -- { -- case is_EInt: -- break; -- -- case is_EAdd: -- free_Exp(p->u.eadd_.exp_1); -- free_Exp(p->u.eadd_.exp_2); -- break; -- -- default: -- fprintf(stderr, "Error: bad kind field when freeing Exp!\n"); -- exit(1); -- } -- free(p); -- } -- -- prDestructorC :: Data -> [String] prDestructorC (cat, rules) | isList cat = concat [ [ "void free_" ++ cl ++ "("++ cl +++ vname ++ ")" , "{" , " if (" ++ vname ++ ")" , " {" ] , map (" " ++) visitMember , [ " free_" ++ cl ++ "(" ++ vname ++ "->" ++ vname ++ "_);" , " free(" ++ vname ++ ");" , " }" , "}" , "" ] ] | otherwise = concat [ [ "void free_" ++ cl ++ "(" ++ cl ++ " p)" , "{" , " switch(p->kind)" , " {" ] , concatMap prFreeRule rules , [ " default:" , " fprintf(stderr, \"Error: bad kind field when freeing " ++ cl ++ "!\\n\");" , " exit(1);" , " }" , " free(p);" , "}" , "" ] ] where cl = identCat cat vname = map toLower cl visitMember = case ecat of TokenCat c | c `elem` ["Char", "Double", "Integer"] -> [] | otherwise -> [ "free" ++ rest ] _ -> [ "free_" ++ ecl ++ rest ] where rest = "(" ++ vname ++ "->" ++ member ++ "_);" member = map toLower ecl ecl = identCat ecat ecat = normCatOfList cat prFreeRule :: (String, [Cat]) -> [String] prFreeRule (fun, cats) | not (isCoercion fun) = concat [ [ " case is_" ++ fnm ++ ":" ] , map (" " ++) $ mapMaybe (prFreeCat fnm) $ lefts $ numVars $ map Left cats , [ " break;" , "" ] ] where fnm = funName fun prFreeRule _ = [] -- | This goes on to recurse to the instance variables. prFreeCat :: String -> (Cat, Doc) -> Maybe String prFreeCat _fnm (TokenCat c, _nt) | c `elem` ["Char", "Double", "Integer"] = Nothing -- Only pointer need to be freed. prFreeCat fnm (cat, nt) = Just $ concat [ maybe ("free_" ++ identCat (normCat cat)) (const "free") $ maybeTokenCat cat , "(p->u." , map toLower fnm , "_.", render nt, ");" ] prDataC :: Data -> [Doc] prDataC (cat, rules) = map (prRuleC cat) rules -- | Classes for rules vary based on the type of rule. -- -- * Empty list constructor, these are not represented in the AbSyn -- -- >>> prRuleC (ListCat (Cat "A")) ("[]", [Cat "A", Cat "B", Cat "B"]) -- -- -- * Linked list case. These are all built-in list functions. -- Later we could include things like lookup, insert, delete, etc. 
-- -- >>> prRuleC (ListCat (Cat "A")) ("(:)", [Cat "A", Cat "B", Cat "B"]) -- /******************** ListA ********************/ -- -- ListA make_ListA(A p1, ListA p2) -- { -- ListA tmp = (ListA) malloc(sizeof(*tmp)); -- if (!tmp) -- { -- fprintf(stderr, "Error: out of memory when allocating ListA!\n"); -- exit(1); -- } -- tmp->a_ = p1; -- tmp->lista_ = p2; -- return tmp; -- } -- -- * Standard rule -- -- >>> prRuleC (Cat "A") ("funa", [Cat "A", Cat "B", Cat "B"]) -- /******************** funa ********************/ -- -- A make_funa(A p1, B p2, B p3) -- { -- A tmp = (A) malloc(sizeof(*tmp)); -- if (!tmp) -- { -- fprintf(stderr, "Error: out of memory when allocating funa!\n"); -- exit(1); -- } -- tmp->kind = is_funa; -- tmp->u.funa_.a_ = p1; -- tmp->u.funa_.b_1 = p2; -- tmp->u.funa_.b_2 = p3; -- return tmp; -- } prRuleC :: Cat -> (String, [Cat]) -> Doc prRuleC _ (fun, _) | isNilFun fun || isOneFun fun = empty prRuleC cat@(ListCat c') (fun, _) | isConsFun fun = vcat' [ "/******************** " <> c <> " ********************/" , "" , c <+> "make_" <> c <> parens (text m <+> "p1" <> "," <+> c <+> "p2") , lbrace , nest 4 $ vcat' [ c <+> "tmp = (" <> c <> ") malloc(sizeof(*tmp));" , "if (!tmp)" , lbrace , nest 4 $ vcat' [ "fprintf(stderr, \"Error: out of memory when allocating " <> c <> "!\\n\");" , "exit(1);" ] , rbrace , text $ "tmp->" ++ m' ++ " = " ++ "p1;" , "tmp->" <> v <+> "=" <+> "p2;" , "return tmp;" ] , rbrace ] where icat = identCat (normCat cat) c = text icat v = text (map toLower icat ++ "_") m = identCat (normCat c') m' = map toLower m ++ "_" prRuleC c (fun, cats) = vcat' [ text $ "/******************** " ++ fun ++ " ********************/" , "" , prConstructorC c fun vs cats ] where vs = getVars cats -- | The constructor just assigns the parameters to the corresponding instance -- variables. -- >>> prConstructorC (Cat "A") "funa" [("A",1),("B",2)] [Cat "O", Cat "E"] -- A make_funa(O p1, E p2) -- { -- A tmp = (A) malloc(sizeof(*tmp)); -- if (!tmp) -- { -- fprintf(stderr, "Error: out of memory when allocating funa!\n"); -- exit(1); -- } -- tmp->kind = is_funa; -- tmp->u.funa_.a_ = p1; -- tmp->u.funa_.b_2 = p2; -- return tmp; -- } prConstructorC :: Cat -> String -> [IVar] -> [Cat] -> Doc prConstructorC cat c vs cats = vcat' [ text (cat' ++ " make_" ++ c) <> parens args , lbrace , nest 4 $ vcat' [ text $ cat' ++ " tmp = (" ++ cat' ++ ") malloc(sizeof(*tmp));" , text "if (!tmp)" , lbrace , nest 4 $ vcat' [ text ("fprintf(stderr, \"Error: out of memory when allocating " ++ c ++ "!\\n\");") , text "exit(1);" ] , rbrace , text $ "tmp->kind = is_" ++ c ++ ";" , prAssigns c vs params , text "return tmp;" ] , rbrace ] where cat' = identCat (normCat cat) (types, params) = unzip (prParams cats) args = hsep $ punctuate comma $ zipWith (<+>) types params -- | Prints the constructor's parameters. Returns pairs of type * name -- >>> prParams [Cat "O", Cat "E"] -- [(O,p1),(E,p2)] prParams :: [Cat] -> [(Doc, Doc)] prParams = zipWith prParam [1::Int ..] where prParam n c = (text (identCat c), text ("p" ++ show n)) -- | Prints the assignments of parameters to instance variables. -- >>> prAssigns "A" [("A",1),("B",2)] [text "abc", text "def"] -- tmp->u.a_.a_ = abc; -- tmp->u.a_.b_2 = def; prAssigns :: String -> [IVar] -> [Doc] -> Doc prAssigns c vars params = vcat $ zipWith prAssign vars params where prAssign (t,n) p = text ("tmp->u." ++ c' ++ "_." ++ vname t n) <+> char '=' <+> p <> semi vname t n | n == 1, [_] <- filter ((t ==) . 
fst) vars = varName t | otherwise = varName t ++ showNum n c' = map toLower c {- **** Helper Functions **** -} memName :: String -> String memName s = map toLower s ++ "_" BNFC-2.9.5/src/BNFC/Backend/C/CFtoCPrinter.hs0000644000000000000000000004335307346545000016306 0ustar0000000000000000{-# LANGUAGE LambdaCase #-} {-# LANGUAGE OverloadedStrings #-} {- BNF Converter: C Pretty Printer printer Copyright (C) 2004 Author: Michael Pellauer Description : This module generates the C Pretty Printer. It also generates the "show" method for printing an abstract syntax tree. The generated files use the Visitor design pattern. Author : Michael Pellauer Created : 10 August, 2003 Modified : 3 September, 2003 * Added resizable buffers -} module BNFC.Backend.C.CFtoCPrinter (cf2CPrinter) where import Prelude hiding ((<>)) import Data.Bifunctor ( second ) import Data.Char ( toLower ) import Data.Either ( lefts ) import Data.Foldable ( toList ) import Data.List ( nub ) import BNFC.CF import BNFC.PrettyPrint import BNFC.Utils ( (+++), uniqOn, unless, unlessNull ) import BNFC.Backend.Common import BNFC.Backend.Common.NamedVariables import BNFC.Backend.Common.StrUtils (renderCharOrString) -- | Produces (.h file, .c file). cf2CPrinter :: CF -> (String, String) cf2CPrinter cf = (mkHFile cf groups, mkCFile cf groups) where groups = fixCoercions $ filterOutDefs $ ruleGroupsInternals cf filterOutDefs = map $ second $ filter $ not . isDefinedRule . funRule {- **** Header (.h) File Methods **** -} -- | Make the Header File. mkHFile :: CF -> [(Cat,[Rule])] -> String mkHFile cf groups = unlines [ header, concatMap prPrints eps, concatMap prPrintDataH groups, concatMap prShows eps, concatMap prShowDataH groups, footer ] where eps = nub . map normCat . toList $ allEntryPoints cf prPrints s | normCat s == s = "char *print" ++ s' ++ "(" ++ s' ++ " p);\n" where s' = identCat s prPrints _ = "" prShows s | normCat s == s = "char *show" ++ s' ++ "(" ++ s' ++ " p);\n" where s' = identCat s prShows _ = "" header = unlines [ "#ifndef PRINTER_HEADER", "#define PRINTER_HEADER", "", "#include \"Absyn.h\"", "", "/* Certain applications may improve performance by changing the buffer size */", "#define BUFFER_INITIAL 2048", "/* You may wish to change _L_PAREN or _R_PAREN */", "#define _L_PAREN '('", "#define _R_PAREN ')'", "", "/* The following are simple heuristics for rendering terminals */", "/* You may wish to change them */", "void renderCC(Char c);", "void renderCS(String s);", "void indent(void);", "void backup(void);", "void onEmptyLine(void);", "void removeTrailingSpaces(void);", "void removeTrailingWhitespace(void);", "" ] footer = unlines $ ["void pp" ++ t ++ "(String s, int i);" | t <- tokenNames cf] ++ ["void sh" ++ t ++ "(String s);" | t <- tokenNames cf] ++ [ "void ppInteger(Integer n, int i);", "void ppDouble(Double d, int i);", "void ppChar(Char c, int i);", "void ppString(String s, int i);", "void ppIdent(String s, int i);", "void shInteger(Integer n);", "void shDouble(Double d);", "void shChar(Char c);", "void shString(String s);", "void shIdent(String s);", "void bufEscapeS(const char *s);", "void bufEscapeC(const char c);", "void bufAppendS(const char *s);", "void bufAppendC(const char c);", "void bufReset(void);", "void resizeBuffer(void);", "", "#endif" ] -- | Prints all the required method names and their parameters. 
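-- A hedged sketch of what the two prototype printers below emit, assuming a
-- plain grammar category named Exp (identCat/normCat leave such a name
-- unchanged).  Illustrative only, not a doctest:
--
--   prPrintDataH (Cat "Exp", [])  ~~>  "void ppExp(Exp p, int i);\n"
--   prShowDataH  (Cat "Exp", [])  ~~>  "void shExp(Exp p);\n"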
prPrintDataH :: (Cat, [Rule]) -> String prPrintDataH (cat, _) = concat ["void pp", cl, "(", cl, " p, int i);\n"] where cl = identCat (normCat cat) -- | Prints all the required method names and their parameters. prShowDataH :: (Cat, [Rule]) -> String prShowDataH (cat, _) = concat ["void sh", cl, "(", cl, " p);\n"] where cl = identCat (normCat cat) {- **** Implementation (.C) File Methods **** -} -- | This makes the .C file by a similar method. mkCFile :: CF -> [(Cat,[Rule])] -> String mkCFile cf groups = concat [ header, prRender, concatMap prPrintFun eps, concatMap prShowFun eps, concatMap prPrintData groups, printBasics, printTokens, concatMap prShowData groups, showBasics, showTokens, footer ] where eps = nub . map normCat . toList $ allEntryPoints cf header = unlines [ "/*** Pretty Printer and Abstract Syntax Viewer ***/", "", "#include /* isspace */", "#include /* size_t */", "#include ", "#include ", "#include ", "#include \"Printer.h\"", "", "#define INDENT_WIDTH 2", "", "int _n_;", "char *buf_;", "size_t cur_;", "size_t buf_size;", "" ] printBasics = unlines [ "void ppInteger(Integer n, int i)", "{", -- https://stackoverflow.com/questions/10536207/ansi-c-maximum-number-of-characters-printing-a-decimal-int -- A buffer of 20 characters is sufficient to print the decimal representation -- of a 64bit integer. Might not be needed here, but does not hurt. " char tmp[20];", " sprintf(tmp, \"%d\", n);", " renderS(tmp);", "}", "void ppDouble(Double d, int i)", "{", -- https://stackoverflow.com/questions/1701055/what-is-the-maximum-length-in-chars-needed-to-represent-any-double-value -- Recommended buffer size is 24 for doubles (IEEE-754): -- (*) 17 digits for the decimal representation of the integral part -- (*) 5 digits for the exponent " char tmp[24];", " sprintf(tmp, \"%.15g\", d);", " renderS(tmp);", "}", "void ppChar(Char c, int i)", "{", " bufAppendC('\\'');", " bufEscapeC(c);", " bufAppendC('\\'');", " bufAppendC(' ');", "}", "void ppString(String s, int i)", "{", " bufAppendC('\\\"');", " bufEscapeS(s);", " bufAppendC('\\\"');", " bufAppendC(' ');", "}", "void ppIdent(String s, int i)", "{", " renderS(s);", "}", "" ] printTokens = unlines [unlines [ "void pp" ++ t ++ "(String s, int i)", "{", " renderS(s);", "}", "" ] | t <- tokenNames cf ] showBasics = unlines [ "void shInteger(Integer i)", "{", " char tmp[20];", " sprintf(tmp, \"%d\", i);", " bufAppendS(tmp);", "}", "void shDouble(Double d)", "{", " char tmp[24];", " sprintf(tmp, \"%.15g\", d);", " bufAppendS(tmp);", "}", "void shChar(Char c)", "{", " bufAppendC('\\'');", " bufEscapeC(c);", " bufAppendC('\\'');", "}", "void shString(String s)", "{", " bufAppendC('\\\"');", " bufEscapeS(s);", " bufAppendC('\\\"');", "}", "void shIdent(String s)", "{", " bufAppendC('\\\"');", " bufAppendS(s);", " bufAppendC('\\\"');", "}", "" ] showTokens = unlines [unlines [ "void sh" ++ t ++ "(String s)", "{", " bufAppendC('\\\"');", " bufEscapeS(s);", " bufAppendC('\\\"');", "}", "" ] | t <- tokenNames cf ] footer = unlines [ "void bufEscapeS(const char *s)", "{", " if (s) while (*s) bufEscapeC(*s++);", "}", "void bufEscapeC(const char c)", "{", " switch(c)", " {", " case '\\f': bufAppendS(\"\\\\f\" ); break;", " case '\\n': bufAppendS(\"\\\\n\" ); break;", " case '\\r': bufAppendS(\"\\\\r\" ); break;", " case '\\t': bufAppendS(\"\\\\t\" ); break;", " case '\\v': bufAppendS(\"\\\\v\" ); break;", " case '\\\\': bufAppendS(\"\\\\\\\\\"); break;", " case '\\'': bufAppendS(\"\\\\'\" ); break;", " case '\\\"': bufAppendS(\"\\\\\\\"\"); break;", " 
default: bufAppendC(c);", " }", "}", "", "void bufAppendS(const char *s)", "{", " size_t len = strlen(s);", " size_t n;", " while (cur_ + len >= buf_size)", " {", " buf_size *= 2; /* Double the buffer size */", " resizeBuffer();", " }", " for(n = 0; n < len; n++)", " {", " buf_[cur_ + n] = s[n];", " }", " cur_ += len;", " buf_[cur_] = 0;", "}", "void bufAppendC(const char c)", "{", " if (cur_ + 1 >= buf_size)", " {", " buf_size *= 2; /* Double the buffer size */", " resizeBuffer();", " }", " buf_[cur_] = c;", " cur_++;", " buf_[cur_] = 0;", "}", "void bufReset(void)", "{", " cur_ = 0;", " buf_size = BUFFER_INITIAL;", " resizeBuffer();", " memset(buf_, 0, buf_size);", "}", "void resizeBuffer(void)", "{", " char *temp = (char *) malloc(buf_size);", " if (!temp)", " {", " fprintf(stderr, \"Error: Out of memory while attempting to grow buffer!\\n\");", " exit(1);", " }", " if (buf_)", " {", " strncpy(temp, buf_, buf_size); /* peteg: strlcpy is safer, but not POSIX/ISO C. */", " free(buf_);", " }", " buf_ = temp;", "}", "char *buf_;", "size_t cur_, buf_size;", "" ] {- **** Pretty Printer Methods **** -} -- | An entry point to the printer. prPrintFun :: Cat -> String prPrintFun ep | normCat ep == ep = unlines [ "char *print" ++ ep' ++ "(" ++ ep' ++ " p)", "{", " _n_ = 0;", " bufReset();", " pp" ++ ep' ++ "(p, 0);", " return buf_;", "}" ] where ep' = identCat ep prPrintFun _ = "" -- Generates methods for the Pretty Printer prPrintData :: (Cat, [Rule]) -> String prPrintData (cat, rules) | isList cat = unlines $ concat [ [ "void pp" ++ cl ++ "("++ cl +++ vname ++ ", int i)" , "{" , " if (" ++ vname +++ "== 0)" , " { /* nil */" ] , unlessNull (swRules isNilFun) $ \ docs -> [ render $ nest 4 $ vcat docs ] , [ " }" ] , unlessNull (swRules isOneFun) $ \ docs -> [ " else if (" ++ pre ++ vname ++ "_ == 0)" , " { /* last */" , render $ nest 4 $ vcat docs , " }" ] , unlessNull (swRules isConsFun) $ \ docs -> [ " else" , " { /* cons */" , render $ nest 4 $ vcat docs , " }" ] , [ "}" , "" ] ] | otherwise = unlines $ concat [ [ "void pp" ++ cl ++ "(" ++ cl ++ " p, int _i_)" , "{" , " switch(p->kind)" , " {" ] , concatMap prPrintRule rules , [ " default:" , " fprintf(stderr, \"Error: bad kind field when printing " ++ catToStr cat ++ "!\\n\");" , " exit(1);" , " }" , "}" , "" ] ] where cl = identCat (normCat cat) vname = map toLower cl pre = vname ++ "->" prules = sortRulesByPrecedence rules swRules f = switchByPrecedence "i" $ map (second $ sep . map text . prPrintRule_ pre) $ uniqOn fst $ filter f prules -- Discard duplicates, can only handle one rule per precedence. -- | Helper function that call the right c function (renderC or renderS) to -- render a literal string. -- -- >>> renderX "," -- renderC(',') -- -- >>> renderX "---" -- renderS("---") renderX :: String -> Doc renderX sep' = "render" <> char sc <> parens (text sep) where (sc, sep) = renderCharOrString sep' -- | Pretty Printer methods for a rule. prPrintRule :: Rule -> [String] prPrintRule r@(Rule fun _ _ _) = unless (isCoercion fun) $ concat [ [ " case is_" ++ fnm ++ ":" , " if (_i_ > " ++ show p ++ ") renderC(_L_PAREN);" ] , map (" " ++) $ prPrintRule_ pre r , [ " if (_i_ > " ++ show p ++ ") renderC(_R_PAREN);" , " break;" , "" ] ] where p = precRule r fnm = funName fun pre = concat [ "p->u.", map toLower fnm, "_." ] -- | Only render the rhs (items) of a rule. prPrintRule_ :: IsFun a => String -> Rul a -> [String] prPrintRule_ pre (Rule _ _ items _) = map (prPrintItem pre) $ numVars items -- | This goes on to recurse to the instance variables. 
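-- A hedged sketch of prPrintItem's output (illustrative only, not a doctest).
-- Assuming a constructor EAdd with two Exp children, so that the member
-- prefix is "p->u.eadd_." and the numbered variable is exp_1:
--
--   prPrintItem "p->u.eadd_." (Right "+")                  ~~>  "renderC('+');"
--   prPrintItem "p->u.eadd_." (Left (Cat "Exp", "exp_1"))  ~~>  "ppExp(p->u.eadd_.exp_1, 0);"
--
-- (precCat of an uncoerced category is 0, hence the trailing argument.)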
prPrintItem :: String -> Either (Cat, Doc) String -> String prPrintItem pre = \case Right t -> render (renderX t) ++ ";" Left (cat, nt) -> concat [ "pp" , maybe (identCat $ normCat cat) basicFunName $ maybeTokenCat cat , "(", pre, render nt, ", ", show (precCat cat), ");" ] {- **** Abstract Syntax Tree Printer **** -} -- | An entry point to the printer. prShowFun :: Cat -> String prShowFun ep | normCat ep == ep = unlines [ "char *show" ++ ep' ++ "(" ++ ep' ++ " p)", "{", " _n_ = 0;", " bufReset();", " sh" ++ ep' ++ "(p);", " return buf_;", "}" ] where ep' = identCat ep prShowFun _ = "" -- | This prints the functions for Abstract Syntax tree printing. prShowData :: (Cat, [Rule]) -> String prShowData (cat, rules) = unlines $ if isList cat then [ "void sh" ++ cl ++ "("++ cl +++ vname ++ ")", "{", " bufAppendC('[');", " while(" ++ vname +++ "!= 0)", " {", " if (" ++ vname ++ "->" ++ vname ++ "_)", " {", visitMember, " bufAppendS(\", \");", " " ++ vname +++ "=" +++ vname ++ "->" ++ vname ++ "_;", " }", " else", " {", visitMember, " " ++ vname ++ " = 0;", " }", " }", " bufAppendC(']');", "}", "" ] -- Not a list: else [ "void sh" ++ cl ++ "(" ++ cl ++ " p)", "{", " switch(p->kind)", " {", concatMap prShowRule rules, " default:", " fprintf(stderr, \"Error: bad kind field when showing " ++ catToStr cat ++ "!\\n\");", " exit(1);", " }", "}\n" ] where cl = identCat (normCat cat) ecl = identCat (normCatOfList cat) vname = map toLower cl member = map toLower ecl visitMember = " sh" ++ ecl ++ "(" ++ vname ++ "->" ++ member ++ "_);" -- | Pretty Printer methods for a rule. prShowRule :: Rule -> String prShowRule (Rule fun _ cats _) | not (isCoercion fun) = unlines [ " case is_" ++ f ++ ":", " " ++ lparen, " bufAppendS(\"" ++ f ++ "\");\n", " " ++ optspace, cats', " " ++ rparen, " break;" ] where f = funName fun (optspace, lparen, rparen) = if allTerms cats then ("","","") else (" bufAppendC(' ');\n", " bufAppendC('(');\n"," bufAppendC(')');\n") cats' = if allTerms cats then "" else concat (insertSpaces (map (prShowCat f) (lefts $ numVars cats))) insertSpaces [] = [] insertSpaces (x:[]) = [x] insertSpaces (x:xs) = if x == "" then insertSpaces xs else x : " bufAppendC(' ');\n" : insertSpaces xs allTerms [] = True allTerms (Left _:_) = False allTerms (_:zs) = allTerms zs prShowRule _ = "" prShowCat :: Fun -> (Cat, Doc) -> String prShowCat fnm (cat, nt) = concat [ " sh" , maybe (identCat $ normCat cat) basicFunName $ maybeTokenCat cat , "(p->u." , map toLower fnm , "_." , render nt , ");\n" ] {- **** Helper Functions Section **** -} -- | The visit-function name of a basic type. basicFunName :: TokenCat -> String basicFunName k | k `elem` baseTokenCatNames = k | otherwise = "Ident" -- | An extremely simple @renderC@ for terminals. 
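-- The block emitted by prRender below is fixed C shared by every generated
-- printer: renderC implements the layout heuristics (an opening '{' starts an
-- indented block, '}' dedents, ';' and ',' drop the whitespace before them),
-- renderS appends a word followed by a space, and indent/backup/onEmptyLine/
-- removeTrailing* manage the output buffer.  A rough, hedged sketch of the
-- effect of rendering the tokens "{", "x", ";", "}" in sequence (exact
-- spacing depends on INDENT_WIDTH and the buffer state):
--
--   {
--     x;
--   }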
prRender :: String prRender = unlines $ concat [ [ "/* You may wish to change the renderC functions */", "void renderC(Char c)", "{", " if (c == '{')", " {", " onEmptyLine();", " bufAppendC(c);", " _n_ = _n_ + INDENT_WIDTH;", " bufAppendC('\\n');", " indent();", " }", " else if (c == '(' || c == '[')", " bufAppendC(c);", " else if (c == ')' || c == ']')", " {", " removeTrailingWhitespace();", " bufAppendC(c);", " bufAppendC(' ');", " }", " else if (c == '}')", " {", " _n_ = _n_ - INDENT_WIDTH;", " onEmptyLine();", " bufAppendC(c);", " bufAppendC('\\n\');", " indent();", " }", " else if (c == ',')", " {", " removeTrailingWhitespace();", " bufAppendC(c);", " bufAppendC(' ');", " }", " else if (c == ';')", " {", " removeTrailingWhitespace();", " bufAppendC(c);", " bufAppendC('\\n');", " indent();", " }", " else if (c == ' ') bufAppendC(c);", " else if (c == 0) return;", " else", " {", " bufAppendC(c);", " bufAppendC(' ');", " }", "}", "", "int allIsSpace(String s)", "{", " char c;", " while ((c = *s++))", " if (! isspace(c)) return 0;", " return 1;", "}", "", "void renderS(String s)", "{", " if (*s) /* s[0] != '\\0', string s not empty */", " {", " if (allIsSpace(s)) {", " backup();", " bufAppendS(s);", " } else {", " bufAppendS(s);", " bufAppendC(' ');", " }", " }", "}", "", "void indent(void)", "{", " int n = _n_;", " while (--n >= 0)", " bufAppendC(' ');", "}", "", "void backup(void)", "{", " if (cur_ && buf_[cur_ - 1] == ' ')", " buf_[--cur_] = 0;", "}", "" ] , [ "void removeTrailingSpaces()" , "{" , " while (cur_ && buf_[cur_ - 1] == ' ') --cur_;" , " buf_[cur_] = 0;" , "}" , "" , "void removeTrailingWhitespace()" , "{" , " while (cur_ && (buf_[cur_ - 1] == ' ' || buf_[cur_ - 1] == '\\n')) --cur_;" , " buf_[cur_] = 0;" , "}" , "" , "void onEmptyLine()" , "{" , " removeTrailingSpaces();" , " if (cur_ && buf_[cur_ - 1 ] != '\\n') bufAppendC('\\n');" , " indent();" , "}" ] ] BNFC-2.9.5/src/BNFC/Backend/C/CFtoCSkel.hs0000644000000000000000000001366507346545000015564 0ustar0000000000000000{-# LANGUAGE OverloadedStrings #-} {- BNF Converter: C Skeleton generator Copyright (C) 2004 Author: Michael Pellauer Description : This module generates the C Skeleton functions. The generated files follow Appel's case method. Author : Michael Pellauer Created : 9 August, 2003 -} module BNFC.Backend.C.CFtoCSkel (cf2CSkel) where import Prelude hiding ((<>)) import BNFC.CF import BNFC.Utils ( (+++), capitalize ) import BNFC.Backend.Common.NamedVariables import Data.Char ( toLower ) import Data.Either ( lefts ) import Text.PrettyPrint --Produces (.H file, .C file) cf2CSkel :: CF -> (String, String) cf2CSkel cf = (mkHFile cf groups, mkCFile cf groups) where groups = fixCoercions (ruleGroupsInternals cf) {- **** Header (.H) File Functions **** -} --Generates the Header File mkHFile :: CF -> [(Cat,[Rule])] -> String mkHFile cf groups = unlines [ header, concatMap prDataH groups, concatMap prUserH user, footer ] where user = map fst $ tokenPragmas cf header = unlines [ "#ifndef SKELETON_HEADER", "#define SKELETON_HEADER", "/* You might want to change the above name. 
*/", "", "#include \"Absyn.h\"", "" ] prUserH u = "void visit" ++ basicFunNameS u ++ "(" ++ u ++ " p);" footer = unlines [ "void visitIdent(Ident i);", "void visitInteger(Integer i);", "void visitDouble(Double d);", "void visitChar(Char c);", "void visitString(String s);", "", "#endif" ] --Prints out visit functions for a category prDataH :: (Cat, [Rule]) -> String prDataH (cat, _rules) = if isList cat then concat ["void visit", cl, "(", cl, " p);\n"] else "void visit" ++ cl ++ "(" ++ cl ++ " p);\n" where cl = identCat $ normCat cat {- **** Implementation (.C) File Functions **** -} -- | Makes the skeleton's .c File mkCFile :: CF -> [(Cat,[Rule])] -> String mkCFile cf groups = concat [ header , concatMap prData groups , concatMap prUser user , footer ] where user = map fst $ tokenPragmas cf header = unlines [ "/*** Visitor Traversal Skeleton. ***/", "/* This traverses the abstract syntax tree.", " To use, copy Skeleton.h and Skeleton.c to", " new files. */", "", "#include ", "#include ", "", "#include \"Skeleton.h\"", "" ] prUser u = unlines [ "void visit" ++ basicFunNameS u ++ "(" ++ u ++ " p)", "{", " /* Code for " ++ u ++ " Goes Here */", "}" ] footer = unlines [ "void visitIdent(Ident i)", "{", " /* Code for Ident Goes Here */", "}", "void visitInteger(Integer i)", "{", " /* Code for Integer Goes Here */", "}", "void visitDouble(Double d)", "{", " /* Code for Double Goes Here */", "}", "void visitChar(Char c)", "{", " /* Code for Char Goes Here */", "}", "void visitString(String s)", "{", " /* Code for String Goes Here */", "}", "" ] --Visit functions for a category. prData :: (Cat, [Rule]) -> String prData (cat, rules) | isList cat = unlines [ "void visit" ++ cl ++ "("++ cl +++ vname ++ ")", "{", " while(" ++ vname +++ " != 0)", " {", " /* Code For " ++ cl ++ " Goes Here */", " visit" ++ ecl ++ "(" ++ vname ++ "->" ++ member ++ "_);", " " ++ vname +++ "=" +++ vname ++ "->" ++ vname ++ "_;", " }", "}", "" ] -- Not a list: | otherwise = unlines [ "void visit" ++ cl ++ "(" ++ cl ++ " p)", "{", " switch(p->kind)", " {", concatMap (render . prPrintRule) rules, " default:", " fprintf(stderr, \"Error: bad kind field when printing " ++ cl ++ "!\\n\");", " exit(1);", " }", "}\n" ] where cl = identCat $ normCat cat ecl = identCat $ normCatOfList cat vname = map toLower cl member = map toLower ecl -- | Visits all the instance variables of a category. -- >>> let ab = Cat "Ab" -- >>> prPrintRule (Rule "Abc" undefined [Left ab, Left ab] Parsable) -- case is_Abc: -- /* Code for Abc Goes Here */ -- visitAb(p->u.abc_.ab_1); -- visitAb(p->u.abc_.ab_2); -- break; -- -- >>> let ab = TokenCat "Ab" -- >>> prPrintRule (Rule "Abc" undefined [Left ab] Parsable) -- case is_Abc: -- /* Code for Abc Goes Here */ -- visitAb(p->u.abc_.ab_); -- break; -- -- >>> prPrintRule (Rule "Abc" undefined [Left ab, Left ab] Parsable) -- case is_Abc: -- /* Code for Abc Goes Here */ -- visitAb(p->u.abc_.ab_1); -- visitAb(p->u.abc_.ab_2); -- break; -- prPrintRule :: Rule -> Doc prPrintRule (Rule f _c cats _) | isCoercion f = empty | isDefinedRule f = empty | otherwise = nest 2 $ vcat [ text $ "case is_" ++ fun ++ ":" , nest 2 (vcat [ "/* Code for " <> text fun <> " Goes Here */" , cats' , "break;\n" ]) ] where fun = funName f cats' = vcat $ map (prCat fun) (lefts (numVars cats)) -- Prints the actual instance-variable visiting. prCat :: Fun -> (Cat, Doc) -> Doc prCat fnm (cat, vname) = let visitf = "visit" <> if isTokenCat cat then basicFunName cat else text (identCat (normCat cat)) in visitf <> parens ("p->u." <> text v <> "_." 
<> vname ) <> ";" where v = map toLower fnm -- | The visit-function name of a basic type basicFunName :: Cat -> Doc basicFunName = text . basicFunNameS . catToStr basicFunNameS :: String -> String basicFunNameS = capitalize BNFC-2.9.5/src/BNFC/Backend/C/CFtoFlexC.hs0000644000000000000000000003317707346545000015564 0ustar0000000000000000{-# LANGUAGE LambdaCase #-} {-# LANGUAGE NoImplicitPrelude #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE TupleSections #-} {- BNF Converter: C flex generator Copyright (C) 2004 Author: Michael Pellauer Copyright (C) 2020 Andreas Abel Description : This module generates the Flex file. It is similar to JLex but with a few peculiarities. Author : Michael Pellauer Created : 5 August, 2003 -} module BNFC.Backend.C.CFtoFlexC ( cf2flex , ParserMode(..), parserName, parserPackage, cParser, stlParser, parserHExt , preludeForBuffer -- C code defining a buffer for lexing string literals. , cMacros -- Lexer definitions. , commentStates -- Stream of names for lexer states for comments. , lexComments -- Lexing rules for comments. , lexStrings -- Lexing rules for string literals. , lexChars -- Lexing rules for character literals. ) where import Prelude hiding ( (<>) ) import Data.Bifunctor ( first ) import Data.Char ( isAlphaNum, isAscii ) import Data.List ( isInfixOf ) import Data.Maybe ( fromMaybe, maybeToList ) import qualified Data.Map as Map import System.FilePath ( (<.>) ) import BNFC.CF import BNFC.Backend.C.Common ( posixC ) import BNFC.Backend.C.RegToFlex import BNFC.Backend.Common.NamedVariables import BNFC.Options ( InPackage ) import BNFC.PrettyPrint import BNFC.Utils ( cstring, symbolToName, unless, when ) data ParserMode = CParser Bool String -- ^ @C@ (@False@) or @C++ no STL@ (@True@) mode, with @name@ to use as prefix. | CppParser InPackage String -- ^ @C++@ mode, with optional package name parserName :: ParserMode -> String parserName = \case CParser _ n -> n CppParser p n -> fromMaybe n p parserPackage :: ParserMode -> InPackage parserPackage = \case CParser _ _ -> Nothing CppParser p _ -> p cParser :: ParserMode -> Bool cParser = \case CParser b _ -> not b CppParser _ _ -> False stlParser :: ParserMode -> Bool stlParser = \case CParser _ _ -> False CppParser _ _ -> True parserHExt :: ParserMode -> String parserHExt = \case CParser b _ -> if b then "H" else "h" CppParser _ _ -> "H" -- | Entrypoint. cf2flex :: ParserMode -> CF -> (String, SymMap) -- The environment is reused by the parser. cf2flex mode cf = (, env) $ unlines [ prelude stringLiterals mode , cMacros cf , lexSymbols env1 , restOfFlex (parserPackage mode) cf env , footer -- mode ] where env = Map.fromList env2 env0 = makeSymEnv (cfgSymbols cf) [0 :: Int ..] env1 = env0 ++ makeKwEnv (reservedWords cf) [length env0 ..] 
env2 = map (first Keyword) env1 ++ map (\ x -> (Tokentype x, "T_" ++ x)) (tokenNames cf) makeSymEnv = zipWith $ \ s n -> (s, '_' : fromMaybe ("SYMB_" ++ show n) (symbolToName s)) makeKwEnv = zipWith $ \ s n -> (s, "_KW_" ++ if all (\ c -> isAlphaNum c && isAscii c) s then s else show n) stringLiterals = isUsedCat cf (TokenCat catString) prelude :: Bool -> ParserMode -> String prelude stringLiterals mode = unlines $ concat [ [ "/* Lexer definition for use with FLex */" , "" -- noinput and nounput are most often unused -- https://stackoverflow.com/questions/39075510/option-noinput-nounput-what-are-they-for , "%option noyywrap noinput nounput" , "%option reentrant bison-bridge bison-locations" , "" ] , when stringLiterals [ "/* Additional data for the lexer: a buffer for lexing string literals. */" , "%option extra-type=\"Buffer\"" , "" ] , maybeToList $ ("%option prefix=\"" ++) . (++ "\"" ) <$> parserPackage mode , when (cParser mode) $ concat -- The following #define needs to be at the top before the automatic #include [ [ "%top{" ] , posixC , [ "}" ] ] , [ "%{" , "#include \"" ++ ("Absyn" <.> h) ++ "\"" , "#include \"" ++ ("Bison" <.> h) ++ "\"" , "" ] , [ "#define initialize_lexer " ++ parserName mode ++ "_initialize_lexer" , "" ] , when stringLiterals $ preludeForBuffer $ "Buffer" <.> h -- https://www.gnu.org/software/bison/manual/html_node/Token-Locations.html -- Flex is responsible for keeping tracking of the yylloc for Bison. -- Flex also doesn't do this automatically so we need this function -- https://stackoverflow.com/a/22125500/425756 , [ "static void update_loc(YYLTYPE* loc, char* text)" , "{" , " loc->first_line = loc->last_line;" , " loc->first_column = loc->last_column;" , " int i = 0;" -- put this here as @for (int i...)@ is only allowed in C99 , " for (; text[i] != '\\0'; ++i) {" , " if (text[i] == '\\n') {" -- Checking for \n is good enough to also support \r\n (but not \r) , " ++loc->last_line;" , " loc->last_column = 0; " , " } else {" , " ++loc->last_column; " , " }" , " }" , "}" , "#define YY_USER_ACTION update_loc(yylloc, yytext);" , "" , "%}" ] ] where h = parserHExt mode -- | Part of the lexer prelude needed when string literals are to be lexed. -- Defines an interface to the Buffer. preludeForBuffer :: String -> [String] preludeForBuffer bufferH = [ "/* BEGIN extensible string buffer */" , "" , "#include \"" ++ bufferH ++ "\"" , "" , "/* The initial size of the buffer to lex string literals. */" , "#define LITERAL_BUFFER_INITIAL_SIZE 1024" , "" , "/* The pointer to the literal buffer. */" , "#define literal_buffer yyextra" , "" , "/* Initialize the literal buffer. */" , "#define LITERAL_BUFFER_CREATE() literal_buffer = newBuffer(LITERAL_BUFFER_INITIAL_SIZE)" , "" , "/* Append characters at the end of the buffer. */" , "#define LITERAL_BUFFER_APPEND(s) bufferAppendString(literal_buffer, s)" , "" , "/* Append a character at the end of the buffer. */" , "#define LITERAL_BUFFER_APPEND_CHAR(c) bufferAppendChar(literal_buffer, c)" , "" , "/* Release the buffer, returning a pointer to its content. */" , "#define LITERAL_BUFFER_HARVEST() releaseBuffer(literal_buffer)" , "" , "/* In exceptional cases, e.g. when reaching EOF, we have to free the buffer. */" , "#define LITERAL_BUFFER_FREE() freeBuffer(literal_buffer)" , "" , "/* END extensible string buffer */" , "" ] -- For now all categories are included. -- Optimally only the ones that are used should be generated. 
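-- A hedged sketch of the definitions section that cMacros (below) emits for a
-- grammar with a single block-comment form; illustrative only — the %START
-- line grows one COMMENTn state per additional block-comment form:
--
--   LETTER [a-zA-Z]
--   CAPITAL [A-Z]
--   SMALL [a-z]
--   DIGIT [0-9]
--   IDENT [a-zA-Z0-9'_]
--   %START CHAR CHARESC CHAREND STRING ESCAPED COMMENT
--
--   %% /* Rules. */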
cMacros :: CF -> String cMacros cf = unlines [ "LETTER [a-zA-Z]" , "CAPITAL [A-Z]" , "SMALL [a-z]" , "DIGIT [0-9]" , "IDENT [a-zA-Z0-9'_]" , unwords $ concat [ [ "%START CHAR CHARESC CHAREND STRING ESCAPED" ] , take (numberOfBlockCommentForms cf) commentStates ] , "" , "%% /* Rules. */" ] lexSymbols :: KeywordEnv -> String lexSymbols ss = concatMap transSym ss where transSym (s,r) = "\"" ++ s' ++ "\" \t return " ++ r ++ ";\n" where s' = escapeChars s restOfFlex :: InPackage -> CF -> SymMap -> String restOfFlex _inPackage cf env = unlines $ concat [ [ render $ lexComments $ comments cf , "" ] , userDefTokens , ifC catString $ lexStrings "yylval" "_STRING_" "_ERROR_" , ifC catChar $ lexChars "yylval" "_CHAR_" , ifC catDouble [ "{DIGIT}+\".\"{DIGIT}+(\"e\"(\\-)?{DIGIT}+)? \t yylval->_double = atof(yytext); return _DOUBLE_;" ] , ifC catInteger [ "{DIGIT}+ \t yylval->_int = atoi(yytext); return _INTEGER_;" ] , ifC catIdent [ "{LETTER}{IDENT}* \t yylval->_string = strdup(yytext); return _IDENT_;" ] , [ "[ \\t\\r\\n\\f] \t /* ignore white space. */;" , ". \t return _ERROR_;" , "" , "%% /* Initialization code. */" ] ] where ifC cat s = if isUsedCat cf (TokenCat cat) then s else [] userDefTokens = [ "" ++ printRegFlex exp ++ " \t yylval->_string = strdup(yytext); return " ++ sName name ++ ";" | (name, exp) <- tokenPragmas cf ] where sName n = fromMaybe n $ Map.lookup (Tokentype n) env footer :: String footer = unlines [ "yyscan_t initialize_lexer(FILE *inp)" , "{" , " yyscan_t scanner;" , " if (yylex_init_extra(NULL, &scanner)) return 0;" , " if (inp) yyrestart(inp, scanner);" , " return scanner;" , "}" ] -- | Lexing of strings, converting escaped characters. lexStrings :: String -> String -> String -> [String] lexStrings yylval stringToken errorToken = [ "\"\\\"\" \t LITERAL_BUFFER_CREATE(); BEGIN STRING;" , "\\\\ \t BEGIN ESCAPED;" , "\\\" \t " ++ yylval ++ "->_string = LITERAL_BUFFER_HARVEST(); BEGIN INITIAL; return " ++ stringToken ++ ";" , ". \t LITERAL_BUFFER_APPEND_CHAR(yytext[0]);" , "f \t LITERAL_BUFFER_APPEND_CHAR('\\f'); BEGIN STRING;" , "n \t LITERAL_BUFFER_APPEND_CHAR('\\n'); BEGIN STRING;" , "r \t LITERAL_BUFFER_APPEND_CHAR('\\r'); BEGIN STRING;" , "t \t LITERAL_BUFFER_APPEND_CHAR('\\t'); BEGIN STRING;" , "\\\" \t LITERAL_BUFFER_APPEND_CHAR('\"'); BEGIN STRING;" , "\\\\ \t LITERAL_BUFFER_APPEND_CHAR('\\\\'); BEGIN STRING;" , ". \t LITERAL_BUFFER_APPEND(yytext); BEGIN STRING;" , "<>\t LITERAL_BUFFER_FREE(); return " ++ errorToken ++ ";" ] -- | Lexing of characters, converting escaped characters. lexChars :: String -> String -> [String] lexChars yylval charToken = [ "\"'\" \tBEGIN CHAR;" , "\\\\ \t BEGIN CHARESC;" , "[^'] \t BEGIN CHAREND; " ++ yylval ++ "->_char = yytext[0]; return " ++ charToken ++ ";" , "f \t BEGIN CHAREND; " ++ yylval ++ "->_char = '\\f'; return " ++ charToken ++ ";" , "n \t BEGIN CHAREND; " ++ yylval ++ "->_char = '\\n'; return " ++ charToken ++ ";" , "r \t BEGIN CHAREND; " ++ yylval ++ "->_char = '\\r'; return " ++ charToken ++ ";" , "t \t BEGIN CHAREND; " ++ yylval ++ "->_char = '\\t'; return " ++ charToken ++ ";" , ". \t BEGIN CHAREND; " ++ yylval ++ "->_char = yytext[0]; return " ++ charToken ++ ";" , "\"'\" \t BEGIN INITIAL;" ] -- --------------------------------------------------------------------------- -- Comments -- | Create flex rules for single-line and multi-lines comments. -- The first argument is an optional namespace (for C++); the second -- argument is the set of comment delimiters as returned by BNFC.CF.comments. 
-- -- This function is only compiling the results of applying either -- lexSingleComment or lexMultiComment on each comment delimiter or pair of -- delimiters. -- -- >>> lexComments ([("{-","-}")],["--"]) -- "--"[^\n]* /* skip */; /* BNFC: comment "--" */ -- "{-" BEGIN COMMENT; /* BNFC: block comment "{-" "-}" */ -- "-}" BEGIN INITIAL; -- . /* skip */; -- [\n] /* skip */; lexComments :: ([(String, String)], [String]) -> Doc lexComments (m,s) = vcat $ concat [ map lexSingleComment s , zipWith lexMultiComment m commentStates ] -- | If we have several block comments, we need different COMMENT lexing states. commentStates :: [String] commentStates = map ("COMMENT" ++) $ "" : map show [1::Int ..] -- | Create a lexer rule for single-line comments. -- The first argument is -- an optional c++ namespace -- The second argument is the delimiter that marks the beginning of the -- comment. -- -- >>> lexSingleComment "--" -- "--"[^\n]* /* skip */; /* BNFC: comment "--" */ -- -- >>> lexSingleComment "\"" -- "\""[^\n]* /* skip */; /* BNFC: comment "\"" */ lexSingleComment :: String -> Doc lexSingleComment c = "" <> cstring c <> "[^\\n]*" <+> "/* skip */;" <+> unless (containsCCommentMarker c) ("/* BNFC: comment" <+> cstring c <+> "*/") containsCCommentMarker :: String -> Bool containsCCommentMarker s = "/*" `isInfixOf` s || "*/" `isInfixOf` s -- | Create a lexer rule for multi-lines comments. -- The first argument is -- an optional c++ namespace -- The second arguments is the pair of delimiter for the multi-lines comment: -- start deleminiter and end delimiter. -- There might be a possible bug here if a language includes 2 multi-line -- comments. They could possibly start a comment with one character and end it -- with another. However this seems rare. -- -- >>> lexMultiComment ("{-", "-}") "COMMENT" -- "{-" BEGIN COMMENT; /* BNFC: block comment "{-" "-}" */ -- "-}" BEGIN INITIAL; -- . /* skip */; -- [\n] /* skip */; -- -- >>> lexMultiComment ("\"'", "'\"") "COMMENT" -- "\"'" BEGIN COMMENT; /* BNFC: block comment "\"'" "'\"" */ -- "'\"" BEGIN INITIAL; -- . /* skip */; -- [\n] /* skip */; lexMultiComment :: (String, String) -> String -> Doc lexMultiComment (b,e) comment = vcat [ "" <> cstring b <+> "BEGIN" <+> text comment <> ";" <+> unless (containsCCommentMarker b || containsCCommentMarker e) ("/* BNFC: block comment" <+> cstring b <+> cstring e <+> "*/") , commentTag <> cstring e <+> "BEGIN INITIAL;" , commentTag <> ". /* skip */;" , commentTag <> "[\\n] /* skip */;" ] where commentTag = text $ "<" ++ comment ++ ">" -- | Helper function that escapes characters in strings. escapeChars :: String -> String escapeChars [] = [] escapeChars ('\\':xs) = '\\' : ('\\' : (escapeChars xs)) escapeChars ('\"':xs) = '\\' : ('\"' : (escapeChars xs)) escapeChars (x:xs) = x : (escapeChars xs) BNFC-2.9.5/src/BNFC/Backend/C/Common.hs0000644000000000000000000000076007346545000015227 0ustar0000000000000000-- | Common definitions for the modules of the C backend. module BNFC.Backend.C.Common ( posixC ) where import Prelude -- | Switch C to language variant that has @strdup@. posixC :: [String] posixC = [ "/* strdup was not in the ISO C standard before 6/2019 (C2x), but in POSIX 1003.1." , " * See: https://en.cppreference.com/w/c/experimental/dynamic/strdup" , " * Setting _POSIX_C_SOURCE to 200809L activates strdup in string.h." 
, " */" , "#define _POSIX_C_SOURCE 200809L" ] BNFC-2.9.5/src/BNFC/Backend/C/RegToFlex.hs0000644000000000000000000000566007346545000015642 0ustar0000000000000000{-# LANGUAGE LambdaCase #-} module BNFC.Backend.C.RegToFlex (printRegFlex) where -- modified from pretty-printer generated by the BNF converter import Data.Char (ord, showLitChar) import qualified Data.List as List import BNFC.Abs import BNFC.Backend.Common (flexEps) -- the top-level printing method printRegFlex :: Reg -> String printRegFlex = render . prt 0 -- you may want to change render and parenth render :: [String] -> String render = rend (0::Int) where rend i ss = case ss of "[" :ts -> cons "[" $ rend i ts "(" :ts -> cons "(" $ rend i ts t : "," :ts -> cons t $ space "," $ rend i ts t : ")" :ts -> cons t $ cons ")" $ rend i ts t : "]" :ts -> cons t $ cons "]" $ rend i ts t :ts -> space t $ rend i ts _ -> "" cons s t = s ++ t space t s = if null s then t else t ++ s parenth :: [String] -> [String] parenth ss = ["("] ++ ss ++ [")"] -- the printer class does the job class Print a where prt :: Int -> a -> [String] prPrec :: Int -> Int -> [String] -> [String] prPrec i j = if j prPrec i 2 (concat [prt 2 reg0 , prt 3 reg]) RAlt reg0 reg -> prPrec i 1 (concat [prt 1 reg0 , ["|"] , prt 2 reg]) -- Flex does not support set difference. See link for valid patterns. -- https://westes.github.io/flex/manual/Patterns.html#Patterns -- RMinus reg0 reg -> prPrec i 1 (concat [prt 2 reg0 , ["#"] , prt 2 reg]) RMinus reg0 REps -> prt i reg0 -- REps is identity for set difference RMinus RAny (RChar c) -> [ concat [ "[^", escapeChar c, "]" ] ] RMinus RAny (RAlts str) -> [ concat [ "[^", concatMap escapeChar str, "]" ] ] -- FIXME: unicode inside brackets [...] is not accepted by flex -- FIXME: maybe we could add cases for char - RDigit, RLetter etc. RMinus _ _ -> error "Flex does not support general set difference" RStar reg -> concat [ prt 3 reg , ["*"] ] RPlus reg -> concat [ prt 3 reg , ["+"] ] ROpt reg -> concat [ prt 3 reg , ["?"] ] REps -> [ flexEps ] RChar c -> [ escapeChar c ] -- Unicode characters cannot be inside [...] so we use | instead. RAlts str -> prPrec i 1 $ List.intersperse "|" $ map escapeChar str -- RAlts str -> concat [["["], prt 0 $ concatMap escapeChar str, ["]"]] RSeqs str -> prPrec i 2 $ map escapeChar str RDigit -> [ "{DIGIT}" ] RLetter -> [ "{LETTER}" ] RUpper -> [ "{CAPITAL}" ] RLower -> [ "{SMALL}" ] RAny -> [ "." ] -- | Handle special characters in regular expressions. escapeChar :: Char -> String escapeChar c | c `elem` reserved = '\\':[c] | let x = ord c, x >= 256 = [c] -- keep unicode characters -- "\x" ++ showHex x "" | otherwise = showLitChar c "" where reserved :: String reserved = " $+-*=<>[](){}!?.,;:^~|&%#/\\$_@\"" BNFC-2.9.5/src/BNFC/Backend/CPP/0000755000000000000000000000000007346545000013700 5ustar0000000000000000BNFC-2.9.5/src/BNFC/Backend/CPP/Common.hs0000644000000000000000000000445607346545000015475 0ustar0000000000000000{-# LANGUAGE LambdaCase #-} -- | Common to the C++ backends. module BNFC.Backend.CPP.Common where import Data.Char ( isUpper ) import Data.List ( intercalate ) import BNFC.CF import BNFC.TypeChecker import BNFC.Backend.C ( comment ) import BNFC.Backend.CPP.Naming -- | C++ line comment including mode hint for emacs. commentWithEmacsModeHint :: String -> String commentWithEmacsModeHint = comment . ("-*- c++ -*- " ++) -- | C++ code for the @define@d constructors. -- -- @definedRules Nothing@ only prints the header. 
definedRules :: Maybe ListConstructors -> CF -> String -> String definedRules mlc cf banner | null theLines = [] | otherwise = unlines $ banner : "" : theLines where theLines = map rule $ definitions cf ctx = buildContext cf rule (Define f args e t) = case mlc of Nothing -> header ++ ";" Just lc -> unlines [ header ++ " {" , " return " ++ cppExp lc (map fst args) e ++ ";" , "}" ] where header = cppType t ++ " " ++ sanitizeCpp (funName f) ++ "(" ++ intercalate ", " (map cppArg args) ++ ")" cppType :: Base -> String cppType (ListT (BaseT x)) = "List" ++ x ++ "*" cppType (ListT t) = cppType t ++ "*" cppType (BaseT x) | x `elem` baseTokenCatNames = x | isToken x ctx = "String" | otherwise = x ++ "*" cppArg :: (String, Base) -> String cppArg (x,t) = cppType t ++ " " ++ x ++ "_" cppExp :: ListConstructors -> [String] -> Exp -> String cppExp (LC nil cons) args = loop where loop = \case App "[]" (FunT [] (ListT t)) [] -> fst $ nil t App "(:)" (FunT _ (ListT t)) es -> call (fst $ cons t) es Var x -> x ++ "_" -- argument App t _ [e] | isToken t ctx -> loop e App x _ es | isUpper (head x) -> call ("new " ++ x) es | x `elem` args -> call (x ++ "_") es | otherwise -> call (sanitizeCpp x) es LitInt n -> show n LitDouble x -> show x LitChar c -> show c LitString s -> show s call x es = x ++ "(" ++ intercalate ", " (map loop es) ++ ")" BNFC-2.9.5/src/BNFC/Backend/CPP/Makefile.hs0000644000000000000000000000506207346545000015754 0ustar0000000000000000{-# LANGUAGE OverloadedStrings #-} module BNFC.Backend.CPP.Makefile (makefile) where import BNFC.Backend.Common.Makefile import BNFC.PrettyPrint makefile :: String -> String -> String -> Doc makefile prefix name basename = vcat [ mkVar "CC" "g++ -g" , mkVar "CCFLAGS" "--ansi -W -Wall -Wsign-conversion -Wno-unused-parameter -Wno-unused-function -Wno-unneeded-internal-declaration" , "" , mkVar "FLEX" "flex" , mkVar "FLEX_OPTS" ("-P" ++ prefix) , "" , mkVar "BISON" "bison" , mkVar "BISON_OPTS" ("-t -p" ++ prefix) , "" , mkVar "OBJS" "Absyn.o Buffer.o Lexer.o Parser.o Printer.o" , "" , mkRule ".PHONY" ["clean", "distclean"] [] , mkRule "all" [testName] [] , mkRule "clean" [] -- peteg: don't nuke what we generated - move that to the "vclean" target. 
[ "rm -f *.o " ++ testName ++ " " ++ unwords [ name ++ e | e <- [".aux", ".log", ".pdf",".dvi", ".ps", ""]] ] , mkRule "distclean" ["clean"] [ "rm -f " ++ unwords [ "Absyn.C", "Absyn.H" , "Buffer.C", "Buffer.H" , "Test.C" , "Bison.H", "Parser.C", "Parser.H", "ParserError.H", name ++ ".y" , "Lexer.C", name ++ ".l" , "Skeleton.C", "Skeleton.H" , "Printer.C", "Printer.H" , basename , name ++ ".tex" ] ] , mkRule testName [ "${OBJS}", "Test.o" ] [ "@echo \"Linking " ++ testName ++ "...\"" , "${CC} ${OBJS} Test.o -o " ++ testName ] , mkRule "Absyn.o" [ "Absyn.C", "Absyn.H" ] [ "${CC} ${CCFLAGS} -c Absyn.C" ] , mkRule "Buffer.o" [ "Buffer.C", "Buffer.H" ] [ "${CC} ${CCFLAGS} -c Buffer.C " ] , mkRule "Lexer.C" [ name ++ ".l" ] [ "${FLEX} ${FLEX_OPTS} -oLexer.C " ++ name ++ ".l" ] , mkRule "Parser.C Bison.H" [ name ++ ".y" ] [ "${BISON} ${BISON_OPTS} " ++ name ++ ".y -o Parser.C" ] , mkRule "Lexer.o" [ "CCFLAGS+=-Wno-sign-conversion" ] [] , mkRule "Lexer.o" [ "Lexer.C", "Bison.H" ] [ "${CC} ${CCFLAGS} -c Lexer.C " ] , mkRule "Parser.o" [ "Parser.C", "Absyn.H", "Bison.H" ] [ "${CC} ${CCFLAGS} -c Parser.C" ] , mkRule "Printer.o" [ "Printer.C", "Printer.H", "Absyn.H" ] [ "${CC} ${CCFLAGS} -c Printer.C" ] , mkRule "Skeleton.o" [ "Skeleton.C", "Skeleton.H", "Absyn.H" ] [ "${CC} ${CCFLAGS} -Wno-unused-parameter -c Skeleton.C" ] , mkRule "Test.o" [ "Test.C", "Parser.H", "Printer.H", "Absyn.H" ] [ "${CC} ${CCFLAGS} -c Test.C" ] ] where testName = "Test" ++ name BNFC-2.9.5/src/BNFC/Backend/CPP/Naming.hs0000644000000000000000000000221107346545000015441 0ustar0000000000000000module BNFC.Backend.CPP.Naming where import BNFC.Utils cReservedWords, cppReservedWords :: [String] cReservedWords = [ "auto", "const", "double", "float", "int", "short", "struct" , "unsigned", "break", "continue", "else", "for", "long", "signed" , "switch", "void", "case", "default", "enum", "goto", "register", "sizeof" , "typedef", "volatile", "char", "do", "extern", "if", "return", "static" , "union", "while" ] cppReservedWords = cReservedWords ++ [ "asm", "dynamic_cast", "namespace" , "reinterpret_cast" , "try", "bool", "explicit", "new", "static_cast" , "typeid", "catch" , "false", "operator", "template", "typename", "class" , "friend" , "private", "this", "using", "const_cast", "inline", "public" , "throw" , "virtual", "delete", "mutable", "protected", "true", "wchar_t" , "and", "bitand", "compl", "not_eq", "or_eq", "xor_eq", "and_eq", "bitor" , "not", "or", "xor" ] mkVariable :: String -> String mkVariable = mkName cppReservedWords SnakeCase sanitizeC :: String -> String sanitizeC = mkName cReservedWords OrigCase sanitizeCpp :: String -> String sanitizeCpp = mkName cppReservedWords OrigCase BNFC-2.9.5/src/BNFC/Backend/CPP/NoSTL.hs0000644000000000000000000001050707346545000015176 0ustar0000000000000000{- BNF Converter: C++ Main file Copyright (C) 2004 Author: Markus Forsberg, Michael Pellauer -} module BNFC.Backend.CPP.NoSTL (makeCppNoStl) where import Data.Foldable (toList) import BNFC.Utils import BNFC.CF import BNFC.Options import BNFC.Backend.Base import BNFC.Backend.C ( bufferH, bufferC, comment, testfileHeader ) import BNFC.Backend.C.CFtoBisonC ( cf2Bison ) import BNFC.Backend.C.CFtoFlexC ( cf2flex, ParserMode(..) 
) import BNFC.Backend.CPP.Common ( commentWithEmacsModeHint ) import BNFC.Backend.CPP.Makefile import BNFC.Backend.CPP.NoSTL.CFtoCPPAbs import BNFC.Backend.CPP.STL.CFtoCVisitSkelSTL import BNFC.Backend.CPP.PrettyPrinter import qualified BNFC.Backend.Common.Makefile as Makefile makeCppNoStl :: SharedOptions -> CF -> MkFiles () makeCppNoStl opts cf = do let (hfile, cfile) = cf2CPPAbs name cf mkCppFile "Absyn.H" hfile mkCppFile "Absyn.C" cfile mkCppFile "Buffer.H" bufferH mkCppFile "Buffer.C" $ bufferC "Buffer.H" let (flex, env) = cf2flex parserMode cf mkCppFileWithHint (name ++ ".l") flex mkCppFileWithHint (name ++ ".y") $ cf2Bison (linenumbers opts) parserMode cf env mkCppFile "Parser.H" $ mkHeaderFile (toList $ allEntryPoints cf) let (skelH, skelC) = cf2CVisitSkel False Nothing cf mkCppFile "Skeleton.H" skelH mkCppFile "Skeleton.C" skelC let (prinH, prinC) = cf2CPPPrinter False Nothing cf mkCppFile "Printer.H" prinH mkCppFile "Printer.C" prinC mkCppFile "Test.C" (cpptest cf) Makefile.mkMakefile (optMake opts) $ makefile prefix name where name :: String name = lang opts -- The prefix is a string used by flex and bison -- that is prepended to generated function names. -- It should be a valid C identifier. prefix :: String prefix = snakeCase_ name ++ "_" parserMode :: ParserMode parserMode = CParser True prefix mkCppFile x = mkfile x comment mkCppFileWithHint x = mkfile x commentWithEmacsModeHint cpptest :: CF -> String cpptest cf = unlines $ concat [ testfileHeader , [ "", "#include ", "#include ", "#include \"Parser.H\"", "#include \"Printer.H\"", "#include \"Absyn.H\"", "", "void usage() {", " printf(\"usage: Call with one of the following argument " ++ "combinations:\\n\");", " printf(\"\\t--help\\t\\tDisplay this help message.\\n\");", " printf(\"\\t(no arguments)\\tParse stdin verbosely.\\n\");", " printf(\"\\t(files)\\t\\tParse content of files verbosely.\\n\");", " printf(\"\\t-s (files)\\tSilent mode. Parse content of files " ++ "silently.\\n\");", "}", "", "int main(int argc, char ** argv)", "{", " FILE *input;", " int quiet = 0;", " char *filename = NULL;", "", " if (argc > 1) {", " if (strcmp(argv[1], \"-s\") == 0) {", " quiet = 1;", " if (argc > 2) {", " filename = argv[2];", " } else {", " input = stdin;", " }", " } else {", " filename = argv[1];", " }", " }", "", " if (filename) {", " input = fopen(filename, \"r\");", " if (!input) {", " usage();", " exit(1);", " }", " } else input = stdin;", " /* The default entry point is used. 
For other options see Parser.H */", " " ++ dat ++ " *parse_tree = p" ++ def ++ "(input);", " if (parse_tree)", " {", " printf(\"\\nParse Successful!\\n\");", " if (!quiet) {", " printf(\"\\n[Abstract Syntax]\\n\");", " ShowAbsyn *s = new ShowAbsyn();", " printf(\"%s\\n\\n\", s->show(parse_tree));", " printf(\"[Linearized Tree]\\n\");", " PrintAbsyn *p = new PrintAbsyn();", " printf(\"%s\\n\\n\", p->print(parse_tree));", " }", " delete(parse_tree);", " return 0;", " }", " return 1;", "}", "" ] ] where cat = firstEntry cf dat = identCat $ normCat cat def = identCat cat mkHeaderFile :: [Cat] -> String mkHeaderFile eps = unlines $ concat [ [ "#ifndef PARSER_HEADER_FILE" , "#define PARSER_HEADER_FILE" , "" , "#include " , "#include \"Absyn.H\"" , "" ] , map mkFunc eps , [ "" , "#endif" ] ] where mkFunc s = identCat (normCat s) ++ "*" +++ "p" ++ identCat s ++ "(FILE *inp);" BNFC-2.9.5/src/BNFC/Backend/CPP/NoSTL/0000755000000000000000000000000007346545000014637 5ustar0000000000000000BNFC-2.9.5/src/BNFC/Backend/CPP/NoSTL/CFtoCPPAbs.hs0000644000000000000000000003125307346545000017023 0ustar0000000000000000{-# LANGUAGE NoImplicitPrelude #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE TupleSections #-} {- BNF Converter: C++ abstract syntax generator Copyright (C) 2004 Author: Michael Pellauer Description : This module generates the C++ Abstract Syntax tree classes. It generates both a Header file and an Implementation file, and uses the Visitor design pattern. Author : Michael Pellauer Created : 4 August, 2003 Modified : 22 May, 2004 / Antti-Juhani Kaijanaho -} module BNFC.Backend.CPP.NoSTL.CFtoCPPAbs (cf2CPPAbs) where import Prelude hiding ((<>)) import Data.List ( findIndices ) import Data.Char ( toLower ) import Text.PrettyPrint import BNFC.CF import BNFC.TypeChecker ( ListConstructors(..) ) import BNFC.Utils ( (+++), (++++) ) import BNFC.Backend.Common.NamedVariables import BNFC.Backend.Common.OOAbstract import BNFC.Backend.CPP.Common --The result is two files (.H file, .C file) cf2CPPAbs :: String -> CF -> (String, String) cf2CPPAbs _ cf = (mkHFile cf, mkCFile cf) {- **** Header (.H) File Functions **** -} --Makes the Header file. mkHFile :: CF -> String mkHFile cf = unlines [ "#ifndef ABSYN_HEADER", "#define ABSYN_HEADER", "", header, prTypeDefs user, "/******************** Forward Declarations ********************/\n", concatMap prForward classes, "", prVisitor classes, prVisitable, "", "/******************** Abstract Syntax Classes ********************/\n", concatMap (prDataH user) (getAbstractSyntax cf), "", definedRules Nothing cf "/******************** Defined Constructors ********************/", "", "#endif" ] where user = fst (unzip (tokenPragmas cf)) -- includes position tokens -- user = [ name | TokenReg name False _ <- cfgPragmas cf ] -- position tokens are in allClasses already header = "/* ~~~ C++ Abstract Syntax Interface.\n ~~~ */" ca = cf2cabs cf classes = absclasses ca ++ conclasses ca ++ map fst (listtypes ca) -- classes = allClasses (cf2cabs cf) -- includes position tokens prForward s | isProperLabel s = "class " ++ s ++ ";\n" prForward _ = "" --Prints interface classes for all categories. 
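-- A hedged sketch of the classes produced below, assuming a category Exp with
-- a constructor EAdd taking two Exp arguments (names invented for the
-- example); the real output additionally declares the copy constructor, copy
-- assignment and swap members:
--
--   class Exp : public Visitable {
--   public:
--     virtual Exp *clone() const = 0;
--   };
--
--   class EAdd : public Exp
--   {
--     public:
--       Exp *exp_1, *exp_2;
--       EAdd(Exp *p1, Exp *p2);
--       ~EAdd();
--       virtual void accept(Visitor *v);
--       virtual EAdd *clone() const;
--   };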
prDataH :: [UserDef] -> Data -> String prDataH user (cat, rules) = case lookup (catToStr cat) rules of Just _ -> concatMap (prRuleH user cat) rules Nothing -> if isList cat then concatMap (prRuleH user cat) rules else unlines [ "class" +++ identCat cat +++ ": public Visitable {" , "public:" , " virtual" +++ identCat cat +++ "*clone() const = 0;" , "};\n" , concatMap (prRuleH user cat) rules ] --Interface definitions for rules vary on the type of rule. prRuleH :: [UserDef] -> Cat -> (Fun, [Cat]) -> String prRuleH user c (fun, cats) = if isNilFun fun || isOneFun fun then "" --these are not represented in the AbSyn else if isConsFun fun then --this is the linked list case. unlines [ "class" +++ c' +++ ": public Visitable", "{", " public:", render $ nest 2 $ prInstVars user vs, " " ++ c' ++ "(const" +++ c' +++ "&);", " " ++ c' ++ " &operator=(const" +++ c' +++ "&);", " " ++ c' ++ "(" ++ (prConstructorH 1 vs) ++ ");", " " ++ c' ++ "(" ++ mem +++ memstar ++ "p);", prDestructorH c', " " ++ c' ++ "* reverse();", " " ++ c' ++ "* reverse(" ++ c' ++ " *l);", " virtual void accept(Visitor *v);", " virtual " ++ c' ++ " *clone() const;", " void swap(" ++ c' +++ "&);", "};" ] else --a standard rule unlines [ "class" +++ fun +++ ": public" +++ super, "{", " public:", render $ nest 2 $ prInstVars user vs, " " ++ fun ++ "(const" +++ fun +++ "&);", " " ++ fun ++ " &operator=(const" +++ fun +++ "&);", " " ++ fun ++ "(" ++ prConstructorH 1 vs ++ ");", prDestructorH fun, " virtual void accept(Visitor *v);", " virtual " +++ fun +++ " *clone() const;", " void swap(" ++ fun +++ "&);", "};\n" ] where vs = getVars cats c' = identCat (normCat c); mem = drop 4 c' memstar = if isBasic user mem then "" else "*" super = if catToStr c == fun then "Visitable" else identCat c prConstructorH :: Int -> [(String, b)] -> String prConstructorH _ [] = "" prConstructorH n [(t,_)] = t +++ optstar t ++ "p" ++ show n prConstructorH n ((t,_):vs) = t +++ optstar t ++ "p" ++ show n ++ ", " ++ prConstructorH (n+1) vs prDestructorH n = " ~" ++ n ++ "();" optstar x = if isBasic user x then "" else "*" prVisitable :: String prVisitable = unlines [ "class Visitable", "{", " public:", -- all classes with virtual methods require a virtual destructor " virtual ~Visitable() {}", " virtual void accept(Visitor *v) = 0;", "};\n" ] prVisitor :: [String] -> String prVisitor fs = unlines [ "/******************** Visitor Interfaces ********************/", "", "class Visitor", "{", " public:", " virtual ~Visitor() {}", concatMap prVisitFun fs, footer ] where footer = unlines [ --later only include used categories " virtual void visitInteger(Integer i) = 0;", " virtual void visitDouble(Double d) = 0;", " virtual void visitChar(Char c) = 0;", " virtual void visitString(String s) = 0;", "};" ] prVisitFun f | isProperLabel f = " virtual void visit" ++ f ++ "(" ++ f ++ " *p) = 0;\n" prVisitFun _ = "" --typedefs in the Header make generation much nicer. prTypeDefs :: [String] -> String prTypeDefs user = unlines [ "/******************** TypeDef Section ********************/", "typedef int Integer;", "typedef char Char;", "typedef double Double;", "typedef char* String;", "typedef char* Ident;", concatMap prUserDef user ] where prUserDef s = "typedef char* " ++ s ++ ";\n" -- | A class's instance variables. 
-- >>> prInstVars ["MyTokn"] [("MyTokn",1), ("A",1), ("A",2)] -- MyTokn mytokn_1; -- A *a_1, *a_2; prInstVars :: [UserDef] -> [IVar] -> Doc prInstVars _ [] = empty prInstVars user vars@((t,_):_) = text t <+> uniques <> ";" $$ prInstVars user vs' where (uniques, vs') = prUniques t --these functions group the types together nicely prUniques :: String -> (Doc, [IVar]) prUniques t = (prVars (findIndices (\(y,_) -> y == t) vars), remType t vars) prVars = hsep . punctuate comma . map prVar prVar x = let (t,n) = vars !! x in varLinkName t <> text (showNum n) varLinkName z = if isBasic user z then text (map toLower z) <> "_" else "*" <> text (map toLower z) <> "_" remType :: String -> [IVar] -> [IVar] remType _ [] = [] remType t ((t2,n):ts) = if t == t2 then remType t ts else (t2,n) : remType t ts {- **** Implementation (.C) File Functions **** -} --Makes the .C file mkCFile :: CF -> String mkCFile cf = unlines [ header, concatMap (prDataC user) (getAbstractSyntax cf), definedRules (Just $ LC nil cons) cf "/******************** Defined Constructors ********************/" ] where nil _ = (,dummyType) $ "NULL" cons t = (,dummyType) $ "new List" ++ identType t user = map fst (tokenPragmas cf) header = unlines [ "//C++ Abstract Syntax Implementation generated by the BNF Converter.", "#include ", "#include \"Absyn.H\"" ] --This is not represented in the implementation. prDataC :: [UserDef] -> Data -> String prDataC user (cat, rules) = concatMap (prRuleC user cat) rules --Classes for rules vary based on the type of rule. prRuleC :: [UserDef] -> Cat -> (String, [Cat]) -> String prRuleC user c (fun, cats) = if isNilFun fun || isOneFun fun then "" --these are not represented in the AbSyn else if isConsFun fun then --this is the linked list case. unlines [ "/******************** " ++ c' ++ " ********************/", render $ prConstructorC user c' vs cats, prCopyC user c' vs, prDestructorC user c' vs, prListFuncs user c', prAcceptC c', prCloneC user c' vs, "" ] else --a standard rule unlines [ "/******************** " ++ fun ++ " ********************/", render $ prConstructorC user fun vs cats, prCopyC user fun vs, prDestructorC user fun vs, prAcceptC fun, prCloneC user fun vs, "" ] where vs = getVars cats c' = identCat (normCat c) --These are all built-in list functions. --Later we could include things like lookup,insert,delete,etc. prListFuncs :: [UserDef] -> String -> String prListFuncs user c = unlines [ c ++ "::" ++ c ++ "(" ++ m +++ mstar ++ "p)", "{", " " ++ m' ++ " = p;", " " ++ v ++ "= 0;", "}", c ++ "*" +++ c ++ "::" ++ "reverse()", "{", " if (" ++ v +++ "== 0) return this;", " else", " {", " " ++ c ++ " *tmp =" +++ v ++ "->reverse(this);", " " ++ v +++ "= 0;", " return tmp;", " }", "}", "", c ++ "*" +++ c ++ "::" ++ "reverse(" ++ c ++ "* prev)", "{", " if (" ++ v +++ "== 0)", " {", " " ++ v +++ "= prev;", " return this;", " }", " else", " {", " " ++ c +++ "*tmp =" +++ v ++ "->reverse(this);", " " ++ v +++ "= prev;", " return tmp;", " }", "}" ] where v = map toLower c ++ "_" m = drop 4 c mstar = if isBasic user m then "" else "*" m' = drop 4 v --The standard accept function for the Visitor pattern prAcceptC :: String -> String prAcceptC ty = "\nvoid " ++ ty ++ "::accept(Visitor *v) { v->visit" ++ ty ++ "(this); }" -- | The constructor just assigns the parameters to the corresponding instance -- variables. 
-- >>> prConstructorC ["Integer"] "bla" [("A",1), ("Integer",1), ("A",2)] [Cat "A", Cat "Integer", Cat "A"] -- bla::bla(A *p1, Integer p2, A *p3) { a_1 = p1; integer_ = p2; a_2 = p3; } prConstructorC :: [UserDef] -> String -> [IVar] -> [Cat] -> Doc prConstructorC user c vs cats = text c <> "::" <> text c <> parens args <+> "{" <+> text (prAssigns vs params) <> "}" where (types, params) = unzip (prParams cats (length cats) (length cats+1)) args = hsep $ punctuate "," $ zipWith prArg types params prArg type_ name | isBasic user type_ = text type_ <+> text name | otherwise = text type_ <+> "*" <> text name --Print copy constructor and copy assignment prCopyC :: [UserDef] -> String -> [IVar] -> String prCopyC user c vs = c ++ "::" ++ c ++ "(const" +++ c +++ "& other) {" +++ concatMap doV vs ++++ "}" ++++ c +++ "&" ++ c ++ "::" ++ "operator=(const" +++ c +++ "& other) {" ++++ " " ++ c +++ "tmp(other);" ++++ " swap(tmp);" ++++ " return *this;" ++++ "}" ++++ "void" +++ c ++ "::swap(" ++ c +++ "& other) {" ++++ concatMap swapV vs ++++ "}\n" where doV :: IVar -> String doV v@(t, _) | isBasic user t = " " ++ vn v ++ " = other." ++ vn v ++ ";\n" | otherwise = " " ++ vn v ++ " = other." ++ vn v ++ "->clone();\n" vn :: IVar -> String vn (t, 0) = varName t vn (t, n) = varName t ++ show n swapV :: IVar -> String swapV v = " std::swap(" ++ vn v ++ ", other." ++ vn v ++ ");\n" --The cloner makes a new deep copy of the object prCloneC :: [UserDef] -> String -> [IVar] -> String prCloneC _ c _ = c +++ "*" ++ c ++ "::clone() const {" ++++ " return new" +++ c ++ "(*this);\n}" --The destructor deletes all a class's members. prDestructorC :: [UserDef] -> String -> [IVar] -> String prDestructorC user c vs = c ++ "::~" ++ c ++"()" +++ "{" +++ concatMap prDeletes vs ++ "}" where prDeletes :: (String, Int) -> String prDeletes (t, n) | isBasic user t = "" | n == 0 = "delete(" ++ varName t ++ "); " | otherwise = "delete(" ++ varName t ++ show n ++ "); " --Prints the constructor's parameters. prParams :: [Cat] -> Int -> Int -> [(String,String)] prParams [] _ _ = [] prParams (c:cs) n m = (identCat c, "p" ++ show (m-n)) : prParams cs (n-1) m --Prints the assignments of parameters to instance variables. --This algorithm peeks ahead in the list so we don't use map or fold prAssigns :: [IVar] -> [String] -> String prAssigns [] _ = [] prAssigns _ [] = [] prAssigns ((t,n):vs) (p:ps) = if n == 1 then case findIndices (\(l,_) -> l == t) vs of [] -> varName t +++ "=" +++ p ++ ";" +++ prAssigns vs ps _ -> varName t ++ showNum n +++ "=" +++ p ++ ";" +++ prAssigns vs ps else varName t ++ showNum n +++ "=" +++ p ++ ";" +++ prAssigns vs ps {- **** Helper Functions **** -} -- | Checks if something is a basic or user-defined type. isBasic :: [UserDef] -> String -> Bool isBasic user x = x `elem` user || x `elem` specialCatsP BNFC-2.9.5/src/BNFC/Backend/CPP/PrettyPrinter.hs0000644000000000000000000005431007346545000017072 0ustar0000000000000000{-# LANGUAGE LambdaCase #-} {-# LANGUAGE OverloadedStrings #-} {- ************************************************************** BNF Converter Module Description : This module generates the C++ Pretty Printer. It also generates the "show" method for printing an abstract syntax tree. The generated files use the Visitor design pattern. 
Author : Michael Pellauer Created : 10 August, 2003 Modified : 3 September, 2003 * Added resizable buffers ************************************************************** -} module BNFC.Backend.CPP.PrettyPrinter (cf2CPPPrinter, prRender) where import Prelude hiding ((<>)) import Data.Bifunctor (second) import Data.Char (toLower) import BNFC.CF import BNFC.Utils import BNFC.Backend.Common import BNFC.Backend.Common.NamedVariables import BNFC.Backend.Common.StrUtils (renderCharOrString) import BNFC.Backend.CPP.STL.STLUtils import BNFC.PrettyPrint --Produces (.H file, .C file) cf2CPPPrinter :: Bool -> Maybe String -> CF -> (String, String) cf2CPPPrinter useStl inPackage cf = (mkHFile useStl inPackage cf groups, mkCFile useStl inPackage cf groups) where groups = when useStl (positionRules cf) -- CPP/NoSTL treats position tokens as just tokens ++ fixCoercions (ruleGroupsInternals cf) positionRules :: CF -> [(Cat,[Rule])] positionRules cf = [ (TokenCat cat, [ Rule (noPosition cat) (noPosition $ TokenCat cat) (map (Left . TokenCat) [catString, catInteger]) Parsable ]) | cat <- filter (isPositionCat cf) $ map fst (tokenPragmas cf) ] {- **** Header (.H) File Methods **** -} --An extremely large function to make the Header File mkHFile :: Bool -> Maybe String -> CF -> [(Cat,[Rule])] -> String mkHFile useStl inPackage cf groups = unlines [ printHeader , content , classFooter , showHeader , content , classFooter , footer ] where printHeader = unlines [ "#ifndef " ++ hdef, "#define " ++ hdef, "", "#include \"Absyn.H\"", "#include ", "#include ", "#include ", "#include ", "", nsStart inPackage, "/* Certain applications may improve performance by changing the buffer size */", "#define " ++ nsDefine inPackage "BUFFER_INITIAL" ++ " 2000", "/* You may wish to change _L_PAREN or _R_PAREN */", "#define " ++ nsDefine inPackage "_L_PAREN" ++ " '('", "#define " ++ nsDefine inPackage "_R_PAREN" ++ " ')'", "", "class PrintAbsyn : public Visitor", "{", " protected:", " int _n_, _i_;", " /* The following are simple heuristics for rendering terminals */", " /* You may wish to change them */", " void render(Char c);", if useStl then " void render(String s);" else "", " void render(const char *s);", " void indent(void);", " void backup(void);", " void onEmptyLine(void);", " void removeTrailingSpaces(void);", " void removeTrailingWhitespace(void);", " public:", " PrintAbsyn(void);", " ~PrintAbsyn(void);", " char *print(Visitable *v);" ] hdef = nsDefine inPackage "PRINTER_HEADER" content = concatMap (prDataH useStl) groups classFooter = unlines $ [ " void visitInteger(Integer i);", " void visitDouble(Double d);", " void visitChar(Char c);", " void visitString(String s);", " void visitIdent(String s);" ] ++ [" void visit" ++ t ++ "(String s);" | t <- tokenNames cf] ++ [ " protected:", " char *buf_;", " size_t cur_, buf_size;", "", " void inline bufEscape(const char c)", " {", " switch(c)", " {", " case '\\f': bufAppend(\"\\\\f\" ); break;", " case '\\n': bufAppend(\"\\\\n\" ); break;", " case '\\r': bufAppend(\"\\\\r\" ); break;", " case '\\t': bufAppend(\"\\\\t\" ); break;", " case '\\v': bufAppend(\"\\\\v\" ); break;", " case '\\\\': bufAppend(\"\\\\\\\\\"); break;", " case '\\'': bufAppend(\"\\\\'\" ); break;", " case '\\\"': bufAppend(\"\\\\\\\"\"); break;", " default: bufAppend(c);", " }", " }", "", " void inline bufEscape(const char *s)", " {", " if (s) while (*s) bufEscape(*s++);", " }", "", if useStl then render (nest 2 $ bufAppendString "bufEscape") else "", " void inline bufAppend(const char *s)", " {", " size_t 
end = cur_ + strlen(s);", " if (end >= buf_size) {", " do buf_size *= 2; /* Double the buffer size */", " while (end >= buf_size);", " resizeBuffer();", " }", " strcpy(&buf_[cur_], s);", " cur_ = end;", " }", "", " void inline bufAppend(const char c)", " {", " if (cur_ + 1 >= buf_size)", " {", " buf_size *= 2; /* Double the buffer size */", " resizeBuffer();", " }", " buf_[cur_] = c;", " buf_[++cur_] = 0;", " }", "", if useStl then render (nest 2 $ bufAppendString "bufAppend") else "", " void inline bufReset(void)", " {", " if (buf_) delete[] buf_;", " buf_size = " ++ nsDefine inPackage "BUFFER_INITIAL" ++ ";", " buf_ = new char[buf_size];", " memset(buf_, 0, buf_size);", " cur_ = 0;", " }", "", " void inline resizeBuffer(void)", " {", " char *temp = new char[buf_size];", " if (buf_)", " {", " strcpy(temp, buf_);", " delete[] buf_;", " }", " buf_ = temp;", " }", "};", "" ] bufAppendString :: Doc -> Doc bufAppendString f = "void inline" <+> f <> "(String str)" $$ codeblock 2 [ "const char *s = str.c_str();" , f <> "(s);" ] showHeader = unlines [ "", "class ShowAbsyn : public Visitor", "{", " public:", " ShowAbsyn(void);", " ~ShowAbsyn(void);", " char *show(Visitable *v);" ] footer = unlines [ nsEnd inPackage, "", "#endif" ] --Prints all the required method names and their parameters. prDataH :: Bool -> (Cat, [Rule]) -> String prDataH useSTL (cat, rules) | isList cat = unlines $ concat [ [ concat [ " void visit", cl, "(", cl, " *p);" ] ] , when useSTL [ concat [ " void iter", cl, "(", itty, " i, ", itty, " j);" ] ] ] | otherwise = abstract ++ concatMap prRuleH rules where cl = identCat (normCat cat) itty = concat [ cl, "::", "const_iterator" ] abstract = case lookupRule (noPosition $ catToStr cat) rules of Just _ -> "" Nothing -> " void visit" ++ cl ++ "(" ++ cl ++ " *p); /* abstract class */\n" --Prints all the methods to visit a rule. prRuleH :: IsFun f => Rul f -> String prRuleH (Rule fun _ _ _) | isProperLabel fun = concat [" void visit", funName fun, "(", funName fun, " *p);\n"] prRuleH _ = "" {- **** Implementation (.C) File Methods **** -} --This makes the .C file by a similar method. 
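-- For orientation, a rough sketch of how the generated printer classes are
-- used from client C++ code (the variable name @parse_tree@ is illustrative;
-- compare @cpptest@ in BNFC.Backend.CPP.STL further below):
--
-- > PrintAbsyn *p = new PrintAbsyn();
-- > printf("%s\n", p->print(parse_tree));  /* linearized program text */
-- > ShowAbsyn *s = new ShowAbsyn();
-- > printf("%s\n", s->show(parse_tree));   /* abstract syntax view */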
mkCFile :: Bool -> Maybe String -> CF -> [(Cat,[Rule])] -> String mkCFile useStl inPackage cf groups = concat [ header, nsStart inPackage ++ "\n", prRender useStl, printEntries, concatMap (prPrintData useStl inPackage cf) groups, printBasics, printTokens, showEntries, concatMap (prShowData useStl) groups, showBasics, showTokens, nsEnd inPackage ++ "\n" ] where header = unlines [ "/*** Pretty Printer and Abstract Syntax Viewer ***/", "", "#include ", "#include \"Printer.H\"", "#define INDENT_WIDTH 2", "" ] printEntries = unlines [ "PrintAbsyn::PrintAbsyn(void)", "{", " _i_ = 0; _n_ = 0;", " buf_ = 0;", " bufReset();", "}", "", "PrintAbsyn::~PrintAbsyn(void)", "{", "}", "", "char *PrintAbsyn::print(Visitable *v)", "{", " _i_ = 0; _n_ = 0;", " bufReset();", " v->accept(this);", " return buf_;", "}", "" ] showEntries = unlines [ "ShowAbsyn::ShowAbsyn(void)", "{", " buf_ = 0;", " bufReset();", "}", "", "ShowAbsyn::~ShowAbsyn(void)", "{", "}", "", "char *ShowAbsyn::show(Visitable *v)", "{", " bufReset();", " v->accept(this);", " return buf_;", "}", "" ] printBasics = unlines [ "void PrintAbsyn::visitInteger(Integer i)", "{", " char tmp[20];", " sprintf(tmp, \"%d\", i);", " render(tmp);", "}", "", "void PrintAbsyn::visitDouble(Double d)", "{", " char tmp[24];", " sprintf(tmp, \"%.15g\", d);", " render(tmp);", "}", "", "void PrintAbsyn::visitChar(Char c)", "{", " bufAppend('\\'');", " bufEscape(c);", " bufAppend('\\'');", " bufAppend(' ');", "}", "", "void PrintAbsyn::visitString(String s)", "{", " bufAppend('\\\"');", " bufEscape(s);", " bufAppend('\\\"');", " bufAppend(' ');", "}", "", "void PrintAbsyn::visitIdent(String s)", "{", " render(s);", "}", "" ] printTokens = unlines [unlines [ "void PrintAbsyn::visit" ++ t ++ "(String s)", "{", " render(s);", "}", "" ] | t <- tokenNames cf ] showBasics = unlines [ "void ShowAbsyn::visitInteger(Integer i)", "{", " char tmp[20];", " sprintf(tmp, \"%d\", i);", " bufAppend(tmp);", "}", "void ShowAbsyn::visitDouble(Double d)", "{", " char tmp[24];", " sprintf(tmp, \"%.15g\", d);", " bufAppend(tmp);", "}", "void ShowAbsyn::visitChar(Char c)", "{", " bufAppend('\\'');", " bufEscape(c);", " bufAppend('\\'');", "}", "void ShowAbsyn::visitString(String s)", "{", " bufAppend('\\\"');", " bufEscape(s);", " bufAppend('\\\"');", "}", "void ShowAbsyn::visitIdent(String s)", "{", " bufAppend('\\\"');", " bufAppend(s);", " bufAppend('\\\"');", "}", "" ] showTokens = unlines [unlines [ "void ShowAbsyn::visit" ++ t ++ "(String s)", "{", " bufAppend('\\\"');", " bufAppend(s);", " bufAppend('\\\"');", "}", "" ] | t <- tokenNames cf ] {- **** Pretty Printer Methods **** -} -- | Generates methods for the Pretty Printer. 
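--
-- For instance, for a position token category @Foo@ (name chosen for
-- illustration only), the generated method is roughly:
--
-- > void PrintAbsyn::visitFoo(Foo *p)
-- > {
-- >   visitIdent(p->string_);
-- > }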
prPrintData :: Bool -> Maybe String -> CF -> (Cat, [Rule]) -> String prPrintData True {- use STL -} _ _ (cat@(ListCat _), rules) = render $ genPrintVisitorList (cat, rules) prPrintData False {- use STL -} _ _ (cat@(ListCat _), rules) = genPrintVisitorListNoStl (cat, rules) -- Not a list : prPrintData _ _inPackage cf (TokenCat cat, _rules) | isPositionCat cf cat = unlines $ -- a position token [ "void PrintAbsyn::visit" ++ cat ++ "(" ++ cat ++ " *p)" , "{" , " visitIdent(p->string_);" , "}" , "" ] prPrintData _ inPackage _cf (cat, rules) = -- Not a list abstract ++ concatMap (prPrintRule inPackage) rules where cl = identCat (normCat cat) abstract = case lookupRule (noPosition $ catToStr cat) rules of Just _ -> "" Nothing -> "void PrintAbsyn::visit" ++ cl ++ "(" ++ cl +++ "*p) {} //abstract class\n\n" -- | Generate pretty printer visitor for a list category (STL version). -- genPrintVisitorList :: (Cat, [Rule]) -> Doc genPrintVisitorList (cat@(ListCat _), rules) = vcat [ "void PrintAbsyn::visit" <> lty <> parens (lty <+> "*" <> vname) , codeblock 2 [ "iter" <> lty <> parens (vname <> "->begin()" <> comma <+> vname <> "->end()") <> semi ] , "" , "void PrintAbsyn::iter" <> lty <> parens (itty <+> "i" <> comma <+> itty <+> "j") , codeblock 2 $ concat [ if null docs0 then [ "if (i == j) return;" ] else [ "if (i == j)" , "{ /* nil */" , nest 2 $ vcat docs0 , "}" , "else" ] , unless (null docs1) [ "if (i == j-1)" , "{ /* last */" , nest 2 $ vcat docs1 , "}" , "else" ] , [ "{ /* cons */" , nest 2 $ vcat docs2 , "}" ] ] , "" , "" ] where cl = identCat (normCat cat) lty = text cl -- List type itty = lty <> "::const_iterator" -- Iterator type vname = text $ map toLower cl prules = sortRulesByPrecedence rules swRules f = switchByPrecedence "_i_" $ map (second $ sep . prListRule_) $ uniqOn fst $ filter f prules -- Discard duplicates, can only handle one rule per precedence. docs0 = swRules isNilFun docs1 = swRules isOneFun docs2 = swRules isConsFun genPrintVisitorList _ = error "genPrintVisitorList expects a ListCat" -- | Only render the rhs (items) of a list rule. prListRule_ :: IsFun a => Rul a -> [Doc] prListRule_ (Rule _ _ items _) = for items $ \case Right t -> "render(" <> text (snd (renderCharOrString t)) <> ");" Left c | Just{} <- maybeTokenCat c -> "visit" <> dat <> "(*i);" | isList c -> "iter" <> dat <> "(i+1, j);" | otherwise -> "(*i)->accept(this);" where dat = text $ identCat $ normCat c -- This is the only part of the pretty printer that differs significantly -- between the versions with and without STL. -- The present version has been adapted from CFtoCPrinter. genPrintVisitorListNoStl :: (Cat, [Rule]) -> String genPrintVisitorListNoStl (cat@(ListCat _), rules) = unlines $ concat [ [ "void PrintAbsyn::visit" ++ cl ++ "("++ cl ++ " *" ++ vname ++ ")" , "{" , " if (" ++ vname +++ "== 0)" , " { /* nil */" ] , unlessNull (swRules isNilFun) $ \ docs -> [ render $ nest 4 $ vcat docs ] , [ " }" ] , unlessNull (swRules isOneFun) $ \ docs -> [ " else if (" ++ pre ++ vname ++ "_ == 0)" , " { /* last */" , render $ nest 4 $ vcat docs , " }" ] , unlessNull (swRules isConsFun) $ \ docs -> [ " else" , " { /* cons */" , render $ nest 4 $ vcat docs , " }" ] , [ "}" , "" ] ] where cl = identCat (normCat cat) vname = map toLower cl pre = vname ++ "->" prules = sortRulesByPrecedence rules swRules f = switchByPrecedence "_i_" $ map (second $ sep . map text . prPrintRule_ pre) $ uniqOn fst $ filter f prules -- Discard duplicates, can only handle one rule per precedence. 
genPrintVisitorListNoStl _ = error "genPrintVisitorListNoStl expects a ListCat" --Pretty Printer methods for a rule. prPrintRule :: Maybe String -> Rule -> String prPrintRule inPackage r@(Rule fun _ _ _) | isProperLabel fun = unlines $ concat [ [ "void PrintAbsyn::visit" ++ funName fun ++ "(" ++ funName fun +++ "*" ++ fnm ++ ")" , "{" , " int oldi = _i_;" , parenCode "_L_PAREN" , "" ] , prPrintRule_ (fnm ++ "->") r , [ "" , parenCode "_R_PAREN" , " _i_ = oldi;" , "}" , "" ] ] where p = precRule r parenCode x = " if (oldi > " ++ show p ++ ") render(" ++ nsDefine inPackage x ++ ");" fnm = "p" --old names could cause conflicts prPrintRule _ _ = "" prPrintRule_ :: IsFun a => String -> Rul a -> [String] prPrintRule_ pre (Rule _ _ items _) = map (prPrintItem pre) $ numVars items --This goes on to recurse to the instance variables. prPrintItem :: String -> Either (Cat, Doc) String -> String prPrintItem _ (Right t) = " render(" ++ snd (renderCharOrString t) ++ ");" prPrintItem pre (Left (c, nt)) | Just t <- maybeTokenCat c = " visit" ++ t ++ "(" ++ pre ++ s ++ ");" | isList c = " " ++ setI (precCat c) ++ "visit" ++ elt ++ "(" ++ pre ++ s ++ ");" | otherwise = " " ++ setI (precCat c) ++ pre ++ s ++ "->accept(this);" where s = render nt elt = identCat $ normCat c {- **** Abstract Syntax Tree Printer **** -} --This prints the functions for Abstract Syntax tree printing. prShowData :: Bool -> (Cat, [Rule]) -> String prShowData True (cat@(ListCat c), _) = unlines [ "void ShowAbsyn::visit" ++ cl ++ "("++ cl ++ " *" ++ vname ++ ")", "{", " for ("++ cl ++"::const_iterator i = " ++ vname++"->begin() ; i != " ++vname ++"->end() ; ++i)", " {", if isTokenCat c then " visit" ++ baseName cl ++ "(*i) ;" else " (*i)->accept(this);", " if (i != " ++ vname ++ "->end() - 1) bufAppend(\", \");", " }", "}", "" ] where cl = identCat (normCat cat) vname = map toLower cl prShowData False (cat@(ListCat c), _) = unlines [ "void ShowAbsyn::visit" ++ cl ++ "("++ cl ++ " *" ++ vname ++ ")", "{", " while(" ++ vname ++ "!= 0)", " {", " if (" ++ vname ++ "->" ++ vname ++ "_)", " {", visitMember, " bufAppend(\", \");", " " ++ vname +++ "=" +++ vname ++ "->" ++ vname ++ "_;", " }", " else", " {", visitMember, " " ++ vname ++ " = 0;", " }", " }", "}", "" ] where cl = identCat (normCat cat) ecl = identCat (normCatOfList cat) vname = map toLower cl member = map toLower ecl ++ "_" visitMember | Just t <- maybeTokenCat c = " visit" ++ t ++ "(" ++ vname ++ "->" ++ member ++ ");" | otherwise = " " ++ vname ++ "->" ++ member ++ "->accept(this);" prShowData _ (cat, rules) = --Not a list: abstract ++ concatMap prShowRule rules where cl = identCat (normCat cat) abstract = case lookupRule (noPosition $ catToStr cat) rules of Just _ -> "" Nothing -> "void ShowAbsyn::visit" ++ cl ++ "(" ++ cl ++ " *p) {} //abstract class\n\n" --This prints all the methods for Abstract Syntax tree rules. 
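-- As an illustration, for a hypothetical labelled rule @EInt. Exp ::= Integer@
-- the method generated below looks roughly like:
--
-- > void ShowAbsyn::visitEInt(EInt *p)
-- > {
-- >   bufAppend('(');
-- >   bufAppend("EInt");
-- >   bufAppend(' ');
-- >   visitInteger(p->integer_);
-- >   bufAppend(')');
-- > }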
prShowRule :: IsFun f => Rul f -> String prShowRule (Rule f _ cats _) | isProperLabel f = concat [ "void ShowAbsyn::visit" ++ fun ++ "(" ++ fun +++ "*" ++ fnm ++ ")\n", "{\n", lparen, " bufAppend(\"" ++ fun ++ "\");\n", optspace, cats', rparen, "}\n" ] where fun = funName f (optspace, lparen, rparen, cats') | null [ () | Left _ <- cats ] -- @all isRight cats@, but Data.Either.isRight requires base >= 4.7 = ("", "", "", "") | otherwise = (" bufAppend(' ');\n", " bufAppend('(');\n"," bufAppend(')');\n" , concat (insertSpaces (map (prShowCat fnm) (numVars cats)))) insertSpaces [] = [] insertSpaces (x:[]) = [x] insertSpaces (x:xs) = if x == "" then insertSpaces xs else x : " bufAppend(' ');\n" : insertSpaces xs fnm = "p" --other names could cause conflicts prShowRule _ = "" -- This recurses to the instance variables of a class. prShowCat :: String -> Either (Cat, Doc) String -> String prShowCat _ (Right _) = "" prShowCat fnm (Left (cat, nt)) | Just t <- maybeTokenCat cat = unlines [ " visit" ++ t ++ "(" ++ fnm ++ "->" ++ s ++ ");" ] | catToStr (normCat $ strToCat s) /= s = unlines [ accept ] | otherwise = unlines [ " bufAppend('[');" , " if (" ++ fnm ++ "->" ++ s ++ ")" ++ accept , " bufAppend(']');" ] where s = render nt accept = " " ++ fnm ++ "->" ++ s ++ "->accept(this);" {- **** Helper Functions Section **** -} -- from ListIdent to Ident baseName :: [a] -> [a] baseName = drop 4 --Just sets the coercion level for parentheses in the Pretty Printer. setI :: Integer -> String setI n = "_i_ = " ++ show n ++ "; " --An extremely simple renderer for terminals. prRender :: Bool -> String prRender useStl = unlines $ concat [ [ "//You may wish to change render", "void PrintAbsyn::render(Char c)", "{", " if (c == '{')", " {", " onEmptyLine();", " bufAppend(c);", " _n_ = _n_ + INDENT_WIDTH;", " bufAppend('\\n');", " indent();", " }", " else if (c == '(' || c == '[')", " bufAppend(c);", " else if (c == ')' || c == ']')", " {", " removeTrailingWhitespace();", " bufAppend(c);", " bufAppend(' ');", " }", " else if (c == '}')", " {", " _n_ = _n_ - INDENT_WIDTH;", " onEmptyLine();", " bufAppend(c);", " bufAppend('\\n\');", " indent();", " }", " else if (c == ',')", " {", " removeTrailingWhitespace();", " bufAppend(c);", " bufAppend(' ');", " }", " else if (c == ';')", " {", " removeTrailingWhitespace();", " bufAppend(c);", " bufAppend('\\n');", " indent();", " }", " else if (c == ' ') bufAppend(c);", " else if (c == 0) return;", " else", " {", " bufAppend(c);", " bufAppend(' ');", " }", "}", "" ] , when useStl [ render $ vcat [ "void PrintAbsyn::render(String s)" , codeblock 2 [ "render(s.c_str());" ] , "" ] ] , [ "bool allIsSpace(const char *s)" , "{" , " char c;" , " while ((c = *s++))" , " if (! 
isspace(c)) return false;" , " return true;" , "}" , "" ] , [ "void PrintAbsyn::render(const char *s)" , "{" , " if (*s) /* C string not empty */" , " {" , " if (allIsSpace(s)) {" , " backup();" , " bufAppend(s);" , " } else {" , " bufAppend(s);" , " bufAppend(' ');" , " }" , " }" , "}" , "" , "void PrintAbsyn::indent()" , "{" , " int n = _n_;" , " while (--n >= 0)" , " bufAppend(' ');" , "}" , "" , "void PrintAbsyn::backup()" , "{" , " if (cur_ && buf_[cur_ - 1] == ' ')" , " buf_[--cur_] = 0;" , "}" , "" , "void PrintAbsyn::removeTrailingSpaces()" , "{" , " while (cur_ && buf_[cur_ - 1] == ' ') --cur_;" , " buf_[cur_] = 0;" , "}" , "" , "void PrintAbsyn::removeTrailingWhitespace()" , "{" , " while (cur_ && (buf_[cur_ - 1] == ' ' || buf_[cur_ - 1] == '\\n')) --cur_;" , " buf_[cur_] = 0;" , "}" , "" , "void PrintAbsyn::onEmptyLine()" , "{" , " removeTrailingSpaces();" , " if (cur_ && buf_[cur_ - 1 ] != '\\n') bufAppend('\\n');" , " indent();" , "}" , "" ] ] BNFC-2.9.5/src/BNFC/Backend/CPP/STL.hs0000644000000000000000000001350407346545000014701 0ustar0000000000000000{- BNF Converter: C++ Main file Copyright (C) 2004 Author: Markus Forsberg, Michael Pellauer Copyright (C) 2020 Andreas Abel Modified from CPPTop to BNFC.Backend.CPP.STL 2006 by Aarne Ranta. -} module BNFC.Backend.CPP.STL (makeCppStl,) where import Data.Foldable (toList) import BNFC.Utils import BNFC.CF import BNFC.Options import BNFC.Backend.Base import BNFC.Backend.C ( bufferH, bufferC, comment, testfileHeader ) import BNFC.Backend.C.CFtoBisonC ( cf2Bison ) import BNFC.Backend.C.CFtoFlexC ( cf2flex, ParserMode(..) ) import BNFC.Backend.CPP.Common ( commentWithEmacsModeHint ) import BNFC.Backend.CPP.Makefile import BNFC.Backend.CPP.STL.CFtoSTLAbs import BNFC.Backend.CPP.STL.CFtoCVisitSkelSTL import BNFC.Backend.CPP.PrettyPrinter import BNFC.Backend.CPP.STL.STLUtils import qualified BNFC.Backend.Common.Makefile as Makefile makeCppStl :: SharedOptions -> CF -> MkFiles () makeCppStl opts cf = do let (hfile, cfile) = cf2CPPAbs (linenumbers opts) (inPackage opts) name cf mkCppFile "Absyn.H" hfile mkCppFile "Absyn.C" cfile mkCppFile "Buffer.H" bufferH mkCppFile "Buffer.C" $ bufferC "Buffer.H" let (flex, env) = cf2flex parserMode cf mkCppFileWithHint (name ++ ".l") flex mkCppFileWithHint (name ++ ".y") $ cf2Bison (linenumbers opts) parserMode cf env mkCppFile "Parser.H" $ mkHeaderFile (inPackage opts) (toList $ allEntryPoints cf) mkCppFile "ParserError.H" $ printParseErrHeader (inPackage opts) let (skelH, skelC) = cf2CVisitSkel True (inPackage opts) cf mkCppFile "Skeleton.H" skelH mkCppFile "Skeleton.C" skelC let (prinH, prinC) = cf2CPPPrinter True (inPackage opts) cf mkCppFile "Printer.H" prinH mkCppFile "Printer.C" prinC mkCppFile "Test.C" (cpptest (inPackage opts) cf) Makefile.mkMakefile (optMake opts) $ makefile prefix name where name :: String name = lang opts -- The prefix is a string used by flex and bison -- that is prepended to generated function names. -- It should be a valid C identifier. 
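-- For instance (illustrative only), a grammar named "Calc" is expected to
-- yield
--
-- > prefix = "calc_"
--
-- so that the flex/bison-generated symbols are renamed with that prefix.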
prefix :: String prefix = snakeCase_ name ++ "_" parserMode :: ParserMode parserMode = CppParser (inPackage opts) prefix mkCppFile x = mkfile x comment mkCppFileWithHint x = mkfile x commentWithEmacsModeHint printParseErrHeader :: Maybe String -> String printParseErrHeader inPackage = unlines [ " #pragma once " , " #include " , " #include " , "" , nsStart inPackage , " class parse_error : public std::runtime_error" , " {" , " public:" , " parse_error(int line, std::string str)" , " : std::runtime_error(str)" , " , m_line(line) {}" , " int getLine() {" , " return m_line;" , " } " , " private:" , " int m_line;" , " }; " , nsEnd inPackage ] cpptest :: Maybe String -> CF -> String cpptest inPackage cf = unlines $ concat [ testfileHeader , [ "", "#include ", "#include ", "#include ", "#include \"Parser.H\"", "#include \"Printer.H\"", "#include \"Absyn.H\"", "#include \"ParserError.H\"", "", "void usage() {", " printf(\"usage: Call with one of the following argument " ++ "combinations:\\n\");", " printf(\"\\t--help\\t\\tDisplay this help message.\\n\");", " printf(\"\\t(no arguments)\\tParse stdin verbosely.\\n\");", " printf(\"\\t(files)\\t\\tParse content of files verbosely.\\n\");", " printf(\"\\t-s (files)\\tSilent mode. Parse content of files " ++ "silently.\\n\");", "}", "", "int main(int argc, char ** argv)", "{", " FILE *input;", " int quiet = 0;", " char *filename = NULL;", "", " if (argc > 1) {", " if (strcmp(argv[1], \"-s\") == 0) {", " quiet = 1;", " if (argc > 2) {", " filename = argv[2];", " } else {", " input = stdin;", " }", " } else {", " filename = argv[1];", " }", " }", "", " if (filename) {", " input = fopen(filename, \"r\");", " if (!input) {", " usage();", " exit(1);", " }", " } else input = stdin;", " /* The default entry point is used. For other options see Parser.H */", " " ++ scope ++ dat ++ " *parse_tree = NULL;", " try { ", " parse_tree = " ++ scope ++ "p" ++ def ++ "(input);", " } catch( " ++ scope ++ "parse_error &e) {", " std::cerr << \"Parse error on line \" << e.getLine() << \"\\n\"; ", " }", " if (parse_tree)", " {", " printf(\"\\nParse Successful!\\n\");", " if (!quiet) {", " printf(\"\\n[Abstract Syntax]\\n\");", " " ++ scope ++ "ShowAbsyn *s = new " ++ scope ++ "ShowAbsyn();", " printf(\"%s\\n\\n\", s->show(parse_tree));", " printf(\"[Linearized Tree]\\n\");", " " ++ scope ++ "PrintAbsyn *p = new " ++ scope ++ "PrintAbsyn();", " printf(\"%s\\n\\n\", p->print(parse_tree));", " }", " delete(parse_tree);", " return 0;", " }", " return 1;", "}", "" ] ] where cat = firstEntry cf dat = identCat $ normCat cat def = identCat cat scope = nsScope inPackage mkHeaderFile :: Maybe String -> [Cat] -> String mkHeaderFile inPackage eps = unlines $ concat [ [ "#ifndef " ++ hdef , "#define " ++ hdef , "" , "#include" , "#include" , "#include" , "#include \"Absyn.H\"" , "" , nsStart inPackage ] , concatMap mkFuncs eps , [ nsEnd inPackage , "" , "#endif" ] ] where hdef = nsDefine inPackage "PARSER_HEADER_FILE" mkFuncs s = [ identCat (normCat s) ++ "*" +++ "p" ++ identCat s ++ "(FILE *inp);" , identCat (normCat s) ++ "*" +++ "p" ++ identCat s ++ "(const char *str);" ] BNFC-2.9.5/src/BNFC/Backend/CPP/STL/0000755000000000000000000000000007346545000014342 5ustar0000000000000000BNFC-2.9.5/src/BNFC/Backend/CPP/STL/CFtoCVisitSkelSTL.hs0000644000000000000000000001052207346545000020055 0ustar0000000000000000{- BNF Converter: C++ Skeleton generation Copyright (C) 2004 Author: Michael Pellauer Description : This module generates the C++ Skeleton functions. 
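                  As a sketch (for a hypothetical rule
                  EAdd. Exp ::= Exp "+" Exp; the parameter name below is
                  only indicative, the real one is chosen by mkVariable),
                  the generated Skeleton.C contains methods of this shape:

                    void Skeleton::visitEAdd(EAdd *eadd)
                    {
                      /* Code For EAdd Goes Here */

                      if (eadd->exp_1) eadd->exp_1->accept(this);
                      if (eadd->exp_2) eadd->exp_2->accept(this);
                    }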
The generated files use the Visitor design pattern. Author : Michael Pellauer Created : 9 August, 2003 Modified : 29 August, 2006 Aarne Ranta -} module BNFC.Backend.CPP.STL.CFtoCVisitSkelSTL (cf2CVisitSkel) where import Data.Char import BNFC.CF import BNFC.Utils ((+++), unless) import BNFC.Backend.Common.OOAbstract import BNFC.Backend.CPP.Naming import BNFC.Backend.CPP.STL.STLUtils --Produces (.H file, .C file) cf2CVisitSkel :: Bool -> Maybe String -> CF -> (String, String) cf2CVisitSkel useSTL inPackage cf = ( mkHFile useSTL inPackage cab , mkCFile useSTL inPackage cab ) where cab = cf2cabs cf -- **** Header (.H) File Functions **** --Generates the Header File mkHFile :: Bool -> Maybe String -> CAbs -> String mkHFile useSTL inPackage cf = unlines [ "#ifndef " ++ hdef, "#define " ++ hdef, "/* You might want to change the above name. */", "", "#include \"Absyn.H\"", "", nsStart inPackage, "class Skeleton : public Visitor", "{", "public:", unlines [" void visit" ++ b ++ "(" ++ b ++ " *p);" | b <- classes, notElem b (defineds cf), useSTL || notElem b (postokens cf) ], unlines [" void visit" ++ b ++ "(" ++ b ++ " x);" | b <- basics useSTL cf ], "};", nsEnd inPackage, "", "#endif" ] where hdef = nsDefine inPackage "SKELETON_HEADER" classes = allClasses cf -- CPP/NoSTL treats 'position token' as just 'token'. basics :: Bool -> CAbs -> [String] basics useSTL cf = concat [ map fst basetypes , tokentypes cf , unless useSTL $ postokens cf ] -- **** Implementation (.C) File Functions **** --Makes the .C File mkCFile :: Bool -> Maybe String -> CAbs -> String mkCFile useSTL inPackage cf = unlines [ headerC, nsStart inPackage, unlines [ "void Skeleton::visit" ++ t ++ "(" ++ t ++ " *t) {} //abstract class" | t <- absclasses cf], unlines [ prCon r | (_,rs) <- signatures cf, r <- rs, useSTL || not (posRule r) ], unlines [ prList useSTL cb | cb <- listtypes cf ], unlines [ prBasic b | b <- base ], nsEnd inPackage ] where -- See OOAbstract 'posdata': posRule (c, _) = c `elem` postokens cf base = basics useSTL cf prCon (f,cs) = unlines [ "void Skeleton::visit" ++ f ++ "(" ++ f ++ " *" ++ v ++ ")", "{", " /* Code For " ++ f ++ " Goes Here */", "", unlines [" " ++ visitArg c | c <- cs], "}" ] where v = mkVariable f visitArg (cat,isPt,var) | isPt && (useSTL || cat `notElem` base) = "if (" ++ field ++ ") " ++ field ++ "->accept(this);" | otherwise = "visit" ++ cat ++ "(" ++ field ++ ");" where field = v ++ "->" ++ var headerC :: String headerC = unlines [ "/*** Visitor Design Pattern Skeleton. ***/", "/* This implements the common visitor design pattern.", " Note that this method uses Visitor-traversal of lists, so", " List->accept() does NOT traverse the list. This allows different", " algorithms to use context information differently. 
*/", "", "#include \"Skeleton.H\"", "" ] prBasic :: String -> String prBasic c = unlines [ "void Skeleton::visit" ++ c ++ "(" ++ c ++ " x)", "{", " /* Code for " ++ c ++ " Goes Here */", "}" ] prList :: Bool -> (String, Bool) -> String prList True (cl,b) = unlines [ "void Skeleton::visit" ++ cl ++ "("++ cl +++ "*" ++ vname ++ ")", "{", " for ("++ cl ++"::iterator i = " ++ vname++"->begin() ; i != " ++vname ++"->end() ; ++i)", " {", if b then " (*i)->accept(this);" else " visit" ++ drop 4 cl ++ "(*i) ;", " }", "}" ] where vname = mkVariable cl prList False (cl,b) = unlines [ "void Skeleton::visit" ++ cl ++ "("++ cl +++ "*" ++ vname ++ ")" , "{" , " while (" ++ vname ++ ")" , " {" , " /* Code For " ++ cl ++ " Goes Here */" , if b then " if (" ++ field ++ ") " ++ field ++ "->accept(this);" else " visit" ++ ecl ++ "(" ++ field ++ ");" , " " ++ vname ++ " = " ++ vname ++ "->" ++ next ++ "_;" , " }" , "}" ] where ecl = drop 4 cl -- drop "List" vname = mkVariable cl next = map toLower cl member = map toLower ecl ++ "_" field = vname ++ "->" ++ member BNFC-2.9.5/src/BNFC/Backend/CPP/STL/CFtoSTLAbs.hs0000644000000000000000000001670207346545000016550 0ustar0000000000000000{-# LANGUAGE TupleSections #-} {- BNF Converter: C++ abstract syntax generator Copyright (C) 2004 Author: Michael Pellauer Description : This module generates the C++ Abstract Syntax tree classes. It generates both a Header file and an Implementation file, and uses the Visitor design pattern. It uses STL (Standard Template Library). Author : Michael Pellauer Created : 4 August, 2003 Modified : 22 May, 2004 / Antti-Juhani Kaijanaho 29 August, 2006 / Aarne Ranta -} module BNFC.Backend.CPP.STL.CFtoSTLAbs (cf2CPPAbs) where import Data.List ( intercalate, intersperse ) import BNFC.Backend.Common.OOAbstract import BNFC.CF import BNFC.Options ( RecordPositions(..) ) import BNFC.TypeChecker ( ListConstructors(..) ) import BNFC.Utils ( (+++), applyWhen ) import BNFC.Backend.CPP.Common import BNFC.Backend.CPP.STL.STLUtils --The result is two files (.H file, .C file) cf2CPPAbs :: RecordPositions -> Maybe String -> String -> CF -> (String, String) cf2CPPAbs rp inPackage _ cf = (mkHFile rp inPackage cab cf, mkCFile inPackage cab cf) where cab = cf2cabs cf -- **** Header (.H) File Functions **** -- --Makes the Header file. 
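-- As a rough illustration (for a hypothetical rule EAdd. Exp ::= Exp "+" Exp),
-- the generated Absyn.H declares one class per constructor along these lines:
--
-- > class EAdd : public Exp
-- > {
-- > public:
-- >   Exp *exp_1;
-- >   Exp *exp_2;
-- >   EAdd(const EAdd &);
-- >   EAdd &operator=(const EAdd &);
-- >   EAdd(Exp *p1, Exp *p2);
-- >   ~EAdd();
-- >   virtual void accept(Visitor *v);
-- >   virtual EAdd *clone() const;
-- >   void swap(EAdd &);
-- > };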
mkHFile :: RecordPositions -> Maybe String -> CAbs -> CF -> String mkHFile rp inPackage cabs cf = unlines [ "#ifndef " ++ hdef, "#define " ++ hdef, "", "#include", "#include", "", "//C++ Abstract Syntax Interface.", nsStart inPackage, "/******************** TypeDef Section ********************/", "", unlines ["typedef " ++ d ++ " " ++ c ++ ";" | (c,d) <- basetypes], "", unlines ["typedef std::string " ++ s ++ ";" | s <- tokentypes cabs], "", "/******************** Forward Declarations ********************/", "", unlines ["class " ++ c ++ ";" | c <- classes, notElem c (defineds cabs)], "", "/******************** Visitor Interfaces ********************/", prVisitor cabs, "", prVisitable, "", "/******************** Abstract Syntax Classes ********************/", "", unlines [prAbs rp c | c <- absclasses cabs], "", unlines [prCon (c,r) | (c,rs) <- signatures cabs, r <- rs], "", unlines [prList c | c <- listtypes cabs], "", definedRules Nothing cf "/******************** Defined Constructors ********************/", nsEnd inPackage, "#endif" ] where classes = allClasses cabs hdef = nsDefine inPackage "ABSYN_HEADER" -- auxiliaries prVisitable :: String prVisitable = unlines [ "class Visitable", "{", " public:", -- all classes with virtual methods require a virtual destructor " virtual ~Visitable() {}", " virtual void accept(Visitor *v) = 0;", "};" ] prVisitor :: CAbs -> String prVisitor cf = unlines [ "class Visitor", "{", "public:", " virtual ~Visitor() {}", unlines [" virtual void visit"++c++"("++c++" *p) = 0;" | c <- allClasses cf, notElem c (defineds cf)], "", unlines [" virtual void visit"++c++"(" ++c++" x) = 0;" | c <- allNonClasses cf], "};" ] prAbs :: RecordPositions -> String -> String prAbs rp c = unlines [ "class " ++ c ++ " : public Visitable", "{", "public:", " virtual " ++ c ++ " *clone() const = 0;", if rp == RecordPositions then " int line_number, char_number;" else "", "};" ] prCon :: (String, CAbsRule) -> String prCon (c,(f,cs)) = unlines [ "class " ++f++ " : public " ++ c, "{", "public:", unlines [" "++ typ +++ pointerIf st var ++ ";" | (typ,st,var) <- cs], " " ++ f ++ "(const " ++ f ++ " &);", " " ++ f ++ " &operator=(const " ++f++ " &);", " " ++ f ++ "(" ++ conargs ++ ");", -- Typ *p1, PIdent *p2, ListStm *p3); " ~" ++f ++ "();", " virtual void accept(Visitor *v);", " virtual " ++f++ " *clone() const;", " void swap(" ++f++ " &);", "};" ] where conargs = concat $ intersperse ", " [x +++ pointerIf st ("p" ++ show i) | ((x,st,_),i) <- zip cs [1::Int ..]] prList :: (String, Bool) -> String prList (c, b) = unlines [ "class " ++c++ " : public Visitable, public std::vector<" ++bas++ ">" , "{" , "public:" , " virtual void accept(Visitor *v);" , " virtual " ++ c ++ " *clone() const;" , "};" , "" -- cons for this list type , concat [ c, "* ", "cons", c, "(", bas, " x, ", c, "* xs);" ] ] where bas = applyWhen b (++ "*") $ drop 4 c {- drop "List" -} -- **** Implementation (.C) File Functions **** -- mkCFile :: Maybe String -> CAbs -> CF -> String mkCFile inPackage cabs cf = unlines $ [ "//C++ Abstract Syntax Implementation.", "#include ", "#include ", "#include ", "#include \"Absyn.H\"", nsStart inPackage, unlines [prConC r | (_,rs) <- signatures cabs, r <- rs], unlines [prListC l | l <- listtypes cabs], definedRules (Just $ LC nil cons) cf "/******************** Defined Constructors ********************/", nsEnd inPackage ] where nil t = (,dummyType) $ concat [ "new List", identType t, "()" ] cons t = (,dummyType) $ concat [ "consList", identType t ] prConC :: CAbsRule -> String prConC 
fcs@(f,_) = unlines [ "/******************** " ++ f ++ " ********************/", prConstructorC fcs, prCopyC fcs, prDestructorC fcs, prAcceptC f, prCloneC f, "" ] prListC :: (String,Bool) -> String prListC (c,b) = unlines [ "/******************** " ++ c ++ " ********************/" , "" , prAcceptC c , prCloneC c , prConsC c b ] --The standard accept function for the Visitor pattern prAcceptC :: String -> String prAcceptC ty = unlines [ "void " ++ ty ++ "::accept(Visitor *v)", "{", " v->visit" ++ ty ++ "(this);", "}" ] --The cloner makes a new deep copy of the object prCloneC :: String -> String prCloneC c = unlines [ c +++ "*" ++ c ++ "::clone() const", "{", " return new" +++ c ++ "(*this);", "}" ] -- | Make a list constructor definition. prConsC :: String -> Bool -> String prConsC c b = unlines [ concat [ c, "* ", "cons", c, "(", bas, " x, ", c, "* xs) {" ] , " xs->insert(xs->begin(), x);" , " return xs;" , "}" ] where bas = applyWhen b (++ "*") $ drop 4 c {- drop "List" -} --The constructor assigns the parameters to the corresponding instance variables. prConstructorC :: CAbsRule -> String prConstructorC (f,cs) = unlines [ f ++ "::" ++ f ++ "(" ++ conargs ++ ")", "{", unlines [" " ++ c ++ " = " ++ p ++ ";" | (c,p) <- zip cvs pvs], "}" ] where cvs = [c | (_,_,c) <- cs] pvs = ['p' : show i | ((_,_,_),i) <- zip cs [1::Int ..]] conargs = intercalate ", " [x +++ pointerIf st v | ((x,st,_),v) <- zip cs pvs] --Copy constructor and copy assignment prCopyC :: CAbsRule -> String prCopyC (c,cs) = unlines [ c ++ "::" ++ c ++ "(const" +++ c +++ "& other)", "{", unlines [" " ++ cv ++ " = other." ++ cloneIf st cv ++ ";" | (_,st,cv) <- cs], "}", "", c +++ "&" ++ c ++ "::" ++ "operator=(const" +++ c +++ "& other)", "{", " " ++ c +++ "tmp(other);", " swap(tmp);", " return *this;", "}", "", "void" +++ c ++ "::swap(" ++ c +++ "& other)", "{", unlines [" std::swap(" ++ cv ++ ", other." ++ cv ++ ");" | (_,_,cv) <- cs], "}" ] where cloneIf st cv = if st then (cv ++ "->clone()") else cv --The destructor deletes all a class's members. prDestructorC :: CAbsRule -> String prDestructorC (c,cs) = unlines [ c ++ "::~" ++ c ++"()", "{", unlines [" delete(" ++ cv ++ ");" | (_,isPointer,cv) <- cs, isPointer], "}" ] BNFC-2.9.5/src/BNFC/Backend/CPP/STL/STLUtils.hs0000644000000000000000000000111407346545000016356 0ustar0000000000000000{- BNF Converter: C++ common functions Copyright (C) 2008 Author: Martin Ejdestig -} module BNFC.Backend.CPP.STL.STLUtils where import Data.Char import Data.Maybe (fromMaybe) nsDefine :: Maybe String -> String -> String nsDefine inPackage h = maybe h (\ns -> map toUpper ns ++ "_" ++ h) inPackage nsStart :: Maybe String -> String nsStart = maybe "" (\ns -> "namespace " ++ ns ++ "\n{") nsEnd :: Maybe String -> String nsEnd = maybe "" (const "}") nsScope :: Maybe String -> String nsScope = maybe "" (++ "::") nsString :: Maybe String -> String nsString = fromMaybe "" BNFC-2.9.5/src/BNFC/Backend/Common.hs0000644000000000000000000000515607346545000015051 0ustar0000000000000000{-# LANGUAGE LambdaCase #-} {-# LANGUAGE OverloadedStrings #-} -- | Functions common to different backends. 
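--
-- For instance, 'switchByPrecedence' renders a C-style @switch@ over the
-- coercion-level variable; with illustrative arguments the output is
-- roughly:
--
-- > switch(_i_)
-- > {
-- >   case 3: render(";"); break;
-- >   case 1: render("--"); break;
-- > }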
module BNFC.Backend.Common ( unicodeAndSymbols , asciiKeywords , flexEps , switchByPrecedence ) where import Prelude hiding ((<>)) -- import Data.Bifunctor ( second ) import Data.Char import BNFC.CF import BNFC.Utils ( (>.>) ) import BNFC.PrettyPrint -- Andreas, 2020-10-08, issue #292: -- Since the produced lexer for Haskell and Ocaml only recognizes ASCII identifiers, -- but cfgKeywords also contains those using unicode characters, -- we have to reclassify any keyword using non-ASCII characters -- as symbol. unicodeAndSymbols :: CF -> [String] unicodeAndSymbols cf = filter (not . all isAscii) (cfgKeywords cf) ++ cfgSymbols cf asciiKeywords :: CF -> [String] asciiKeywords = filter (all isAscii) . cfgKeywords -- | Representation of the empty word as Flex regular expression flexEps :: String flexEps = "[^.\\n]?" -- UNUSED -- -- | Helper function for c-like languages that generates the code printing -- -- the list separator according to the given precedence level: -- -- -- -- >>> let my_render c = "my_render(\"" <> text c <> "\")" -- -- >>> renderListSepByPrecedence "x" my_render [] -- -- -- -- -- -- >>> renderListSepByPrecedence "x" my_render [(0,",")] -- -- my_render(","); -- -- -- -- >>> renderListSepByPrecedence "x" my_render [(3,";"), (1, "--")] -- -- switch(x) -- -- { -- -- case 3: my_render(";"); break; -- -- case 1: my_render("--"); break; -- -- } -- renderListSepByPrecedence -- :: Doc -- ^ Name of the coercion level variable -- -> (String -> Doc) -- ^ render function -- -> [(Integer, String)] -- ^ separators by precedence -- -> Doc -- renderListSepByPrecedence var render = -- vcat . switchByPrecedence var . map (second $ render >.> (<> ";")) -- Note (Andreas, 2021-05-02): -- @renderListSepByPrecedence@ did not account for mixfix lists (issue #358) -- and has been replaced by the more general @switchByPrecedence@. switchByPrecedence :: Doc -- ^ Name of the coercion level variable/ -> [(Integer, Doc)] -- ^ Content by precedence. -> [Doc] switchByPrecedence var = filter (not . isEmpty . snd) >.> \case [] -> [] [(_,doc)] -> [ doc ] ds -> [ "switch(" <> var <> ")" , codeblock 2 [ "case" <+> integer i <:> doc <+> "break;" | (i, doc) <- ds ] -- , codeblock 2 $ concat -- [ [ "case" <+> integer i <:> doc <+> "break;" | (i, doc) <- init ds ] -- , [ "default" <:> doc | let (i, doc) = last ds ] -- ] ] where a <:> b = a <> ":" <+> b BNFC-2.9.5/src/BNFC/Backend/Common/0000755000000000000000000000000007346545000014506 5ustar0000000000000000BNFC-2.9.5/src/BNFC/Backend/Common/Makefile.hs0000644000000000000000000000254707346545000016567 0ustar0000000000000000{-# LANGUAGE OverloadedStrings #-} module BNFC.Backend.Common.Makefile where import Prelude hiding ((<>)) import BNFC.Backend.Base (mkfile, Backend) import BNFC.PrettyPrint -- | Creates a Makefile rule. -- -- >>> mkRule "main" ["file1","file2"] ["do something"] -- main : file1 file2 -- do something -- -- -- >>> mkRule "main" ["program.exe"] [] -- main : program.exe -- -- mkRule :: String -- ^ The target name. -> [String] -- ^ Dependencies. -> [String] -- ^ Recipe. -> Doc mkRule target deps recipe = vcat . concat $ [ [ text target <+> ":" <+> hsep (map text deps) ] , map (("\t" <>) . text) recipe , [ "" ] ] -- | Variable assignment. -- -- >>> mkVar "FOO" "bar" -- FOO=bar -- mkVar :: String -> String -> Doc mkVar n v = text n <> "=" <> text v -- UNUSED: -- -- | Variable referencing. 
-- -- -- -- >>> mkRefVar "FOO" -- -- ${FOO} -- -- -- mkRefVar :: String -> Doc -- mkRefVar m = case m of -- "" -> empty -- _ -> text $ refVar m -- | Variable referencing. -- -- >>> refVar "FOO" -- "${FOO}" -- refVar :: String -> String refVar m = "${" ++ m ++ "}" -- | Create the Makefile file using the name specified in the option record. -- mkMakefile :: Maybe String -> (String -> Doc) -> Backend mkMakefile (Just m) mkContent = mkfile m ("## " ++) (mkContent m) mkMakefile Nothing _ = return () BNFC-2.9.5/src/BNFC/Backend/Common/NamedVariables.hs0000644000000000000000000001166707346545000017732 0ustar0000000000000000{- BNF Converter: Named instance variables Copyright (C) 2004 Author: Michael Pellauer -} {- ************************************************************** BNF Converter Module Description : This module provides support for languages which need named instance variables. (IE Java, C, C++) It provides a data type to represent the name mapping and utility functions to work with it. Variables are grouped and numbered in a nice way. Author : Michael Pellauer (pellauer@cs.chalmers.se) ************************************************************** The idea of this module is the following (if I got it correctly): In some target languages (e.g. java or C) you need to create a variable name for each non terminal in a given rule. For instance, the following rules: > SomeFunction. A ::= B C D ; could be represented in C by a structure like: @ struct A { B b_; C c_; D d_; } @ (note that this is not exactly the representation produced by bnfc) but if there is several non terminal of the same category, we need to number them. Eg: > SomeFunction. A = B B ; Should become something like: @ struct A { B b_1, b_2; } @ This is what this module does. -} module BNFC.Backend.Common.NamedVariables where import Control.Arrow (left, (&&&)) import Data.Char (toLower) import Data.Either (lefts) import Data.List (nub) import Data.Map (Map) import Text.PrettyPrint (Doc) import qualified Text.PrettyPrint as P import BNFC.CF type IVar = (String, Int) --The type of an instance variable --and a # unique to that type type UserDef = TokenCat --user-defined types -- | A symbol-mapping environment. type SymEnv = KeywordEnv -- | Map keywords to their token name. type KeywordEnv = [(String, String)] -- | Map keywords and user-defined token types to their token name. type SymMap = Map SymKey String data SymKey = Keyword String -- ^ Keyword like "(", "while", "true", ... | Tokentype String -- ^ Token type like "Integer", "Char", ... deriving (Eq, Ord, Show) -- | Converts a list of categories into their types to be used as instance -- variables. If a category appears only once, it is given the number 0, -- if it appears more than once, its occurrences are numbered from 1. ex: -- -- >>> getVars [Cat "A", Cat "B", Cat "A"] -- [("A",1),("B",0),("A",2)] -- getVars :: [Cat] -> [IVar] getVars cs = foldl addVar [] (map identCat cs) where addVar vs = addVar' vs 0 addVar' [] n c = [(c, n)] addVar' (i@(t,x):is) n c = if c == t then if x == 0 then (t, 1) : addVar' is 2 c else i : addVar' is (x+1) c else i : addVar' is n c -- # Create variable names for rules rhs -- This is about creating variable names for the right-hand side of rules. -- In particular, if you have a rule like Foo. Bar ::= A B A, you need to -- create unique variable names for the two instances of category A -- | Anotate the right hand side of a rule with variable names -- for the non-terminals. 
-- >>> numVars [Left (Cat "A"), Right "+", Left (Cat "B")] -- [Left (A,a_),Right "+",Left (B,b_)] -- >>> numVars [Left (Cat "A"), Left (Cat "A"), Right ";"] -- [Left (A,a_1),Left (A,a_2),Right ";"] numVars :: [Either Cat a] -> [Either (Cat, Doc) a] numVars cats = -- First, we anotate each Left _ with a variable name (not univque) let withNames = map (left (id &&& (varName . identCat . normCat))) cats -- next, the function f' adds numbers where needed... in f' [] withNames where f' _ [] = [] f' env (Right t:xs) = Right t:f' env xs f' env (Left (c,n):xs) = -- we should use n_i as var name let i = maybe 1 (+1) (lookup n env) -- Is there more use of the name u_ ? thereIsMore = n `elem` map snd (lefts xs) vname = P.text n P.<> if i > 1 || thereIsMore then P.int i else P.empty in Left (c, vname) : f' ((n,i):env) xs --This fixes the problem with coercions. fixCoercions :: [(Cat, [Rule])] -> [(Cat, [Rule])] fixCoercions rs = nub (fixAll rs rs) where fixCoercion :: Cat -> [(Cat, [Rule])] -> [Rule] fixCoercion _ [] = [] fixCoercion cat ((c,rules):cats) = if normCat c == normCat cat then rules ++ fixCoercion cat cats else fixCoercion cat cats fixAll :: [(Cat, [Rule])] -> [(Cat, [Rule])] -> [(Cat, [Rule])] fixAll _ [] = [] fixAll top ((cat,_):cats) = if isCoercion (noPosition $ catToStr cat) -- This is weird: isCoercion is supposed to be applied to functions!!!! then fixAll top cats else (normCat cat, fixCoercion cat top) : fixAll top cats --A generic variable name for C-like languages. varName :: String -> String varName c = map toLower c ++ "_" --this makes var names a little cleaner. showNum :: Int -> String showNum n = if n == 0 then "" else show n -- Makes the first letter a lowercase. firstLowerCase :: String -> String firstLowerCase "" = "" firstLowerCase (a:b) = toLower a:b BNFC-2.9.5/src/BNFC/Backend/Common/OOAbstract.hs0000644000000000000000000000710307346545000017044 0ustar0000000000000000{-# LANGUAGE LambdaCase #-} {- BNF Converter: Datastructure for object-oriented abstract syntax generators Copyright (C) 2006 Author: Aarne Ranta Description : This module defines a data structure that is used for generating abstract syntax in cpp_stl. It should be used in other STL modules as well, and could be used for object-oriented languages in general, to avoid duplicated work. Author : Aarne Ranta (aarne@cs.chalmers.se) Created : 29 August, 2006 -} module BNFC.Backend.Common.OOAbstract where import Data.Char (toLower) import qualified Data.List as List import Data.Maybe import BNFC.CF -- A datastructure more appropriate than CF data CAbs = CAbs { tokentypes :: [String], -- user non-position token types listtypes :: [(String,Bool)], -- list types used, whether of classes absclasses :: [String], -- grammar-def cats, normalized names conclasses :: [Fun], -- constructors, except list ones signatures :: [(String,[CAbsRule])], -- rules for each class, incl. 
pos tokens postokens :: [String], -- position token types defineds :: [Fun] -- defined (non-)constructors } -- (valcat,(constr,args)), True = is class (not basic), class variable stored type CAbsRule = (Fun,[(String,Bool,String)]) -- all those names that denote classes in C++ allClasses :: CAbs -> [String] allClasses ca = absclasses ca ++ conclasses ca ++ map fst (listtypes ca) ++ postokens ca -- all those names that denote non-class types in C++ allNonClasses :: CAbs -> [String] allNonClasses ca = map fst basetypes ++ tokentypes ca cf2cabs :: CF -> CAbs cf2cabs cf = CAbs { tokentypes = toks , listtypes = [(c, snd (status (drop 4 c))) | -- remove "List" from "ListC" c <- map (identCat . normCat) lists] , absclasses = List.nub $ map (identCat . normCat) cats -- NB: does not include list categories , conclasses = mapMaybe testRule $ cfgRules cf , signatures = posdata ++ map normSig (cf2data cf) , postokens = pos , defineds = defs } where (pos, toks) = List.partition (isPositionCat cf) $ map fst $ tokenPragmas cf (lists,cats) = List.partition isList $ allCatsNorm cf testRule (Rule f c _ _) | isList (wpThing c) = Nothing | funName f == "_" = Nothing | otherwise = Just $ funName f normSig (c,fcs) = (identCat c,[(f, classVars (map (status . identCat) cs)) | (f,cs) <- fcs]) posdata = [("Visitable", -- to give superclass [(c,[("String",False,"string_"),("Integer",False,"integer_")])]) | c<-pos] status cat = (cat, notElem cat (map fst basetypes ++ toks)) defs = [ funName f | FunDef (Define f _ _ _) <- cfgPragmas cf] classVars :: [(String,Bool)] -> [(String,Bool,String)] classVars cs = [(c,b,s) | ((c,b),s) <- zip cs (vars [] (map (classVar . fst) cs))] --- creating new names is quadratic, but parameter lists are short --- this should conform with Michael's naming vars seen = \case [] -> [] v:vs -> case length (filter (==v) seen) of 0 | elem v vs -> (v ++ "1"): vars (v:seen) vs 0 -> v : vars (v:seen) vs n -> (v ++ show (n+1)) : vars (v:seen) vs basetypes :: [ (String, String) ] basetypes = [ ("Integer","int"), ("Char", "char"), ("Double", "double"), ("String", "std::string"), ("Ident", "std::string") ] classVar :: String -> String classVar c = map toLower c ++ "_" pointerIf :: Bool -> String -> String pointerIf b v = if b then "*" ++ v else v BNFC-2.9.5/src/BNFC/Backend/Common/StrUtils.hs0000644000000000000000000000225107346545000016633 0ustar0000000000000000module BNFC.Backend.Common.StrUtils where import Data.Char (ord) -- | Function that, given an input string, renders it either as a char (if -- it has legth 1) or a string. It should also excape characters correctly. -- The first returned value is the 'type' of the string: either C for char -- or S for string. (used in the C printer to choose the right rendering -- function) -- e.g. 
-- >>> renderCharOrString "a" -- ('C',"'a'") -- >>> renderCharOrString "abc" -- ('S',"\"abc\"") -- >>> renderCharOrString "'" -- ('C',"'\\''") -- >>> renderCharOrString "\"\\'" -- ('S',"\"\\\"\\\\\\'\"") renderCharOrString :: String -> (Char, String) renderCharOrString [c] | ord c <= 255 = ('C', show c) -- using show should quote ' renderCharOrString s = ('S', "\"" ++ escapeChars s ++ "\"") -- | Helper function that escapes characters in strings -- >>> escapeChars "\\" -- "\\\\" -- >>> escapeChars "\"" -- "\\\"" -- >>> escapeChars "'" -- "\\'" escapeChars :: String -> String escapeChars [] = [] escapeChars ('\\':xs) = '\\' : '\\' : escapeChars xs escapeChars ('\"':xs) = '\\' : '\"' : escapeChars xs escapeChars ('\'':xs) = '\\' : '\'' : escapeChars xs escapeChars (x:xs) = x : escapeChars xs BNFC-2.9.5/src/BNFC/Backend/Haskell.hs0000644000000000000000000004052107346545000015177 0ustar0000000000000000{- BNF Converter: Haskell main file Copyright (C) 2004 Author: Markus Forsberg, Peter Gammie, Aarne Ranta -} {-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE OverloadedStrings #-} module BNFC.Backend.Haskell (makeHaskell, AlexVersion(..), makefile, testfile) where import qualified Control.Monad as Ctrl import Data.Maybe (isJust) import System.FilePath ((<.>), (), pathSeparator) import Text.Printf (printf) import Text.PrettyPrint import BNFC.Backend.Agda import BNFC.Backend.Base import BNFC.Backend.Haskell.CFtoHappy import BNFC.Backend.Haskell.CFtoAlex3 import BNFC.Backend.Haskell.CFtoAbstract import BNFC.Backend.Haskell.CFtoTemplate import BNFC.Backend.Haskell.CFtoPrinter import BNFC.Backend.Haskell.CFtoLayout import BNFC.Backend.Haskell.HsOpts import BNFC.Backend.Haskell.MkErrM import BNFC.Backend.Haskell.Utils import BNFC.Backend.Txt2Tag import BNFC.Backend.XML (makeXML) import qualified BNFC.Backend.Common.Makefile as Makefile import BNFC.CF import BNFC.Options ( SharedOptions(..), TokenText(..), AlexVersion(..), HappyMode(..) , isDefault, printOptions ) import BNFC.Utils (when, table, getZonedTimeTruncatedToSeconds) -- | Entrypoint for the Haskell backend. makeHaskell :: SharedOptions -> CF -> Backend makeHaskell opts cf = do -- Get current time in printable form. time <- liftIO $ show <$> getZonedTimeTruncatedToSeconds let absMod = absFileM opts lexMod = alexFileM opts parMod = happyFileM opts prMod = printerFileM opts layMod = layoutFileM opts errMod = errFileM opts do -- Generate abstract syntax and pretty printer. mkfile (absFile opts) comment $ cf2Abstract opts absMod cf mkfile (printerFile opts) comment $ cf2Printer (tokenText opts) (functor opts) False prMod absMod cf -- Generate Alex lexer. Layout is resolved after lexing. case alexMode opts of Alex3 -> do mkfile (alexFile opts) commentWithEmacsModeHint $ cf2alex3 lexMod (tokenText opts) cf liftIO $ printf "Use Alex 3 to compile %s.\n" (alexFile opts) Ctrl.when (hasLayout cf) $ mkfile (layoutFile opts) comment $ cf2Layout layMod lexMod cf -- Generate Happy parser and matching test program. do mkfile (happyFile opts) commentWithEmacsModeHint $ cf2Happy parMod absMod lexMod (glr opts) (tokenText opts) (functor opts) cf -- liftIO $ printf "%s Tested with Happy 1.15\n" (happyFile opts) mkfile (tFile opts) comment $ testfile opts cf -- Both Happy parser and skeleton (template) rely on Err. mkfile (errFile opts) comment $ mkErrM errMod mkfile (templateFile opts) comment $ cf2Template (templateFileM opts) absMod (functor opts) cf -- Generate txt2tags documentation. 
mkfile (txtFile opts) t2tComment $ cfToTxt (lang opts) cf -- Generate XML and DTD printers. case xml opts of 2 -> makeXML opts True cf 1 -> makeXML opts False cf _ -> return () -- Generate Agda bindings for AST, Printer and Parser. Ctrl.when (agda opts) $ makeAgda time opts cf -- Generate Makefile. Makefile.mkMakefile (optMake opts) $ makefile opts cf -- | Generate the makefile (old version, with just one "all" target). _oldMakefile :: Options -> String -- ^ Filename of the makefile. -> Doc -- ^ Content of the makefile. _oldMakefile opts makeFile = vcat [ Makefile.mkRule "all" [] $ concat $ [ [ unwords $ [ "happy -gca" ] ++ glrParams ++ [ happyFile opts ] ] , [ "alex -g " ++ alexFile opts ] ] , cleanRule opts , distCleanRule opts makeFile ] where glrParams :: [String] glrParams = when (glr opts == GLR) $ [ "--glr", "--decode" ] -- | Rule to clean GHC and Latex generated files. cleanRule :: Options -> Doc cleanRule opts = Makefile.mkRule "clean" [] $ concat $ [ [ rmGen ] , when (agda opts) rmAgda ] where rmGen = unwords $ [ "-rm", "-f" ] ++ map prefix gen gen = concat [ genHs, genLtx, genAg ] genHs = [ "*.hi", "*.o" ] genLtx = [ "*.log", "*.aux", "*.dvi" ] genAg = when (agda opts) $ [ "*.agdai" ] rmAgda = [ "-rm -rf MAlonzo" ] prefix = if null dir then id else (dir ) dir = codeDir opts -- | Rule to clean all files generated by BNFC and the subsequent tools. distCleanRule :: Options -> String -> Doc distCleanRule opts makeFile = Makefile.mkRule "distclean" ["clean"] $ [ unwords . concat $ [ [ "-rm -f" ] -- Generated files that have a .bak variant , concatMap (\ f -> alsoBak (f opts)) [ absFile -- Abs.hs , composOpFile -- ComposOp.hs , txtFile -- Doc.txt , errFile -- ErrM.hs , layoutFile -- Layout.hs , alexFile -- Lex.x , happyFile -- Par.y , printerFile -- Print.hs , templateFile -- Skel.hs , tFile -- Test.hs , xmlFile -- XML.hs , agdaASTFile -- AST.agda , agdaParserFile -- Parser.agda , agdaLibFile -- IOLib.agda , agdaMainFile -- Main.agda , (\ opts -> dir ++ lang opts ++ ".dtd") ] -- Files that have no .bak variant , map (\ (file, ext) -> mkFile withLang file ext opts) [ ("Test" , "") , ("Lex" , "hs") , ("Par" , "hs") , ("Par" , "info") , ("ParData" , "hs") -- only if --glr ] , [ "Main" | agda opts ] , [ makeFile ] ] , if null dir then "" else "-rmdir -p " ++ dir ] where dir = let d = codeDir opts in if null d then "" else d ++ [pathSeparator] alsoBak :: FilePath -> [FilePath] alsoBak s = [ s, s <.> "bak" ] makefileHeader :: Options -> Doc makefileHeader Options{ agda, glr } = vcat [ "# Makefile for building the parser and test program." , "" , when agda $ "AGDA = agda" , "GHC = ghc" , "HAPPY = happy" , hsep $ concat [ [ "HAPPY_OPTS = --array --info" ] , if glr == GLR then [ "--glr --decode" ] else [ "--ghc --coerce" ] -- These options currently (2021-02-14) do not work with GLR mode -- see https://github.com/simonmar/happy/issues/173 ] , "ALEX = alex" , "ALEX_OPTS = --ghc" , "" ] -- | Generate the makefile. makefile :: Options -> CF -> String -- ^ Filename of the makefile. -> Doc -- ^ Content of the makefile. makefile opts cf makeFile = vcat [ makefileHeader opts , phonyRule , defaultRule , vcat [ "# Rules for building the parser." , "" ] -- If option -o was given, we have no access to the grammar file -- from the Makefile. Thus, we have to drop the rule for -- reinvokation of bnfc. , when (isDefault outDir opts) $ bnfcRule , happyRule , alexRule , testParserRule , when (agda opts) $ agdaRule , vcat [ "# Rules for cleaning generated files." 
, "" ] , cleanRule opts , distCleanRule opts makeFile , "# EOF" ] where -- | List non-file targets here. phonyRule :: Doc phonyRule = vcat [ "# List of goals not corresponding to file names." , "" , Makefile.mkRule ".PHONY" [ "all", "clean", "distclean" ] [] ] -- | Default: build test parser(s). defaultRule :: Doc defaultRule = vcat [ "# Default goal." , "" , Makefile.mkRule "all" tgts [] ] where tgts = concat $ [ [ tFileExe opts ] , [ "Main" | agda opts ] ] -- | Rule to reinvoke @bnfc@ to updated parser. -- Reinvokation should not recreate @Makefile@! bnfcRule :: Doc bnfcRule = Makefile.mkRule tgts [ lbnfFile opts ] [ recipe ] where recipe = unwords [ "bnfc", printOptions opts{ optMake = Nothing } ] tgts = unwords . map ($ opts) . concat $ [ [ absFile ] , [ layoutFile | lay ] , [ alexFile, happyFile, printerFile, tFile ] , when (agda opts) [ agdaASTFile, agdaParserFile, agdaLibFile, agdaMainFile ] ] lay :: Bool lay = hasLayout cf -- | Rule to invoke @happy@. happyRule :: Doc happyRule = Makefile.mkRule "%.hs" [ "%.y" ] [ "${HAPPY} ${HAPPY_OPTS} $<" ] -- | Rule to invoke @alex@. alexRule :: Doc alexRule = Makefile.mkRule "%.hs" [ "%.x" ] [ "${ALEX} ${ALEX_OPTS} $<" ] -- | Rule to build Haskell test parser. testParserRule :: Doc testParserRule = Makefile.mkRule tgt deps [ "${GHC} ${GHC_OPTS} $@" ] where tgt :: String tgt = tFileExe opts deps :: [String] deps = map ($ opts) $ concat [ [ absFile ] , [ layoutFile | lay ] , [ alexFileHs , happyFileHs , printerFile , tFile ] ] -- | Rule to build Agda parser. agdaRule :: Doc agdaRule = Makefile.mkRule "Main" deps [ "${AGDA} --no-libraries --ghc --ghc-flag=-Wwarn $<" ] where deps = map ($ opts) $ concat [ [ agdaMainFile -- must be first! , agdaASTFile , agdaParserFile , agdaLibFile -- Haskell modules bound by Agda modules: , errFile ] , [ layoutFile | lay ] , [ alexFileHs , happyFileHs , printerFile ] ] testfile :: Options -> CF -> String testfile opts cf = unlines $ concat $ [ [ "-- | Program to test parser." 
, "" , "module Main where" , "" , "import Prelude" , " ( ($), (.)" ] , [ " , Bool(..)" | lay ] , [ " , Either(..)" , " , Int, (>)" , " , String, (++), concat, unlines" , " , Show, show" , " , IO, (>>), (>>=), mapM_, putStrLn" , " , FilePath" ] , [ " , getContents, readFile" | tokenText opts == StringToken ] , [ " , error, flip, map, replicate, sequence_, zip" | use_glr ] , [ " )" ] , case tokenText opts of StringToken -> [] TextToken -> [ "import Data.Text.IO ( getContents, readFile )" , "import qualified Data.Text" ] ByteStringToken -> [ "import Data.ByteString.Char8 ( getContents, readFile )" , "import qualified Data.ByteString.Char8 as BS" ] , [ "import System.Environment ( getArgs )" , "import System.Exit ( exitFailure )" , "import Control.Monad ( when )" , "" ] , table "" $ concat [ [ [ "import " , absFileM opts , " (" ++ if_glr impTopCat ++ ")" ] ] , [ [ "import " , layoutFileM opts , " ( resolveLayout )" ] | lay ] , [ [ "import " , alexFileM opts , " ( Token, mkPosToken )" ] , [ "import " , happyFileM opts , " ( " ++ impParser ++ ", myLexer" ++ impParGLR ++ " )" ] , [ "import " , printerFileM opts , " ( Print, printTree )" ] , [ "import " , templateFileM opts , " ()" ] ] , [ [ "import " , xmlFileM opts , " ( XPrint, printXML )" ] | use_xml ] ] , [ "import qualified Data.Map ( Map, lookup, toList )" | use_glr ] , [ "import Data.Maybe ( fromJust )" | use_glr ] , [ "" , "type Err = Either String" , if use_glr then "type ParseFun a = [[Token]] -> (GLRResult, GLR_Output (Err a))" else "type ParseFun a = [Token] -> Err a" , "type Verbosity = Int" , "" , "putStrV :: Verbosity -> String -> IO ()" , "putStrV v s = when (v > 1) $ putStrLn s" , "" , "runFile :: (" ++ xpr ++ if_glr "TreeDecode a, " ++ "Print a, Show a) => Verbosity -> ParseFun a -> FilePath -> IO ()" , "runFile v p f = putStrLn f >> readFile f >>= run v p" , "" , "run :: (" ++ xpr ++ if_glr "TreeDecode a, " ++ "Print a, Show a) => Verbosity -> ParseFun a -> " ++ tokenTextType (tokenText opts) ++ " -> IO ()" , (if use_glr then runGlr else runStd use_xml) myLLexer , "showTree :: (Show a, Print a) => Int -> a -> IO ()" , "showTree v tree = do" , " putStrV v $ \"\\n[Abstract Syntax]\\n\\n\" ++ show tree" , " putStrV v $ \"\\n[Linearized tree]\\n\\n\" ++ printTree tree" , "" , "usage :: IO ()" , "usage = do" , " putStrLn $ unlines" , " [ \"usage: Call with one of the following argument combinations:\"" , " , \" --help Display this help message.\"" , " , \" (no arguments) Parse stdin verbosely.\"" , " , \" (files) Parse content of files verbosely.\"" , " , \" -s (files) Silent mode. 
Parse content of files silently.\"" , " ]" , "" , "main :: IO ()" , "main = do" , " args <- getArgs" , " case args of" , " [\"--help\"] -> usage" , " [] -> getContents >>= run 2 " ++ firstParser , " \"-s\":fs -> mapM_ (runFile 0 " ++ firstParser ++ ") fs" , " fs -> mapM_ (runFile 2 " ++ firstParser ++ ") fs" , "" ] , if_glr $ [ "the_parser :: ParseFun " ++ catToStr topType , "the_parser = lift_parser " ++ render (parserName topType) , "" , liftParser ] ] where lay = isJust hasTopLevelLayout || not (null layoutKeywords) use_xml = xml opts > 0 xpr = if use_xml then "XPrint a, " else "" use_glr = glr opts == GLR if_glr :: Monoid a => a -> a if_glr = when use_glr firstParser = if use_glr then "the_parser" else impParser impParser = render (parserName topType) topType = firstEntry cf impTopCat = unwords [ "", identCat topType, "" ] impParGLR = if_glr ", GLRResult(..), Branch, ForestId, TreeDecode(..), decode" myLLexer atom | lay = unwords [ "resolveLayout", show useTopLevelLayout, "$ myLexer", atom] | True = unwords [ "myLexer", atom] (hasTopLevelLayout, layoutKeywords, _) = layoutPragmas cf useTopLevelLayout = isJust hasTopLevelLayout runStd :: Bool -> (String -> String) -> String runStd xml myLLexer = unlines $ concat [ [ "run v p s =" , " case p ts of" , " Left err -> do" , " putStrLn \"\\nParse Failed...\\n\"" , " putStrV v \"Tokens:\"" , " mapM_ (putStrV v . showPosToken . mkPosToken) ts" -- , " putStrV v $ show ts" , " putStrLn err" , " exitFailure" , " Right tree -> do" , " putStrLn \"\\nParse Successful!\"" , " showTree v tree" ] , [ " putStrV v $ \"\\n[XML]\\n\\n\" ++ printXML tree" | xml ] , [ " where" , " ts = " ++ myLLexer "s" , " showPosToken ((l,c),t) = concat [ show l, \":\", show c, \"\\t\", show t ]" ] ] runGlr :: (String -> String) -> String runGlr myLLexer = unlines [ "run v p s" , " = let ts = map (:[]) $ " ++ myLLexer "s" , " (raw_output, simple_output) = p ts in" , " case simple_output of" , " GLR_Fail major minor -> do" , " putStrLn major" , " putStrV v minor" , " GLR_Result df trees -> do" , " putStrLn \"\\nParse Successful!\"" , " case trees of" , " [] -> error \"No results but parse succeeded?\"" , " [Right x] -> showTree v x" , " xs@(_:_) -> showSeveralTrees v xs" , " where" , " showSeveralTrees :: (Print b, Show b) => Int -> [Err b] -> IO ()" , " showSeveralTrees v trees" , " = sequence_ " , " [ do putStrV v (replicate 40 '-')" , " putStrV v $ \"Parse number: \" ++ show n" , " showTree v t" , " | (Right t,n) <- zip trees [1..]" , " ]" ] liftParser :: String liftParser = unlines [ "type Forest = Data.Map.Map ForestId [Branch] -- omitted in ParX export." , "data GLR_Output a" , " = GLR_Result { pruned_decode :: (Forest -> Forest) -> [a]" , " , semantic_result :: [a]" , " }" , " | GLR_Fail { main_message :: String" , " , extra_info :: String" , " }" , "" , "lift_parser" , " :: (TreeDecode a, Show a, Print a)" , " => ([[Token]] -> GLRResult) -> ParseFun a" , "lift_parser parser ts" , " = let result = parser ts in" , " (\\o -> (result, o)) $" , " case result of" , " ParseError ts f -> GLR_Fail \"Parse failed, unexpected token(s)\\n\"" , " (\"Tokens: \" ++ show ts)" , " ParseEOF f -> GLR_Fail \"Parse failed, unexpected EOF\\n\"" , " (\"Partial forest:\\n\"" , " ++ unlines (map show $ Data.Map.toList f))" , " ParseOK r f -> let find f = fromJust . 
((flip Data.Map.lookup) f)" , " dec_fn f = decode (find f) r" , " in GLR_Result (\\ff -> dec_fn $ ff f) (dec_fn f)" ] BNFC-2.9.5/src/BNFC/Backend/Haskell/0000755000000000000000000000000007346545000014641 5ustar0000000000000000BNFC-2.9.5/src/BNFC/Backend/Haskell/CFtoAbstract.hs0000644000000000000000000003026007346545000017515 0ustar0000000000000000{- BNF Converter: Abstract syntax Generator Copyright (C) 2004 Author: Markus Forsberg -} {-# LANGUAGE LambdaCase #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE RecordWildCards #-} module BNFC.Backend.Haskell.CFtoAbstract ( cf2Abstract , DefCfg(..), definedRules', definedRules ) where import Prelude hiding ((<>)) import Data.Either (isRight) import Data.Maybe import qualified Data.List as List import BNFC.CF import BNFC.Options ( SharedOptions(..), TokenText(..) ) import BNFC.PrettyPrint import BNFC.Utils ( when, applyWhen ) import BNFC.Backend.Haskell.Utils ( avoidReservedWords, catToType, mkDefName , tokenTextImport, tokenTextType, typeToHaskell' , posType, posConstr, noPosConstr , hasPositionClass, hasPositionMethod ) -- | Create a Haskell module containing data type definitions for the abstract syntax. cf2Abstract :: SharedOptions -> String -- ^ Module name. -> CF -- ^ Grammar. -> Doc -- tokenText :: TokenText -- ^ Use @ByteString@ or @Text@ instead of @String@? -- generic :: Bool -- ^ Derive @Data@, Generic@, @Typeable@? -- functor :: Bool -- ^ Make the tree a functor? cf2Abstract Options{ lang, tokenText, generic, functor } name cf = vsep . concat $ [ [] -- Modules header , [ vcat . concat $ [ [ "{-# LANGUAGE DeriveDataTypeable #-}" | gen ] , [ "{-# LANGUAGE DeriveGeneric #-}" | gen ] , [ "{-# LANGUAGE DeriveTraversable #-}" | fun ] , [ "{-# LANGUAGE FlexibleInstances #-}" | fun ] , [ "{-# LANGUAGE GeneralizedNewtypeDeriving #-}" | hasIdentLikeNoPos ] -- for IsString , [ "{-# LANGUAGE LambdaCase #-}" | fun ] , [ "{-# LANGUAGE PatternSynonyms #-}" | defPosition ] , [ "{-# LANGUAGE OverloadedStrings #-}" | not (null definitions), tokenText /= StringToken ] ] ] , [ "-- | The abstract syntax of language" <+> text lang <> "." ] , [ hsep [ "module", text name, "where" ] ] -- Imports , [ vcat . concat $ [ [ text $ "import Prelude (" ++ List.intercalate ", " typeImports ++ ")" | not $ null typeImports ] , [ prettyList 2 "import qualified Prelude as C" "(" ")" "," $ qualifiedPreludeImports | not $ null qualifiedPreludeImports ] , [ "import qualified Data.String" | hasIdentLikeNoPos ] -- for IsString ] ] , [ vcat . concat $ [ when hasTextualToks $ map text $ tokenTextImport tokenText , [ "import qualified Data.Data as C (Data, Typeable)" | gen ] , [ "import qualified GHC.Generics as C (Generic)" | gen ] ] ] -- AST types , map (prData functor (derivingClasses functor)) datas -- Smart constructors , definitions -- Token definition types , (`map` specialCats cf) $ \ c -> let hasPos = isPositionCat cf c in prSpecialData tokenText hasPos (derivingClassesTokenType hasPos) c -- BNFC'Position type -- We generate these synonyms for position info when --functor, -- regardless whether it is used in the abstract syntax. -- It may be used in the parser. , [ vcat [ "-- | Start position (line, column) of something." 
, "" , "type" <+> posType <+> "=" <+> "C.Maybe (C.Int, C.Int)" , "" , "pattern" <+> noPosConstr <+> "::" <+> posType , "pattern" <+> noPosConstr <+> "=" <+> "C.Nothing" , "" , "pattern" <+> posConstr <+> ":: C.Int -> C.Int ->" <+> posType , "pattern" <+> posConstr <+> "line col =" <+> "C.Just (line, col)" ] | defPosition ] -- HasPosition class , [ vcat [ "-- | Get the start position of something." , "" , "class" <+> hasPositionClass <+> "a where" , nest 2 $ hasPositionMethod <+> ":: a ->" <+> posType ] | hasPosition ] , when functor $ map instanceHasPositionData datas , map instanceHasPositionTokenType positionCats , [ "" ] -- ensure final newline ] where definitions = definedRules functor cf datas = cf2data cf positionCats = filter (isPositionCat cf) $ specialCats cf hasIdentLikeNoPos = hasIdentLikeTokens cf hasTextualToks = hasTextualTokens cf hasPosToks = hasPositionTokens cf hasData = not (null datas) -- @defPosition@: should the @BNCF'Position@ type be defined? defPosition = hasPosToks || functor -- @hasPosition@: should the @HasPosition@ class be defined? hasPosition = hasPosToks || fun gen = generic && hasData fun = functor && hasData stdClasses = [ "Eq", "Ord", "Show", "Read" ] funClasses = [ "Functor", "Foldable", "Traversable" ] genClasses = [ "Data", "Typeable", "Generic" ] derivingClasses functor = map ("C." ++) $ concat [ stdClasses , when functor funClasses , when generic genClasses ] derivingClassesTokenType hasPos = concat [ derivingClasses False , [ "Data.String.IsString" | not hasPos ] ] -- import Prelude (Char, Double, Integer, String) typeImports = filter (\ s -> hasData && s `elem` cfgLiterals cf || hasTextualToks && tokenText == StringToken && s == "String") baseTokenCatNames qualifiedPreludeImports = concat [ [ text $ List.intercalate ", " stdClasses | hasTextualToks || hasData ] , [ text $ List.intercalate ", " funClasses | fun ] , [ text $ "Int, Maybe(..)" | defPosition ] ] -- | -- -- >>> prData False ["Eq", "Ord", "Show", "Read"] (Cat "C", [("C1", [Cat "C"]), ("CIdent", [Cat "Ident"])]) -- data C = C1 C | CIdent Ident -- deriving (Eq, Ord, Show, Read) -- -- Note that the layout adapts if it does not fit in one line: -- >>> prData False ["Show"] (Cat "C", [("CAbracadabra",[]),("CEbrecedebre",[]),("CIbricidibri",[]),("CObrocodobro",[]),("CUbrucudubru",[])]) -- data C -- = CAbracadabra -- | CEbrecedebre -- | CIbricidibri -- | CObrocodobro -- | CUbrucudubru -- deriving (Show) -- -- If the first argument is @True@, generate a functor: -- >>> prData True ["Show", "Functor"] (Cat "C", [("C1", [Cat "C"]), ("CIdent", [TokenCat "Ident"])]) -- type C = C' BNFC'Position -- data C' a = C1 a (C' a) | CIdent a Ident -- deriving (Show, Functor) -- -- The case for lists: -- >>> prData True ["Show", "Functor"] (Cat "ExpList", [("Exps", [ListCat (Cat "Exp")])]) -- type ExpList = ExpList' BNFC'Position -- data ExpList' a = Exps a [Exp' a] -- deriving (Show, Functor) -- prData :: Bool -> [String] -> Data -> Doc prData functor derivingClasses (cat,rules) = vcat $ concat [ [ hsep [ "type", unprimedType, "=", primedType, posType ] | functor ] , [ hang ("data" <+> dataType) 4 $ constructors rules ] , [ nest 2 $ deriving_ derivingClasses ] ] where prRule (fun, cats) = hsep $ concat [ [text fun], ["a" | functor], map prArg cats ] unprimedType = pretty cat primedType = prime unprimedType prime = (<> "'") dataType | functor = primedType <+> "a" |otherwise= unprimedType prArg c | functor && (not .isRight . 
baseCat) c = catToType prime "a" c | otherwise = catToType id empty c constructors [] = empty constructors (h:t) = sep $ ["=" <+> prRule h] ++ map (("|" <+>) . prRule) t -- | Generate @instance HasPosition@ for a data type. -- -- >>> instanceHasPositionData (Cat "C", [("C1", [Cat "C"]), ("CIdent", [Cat "Ident"])]) -- instance HasPosition C where -- hasPosition = \case -- C1 p _ -> p -- CIdent p _ -> p -- -- >>> instanceHasPositionData (Cat "ExpList", [("Exps", [ListCat (Cat "Exp")])]) -- instance HasPosition ExpList where -- hasPosition = \case -- Exps p _ -> p instanceHasPositionData :: Data -> Doc instanceHasPositionData (cat, rules) = vcat . concat $ [ [ "instance" <+> hasPositionClass <+> dat <+> "where" ] , [ nest 2 $ "hasPosition = \\case" ] , map (\ (c, args) -> nest 4 . hsep $ concat [ [text c, pos], "_" <$ args, ["->", pos] ]) rules ] where dat = text $ catToStr cat pos = "p" -- | Generate a newtype declaration for Ident types -- -- >>> prSpecialData StringToken False ["Show","Data.String.IsString"] catIdent -- newtype Ident = Ident String -- deriving (Show, Data.String.IsString) -- -- >>> prSpecialData StringToken True ["Show"] catIdent -- newtype Ident = Ident ((C.Int, C.Int), String) -- deriving (Show) -- -- >>> prSpecialData TextToken False ["Show"] catIdent -- newtype Ident = Ident Data.Text.Text -- deriving (Show) -- -- >>> prSpecialData ByteStringToken False ["Show"] catIdent -- newtype Ident = Ident BS.ByteString -- deriving (Show) -- -- >>> prSpecialData ByteStringToken True ["Show"] catIdent -- newtype Ident = Ident ((C.Int, C.Int), BS.ByteString) -- deriving (Show) -- prSpecialData :: TokenText -- ^ Format of token content. -> Bool -- ^ If @True@, store the token position. -> [String] -- ^ Derived classes. -> TokenCat -- ^ Token category name. -> Doc prSpecialData tokenText position classes cat = vcat [ hsep [ "newtype", text cat, "=", text cat, contentSpec ] , nest 2 $ deriving_ classes ] where contentSpec | position = parens ( "(C.Int, C.Int), " <> stringType) | otherwise = stringType stringType = text $ tokenTextType tokenText -- | Generate 'deriving' clause -- -- >>> deriving_ ["Show", "Read"] -- deriving (Show, Read) -- deriving_ :: [String] -> Doc deriving_ cls = "deriving" <+> parens (hsep $ punctuate "," $ map text cls) -- | Generate HasPosition instances for Ident types -- -- >>> instanceHasPositionTokenType catIdent -- instance HasPosition Ident where -- hasPosition (Ident (p, _)) = C.Just p instanceHasPositionTokenType :: TokenCat -> Doc instanceHasPositionTokenType cat = vcat [ "instance" <+> hasPositionClass <+> t <+> "where" , nest 2 $ "hasPosition " <> parens (t <+> "(p, _)") <+> "= C.Just p" ] where t = text cat -- | Parametrize 'definedRules' so that it can be used for Agda as well. data DefCfg = DefCfg { sanitizeName :: String -> String , hasType :: String , arrow :: String , lambda :: String , cons :: String , convTok :: String -> String , convLitInt :: Exp -> Exp , polymorphism :: [Base] -> [Base] } haskellDefCfg :: DefCfg haskellDefCfg = DefCfg { sanitizeName = avoidReservedWords [] , hasType = "::" , arrow = "->" , lambda = "\\" , cons = "(:)" , convTok = id , convLitInt = id , polymorphism = id } -- | Generate Haskell code for the @define@d constructors. definedRules :: Bool -> CF -> [Doc] definedRules = definedRules' haskellDefCfg -- | Generate Haskell/Agda code for the @define@d constructors. 
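-- Illustrative sketch, added for clarity; not part of BNFC itself.  For a
-- hypothetical grammar with a lowercase-labelled (defined) rule and a pragma
-- such as
--
--   define ePair x y = EPair x y ;
--
-- 'definedRules' emits, roughly, an ordinary Haskell function for that label
-- (the exact name is produced by 'mkDefName' / 'avoidReservedWords', and the
-- signature comes from the type checker's signature for the label):
--
--   ePair :: Exp -> Exp -> Exp
--   ePair = \ x y -> EPair x y
--
-- In --functor mode an extra position argument is threaded through and the
-- categories become their primed, parameterised variants (see 'definedRules''
-- just below).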
definedRules' :: DefCfg -> Bool -> CF -> [Doc] definedRules' DefCfg{..} functor cf = map mkDef $ definitions cf where mkDef (Define f args e _) = vcat $ concat [ [ text $ unwords [ fName, hasType, typeToHaskell' arrow $ typ $ wpThing t ] | t <- maybeToList $ sigLookup f cf ] , [ hsep $ concat [ map text [ fName, "=", lambda ] , map text $ addFunctorArg id $ map (sanitizeName . fst) args , [ text arrow, pretty $ sanitize e ] ] ] ] where fName = mkDefName f typ :: Type -> Type typ = applyWhen functor $ \ (FunT ts t) -> FunT (polymorphism $ BaseT "a" : map addParam ts) $ addParam t addParam :: Base -> Base addParam = fmap $ \ x -> if tokTyp x then x else x ++ "' a" tokTyp :: String -> Bool tokTyp = (`elem` literals cf) sanitize :: Exp -> Exp sanitize = \case App x t es | isConsFun x -> App cons t $ map sanitize es | isNilFun x -> App x t $ map sanitize es | tokTyp x -> App (convTok x) t $ map sanitize es | otherwise -> App (sanitizeName x) t $ addFunctorArg (\ x -> App x dummyType []) $ map sanitize es Var x -> Var $ sanitizeName x e@LitInt{} -> convLitInt e e@LitDouble{} -> e e@LitChar{} -> e e@LitString{} -> e -- Functor argument addFunctorArg :: (String -> a) -> [a] -> [a] addFunctorArg g = applyWhen functor (g "_a" :) BNFC-2.9.5/src/BNFC/Backend/Haskell/CFtoAlex3.hs0000644000000000000000000003764607346545000016745 0ustar0000000000000000{- BNF Converter: Alex 3.x Generator Copyright (C) 2012 Author: Antti-Juhani Kaijanaho Copyright (C) 2004 Author: Peter Gammie (C)opyright 2003, {aarne,markus,peteg} at cs dot chalmers dot se -} {-# LANGUAGE LambdaCase #-} {-# LANGUAGE OverloadedStrings #-} module BNFC.Backend.Haskell.CFtoAlex3 (cf2alex3) where import Data.Char import qualified Data.List as List import BNFC.Abs import BNFC.CF import BNFC.Lexing ( mkRegMultilineComment ) import BNFC.Options ( TokenText(..) ) import BNFC.PrettyPrint import BNFC.Utils ( table, when, unless ) import BNFC.Backend.Common ( unicodeAndSymbols ) import BNFC.Backend.Haskell.Utils cf2alex3 :: String -> TokenText -> CF -> String cf2alex3 name tokenText cf = unlines $ List.intercalate [""] $ -- equivalent to vsep: intersperse empty lines [ prelude name tokenText , cMacros , rMacros cf , restOfAlex tokenText cf ] prelude :: String -> TokenText -> [String] prelude name tokenText = concat [ [ "-- Lexer definition for use with Alex 3" , "{" , "{-# OPTIONS -fno-warn-incomplete-patterns #-}" , "{-# OPTIONS_GHC -w #-}" , "" , "{-# LANGUAGE PatternSynonyms #-}" , "" , "module " ++ name ++ " where" , "" , "import Prelude" , "" ] , tokenTextImport tokenText , [ "import qualified Data.Bits" , "import Data.Char (ord)" , "import Data.Function (on)" , "import Data.Word (Word8)" , "}" ] ] -- | Character class definitions. cMacros :: [String] cMacros = [ "-- Predefined character classes" , "" , "$c = [A-Z\\192-\\221] # [\\215] -- capital isolatin1 letter (215 = \\times) FIXME" , "$s = [a-z\\222-\\255] # [\\247] -- small isolatin1 letter (247 = \\div ) FIXME" , "$l = [$c $s] -- letter" , "$d = [0-9] -- digit" , "$i = [$l $d _ '] -- identifier character" , "$u = [. \\n] -- universal: any character" ] -- | Regular expressions and lex actions. rMacros :: CF -> [String] rMacros cf = unless (null symbs) [ "-- Symbols and non-identifier-like reserved words" , "" , "@rsyms = " ++ List.intercalate " | " (map (unwords . 
esc) symbs) ] where symbs = unicodeAndSymbols cf esc :: String -> [String] esc s = if null a then rest else show a : rest where (a, r) = span (\ c -> isAscii c && isAlphaNum c) s rest = case r of [] -> [] c : xs -> (if isPrint c then ['\\',c] else '\\' : show (ord c)) : esc xs restOfAlex :: TokenText -> CF -> [String] restOfAlex tokenText cf = concat [ [ ":-" , "" ] , lexComments $ comments cf , [ "-- Whitespace (skipped)" , "$white+ ;" , "" ] , unless (null $ unicodeAndSymbols cf) [ "-- Symbols" , "@rsyms" , " { tok (eitherResIdent TV) }" , "" ] , userDefTokenTypes , [ "-- Keywords and Ident" , "$l $i*" , " { tok (eitherResIdent TV) }" , "" ] , ifC catString [ "-- String" , "\\\" ([$u # [\\\" \\\\ \\n]] | (\\\\ (\\\" | \\\\ | \\' | n | t | r | f)))* \\\"" , " { tok (TL . unescapeInitTail) }" , "" ] , ifC catChar [ "-- Char" , "\\\' ($u # [\\\' \\\\] | \\\\ [\\\\ \\\' n t r f]) \\'" , " { tok TC }" , "" ] , ifC catInteger [ "-- Integer" , "$d+" , " { tok TI }" , "" ] , ifC catDouble [ "-- Double" , "$d+ \\. $d+ (e (\\-)? $d+)?" , " { tok TD }" , "" ] , [ "{" , "-- | Create a token with position." , "tok :: (" ++ stringType ++ " -> Tok) -> (Posn -> " ++ stringType ++ " -> Token)" , "tok f p = PT p . f" , "" , "-- | Token without position." , "data Tok" ] , map (" " ++) $ table " " $ [ [ "= TK {-# UNPACK #-} !TokSymbol", "-- ^ Reserved word or symbol." ] , [ "| TL !" ++ stringType , "-- ^ String literal." ] , [ "| TI !" ++ stringType , "-- ^ Integer literal." ] , [ "| TV !" ++ stringType , "-- ^ Identifier." ] , [ "| TD !" ++ stringType , "-- ^ Float literal." ] , [ "| TC !" ++ stringType , "-- ^ Character literal." ] ] , [ " | T_" ++ name ++ " !" ++ stringType | name <- tokenNames cf ] , [ " deriving (Eq, Show, Ord)" , "" , "-- | Smart constructor for 'Tok' for the sake of backwards compatibility." , "pattern TS :: " ++ stringType ++ " -> Int -> Tok" , "pattern TS t i = TK (TokSymbol t i)" , "" , "-- | Keyword or symbol tokens have a unique ID." , "data TokSymbol = TokSymbol" , " { tsText :: " ++ stringType , " -- ^ Keyword or symbol text." , " , tsID :: !Int" , " -- ^ Unique ID." , " } deriving (Show)" , "" , "-- | Keyword/symbol equality is determined by the unique ID." , "instance Eq TokSymbol where (==) = (==) `on` tsID" , "" , "-- | Keyword/symbol ordering is determined by the unique ID." , "instance Ord TokSymbol where compare = compare `on` tsID" , "" , "-- | Token with position." , "data Token" , " = PT Posn Tok" , " | Err Posn" , " deriving (Eq, Show, Ord)" , "" , "-- | Pretty print a position." , "printPosn :: Posn -> String" , "printPosn (Pn _ l c) = \"line \" ++ show l ++ \", column \" ++ show c" , "" , "-- | Pretty print the position of the first token in the list." , "tokenPos :: [Token] -> String" , "tokenPos (t:_) = printPosn (tokenPosn t)" , "tokenPos [] = \"end of file\"" , "" , "-- | Get the position of a token." , "tokenPosn :: Token -> Posn" , "tokenPosn (PT p _) = p" , "tokenPosn (Err p) = p" , "" , "-- | Get line and column of a token." , "tokenLineCol :: Token -> (Int, Int)" , "tokenLineCol = posLineCol . tokenPosn" , "" , "-- | Get line and column of a position." , "posLineCol :: Posn -> (Int, Int)" , "posLineCol (Pn _ l c) = (l,c)" , "" , "-- | Convert a token into \"position token\" form." , "mkPosToken :: Token -> ((Int, Int), " ++ stringType ++ ")" , "mkPosToken t = (tokenLineCol t, tokenText t)" , "" , "-- | Convert a token to its text." 
, "tokenText :: Token -> " ++ stringType , "tokenText t = case t of" , " PT _ (TS s _) -> s" , " PT _ (TL s) -> " ++ applyP stringPack "show s" , " PT _ (TI s) -> s" , " PT _ (TV s) -> s" , " PT _ (TD s) -> s" , " PT _ (TC s) -> s" , " Err _ -> " ++ apply stringPack "\"#error\"" ] , [ " PT _ (T_" ++ name ++ " s) -> s" | name <- tokenNames cf ] , [ "" , "-- | Convert a token to a string." , "prToken :: Token -> String" , "prToken t = " ++ applyP stringUnpack "tokenText t" , "" , "-- | Finite map from text to token organized as binary search tree." , "data BTree" , " = N -- ^ Nil (leaf)." , " | B " ++ stringType ++ " Tok BTree BTree" , " -- ^ Binary node." , " deriving (Show)" , "" , "-- | Convert potential keyword into token or use fallback conversion." , "eitherResIdent :: (" ++ stringType ++ " -> Tok) -> " ++ stringType ++ " -> Tok" , "eitherResIdent tv s = treeFind resWords" , " where" , " treeFind N = tv s" , " treeFind (B a t left right) =" , " case compare s a of" , " LT -> treeFind left" , " GT -> treeFind right" , " EQ -> t" , "" , "-- | The keywords and symbols of the language organized as binary search tree." , "resWords :: BTree" , render $ hang "resWords =" 2 $ pretty $ sorted2tree tokens ] , unless (null tokens) [ " where" , " b s n = B bs (TS bs n)" , " where" , " bs = "++ apply stringPack "s" ] , [ "" , "-- | Unquote string literal." , "unescapeInitTail :: " ++ stringType ++ " -> " ++ stringType ++ "" , "unescapeInitTail = " ++ stringPack ++ " . unesc . tail . " ++ stringUnpack , " where" , " unesc s = case s of" , " '\\\\':c:cs | elem c ['\\\"', '\\\\', '\\\''] -> c : unesc cs" , " '\\\\':'n':cs -> '\\n' : unesc cs" , " '\\\\':'t':cs -> '\\t' : unesc cs" , " '\\\\':'r':cs -> '\\r' : unesc cs" , " '\\\\':'f':cs -> '\\f' : unesc cs" , " '\"':[] -> []" , " c:cs -> c : unesc cs" , " _ -> []" , "" , "-------------------------------------------------------------------" , "-- Alex wrapper code." , "-- A modified \"posn\" wrapper." , "-------------------------------------------------------------------" , "" , "data Posn = Pn !Int !Int !Int" , " deriving (Eq, Show, Ord)" , "" , "alexStartPos :: Posn" , "alexStartPos = Pn 0 1 1" , "" , "alexMove :: Posn -> Char -> Posn" , "alexMove (Pn a l c) '\\t' = Pn (a+1) l (((c+7) `div` 8)*8+1)" , "alexMove (Pn a l c) '\\n' = Pn (a+1) (l+1) 1" , "alexMove (Pn a l c) _ = Pn (a+1) l (c+1)" , "" , "type Byte = Word8" , "" , "type AlexInput = (Posn, -- current position," , " Char, -- previous char" , " [Byte], -- pending bytes on the current char" , " " ++ stringType ++ ") -- current input string" , "" , "tokens :: " ++ stringType ++ " -> [Token]" , "tokens str = go (alexStartPos, '\\n', [], str)" , " where" , " go :: AlexInput -> [Token]" , " go inp@(pos, _, _, str) =" , " case alexScan inp 0 of" , " AlexEOF -> []" , " AlexError (pos, _, _, _) -> [Err pos]" , " AlexSkip inp' len -> go inp'" , " AlexToken inp' len act -> act pos (" ++ stringTake ++ " len str) : (go inp')" , "" , "alexGetByte :: AlexInput -> Maybe (Byte,AlexInput)" , "alexGetByte (p, c, (b:bs), s) = Just (b, (p, c, bs, s))" , "alexGetByte (p, _, [], s) =" , " case " ++ apply stringUncons "s" ++ " of" , " " ++ stringNilP ++ " -> Nothing" , " " ++ stringConsP ++ " ->" , " let p' = alexMove p c" , " (b:bs) = utf8Encode c" , " in p' `seq` Just (b, (p', c, bs, s))" , "" , "alexInputPrevChar :: AlexInput -> Char" , "alexInputPrevChar (p, c, bs, s) = c" , "" , "-- | Encode a Haskell String to a list of Word8 values, in UTF8 format." 
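-- Worked example, added for clarity; not part of the generated code.  For the
-- UTF-8 encoder emitted just below: 'λ' is U+03BB, i.e. code point 955, which
-- is <= 0x7ff, so it is encoded on two bytes:
--
--   0xc0 + (955 `shiftR` 6) = 0xc0 + 14 = 0xce
--   0x80 + (955 .&. 0x3f)   = 0x80 + 59 = 0xbb
--
-- so utf8Encode 'λ' yields [0xce, 0xbb], the standard UTF-8 encoding.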
, "utf8Encode :: Char -> [Word8]" , "utf8Encode = map fromIntegral . go . ord" , " where" , " go oc" , " | oc <= 0x7f = [oc]" , "" , " | oc <= 0x7ff = [ 0xc0 + (oc `Data.Bits.shiftR` 6)" , " , 0x80 + oc Data.Bits..&. 0x3f" , " ]" , "" , " | oc <= 0xffff = [ 0xe0 + (oc `Data.Bits.shiftR` 12)" , " , 0x80 + ((oc `Data.Bits.shiftR` 6) Data.Bits..&. 0x3f)" , " , 0x80 + oc Data.Bits..&. 0x3f" , " ]" , " | otherwise = [ 0xf0 + (oc `Data.Bits.shiftR` 18)" , " , 0x80 + ((oc `Data.Bits.shiftR` 12) Data.Bits..&. 0x3f)" , " , 0x80 + ((oc `Data.Bits.shiftR` 6) Data.Bits..&. 0x3f)" , " , 0x80 + oc Data.Bits..&. 0x3f" , " ]" , "}" ] ] where (stringType, stringTake, stringUncons, stringPack, stringUnpack, stringNilP, stringConsP) = case tokenText of StringToken -> ("String", "take", "", "id", "id", "[]", "(c:s)" ) ByteStringToken -> ("BS.ByteString", "BS.take", "BS.uncons", "BS.pack", "BS.unpack", "Nothing", "Just (c,s)") TextToken -> ("Data.Text.Text", "Data.Text.take", "Data.Text.uncons", "Data.Text.pack", "Data.Text.unpack", "Nothing", "Just (c,s)") apply :: String -> String -> String apply "" s = s apply "id" s = s apply f s = f ++ " " ++ s applyP :: String -> String -> String applyP "" s = s applyP "id" s = s applyP f s = f ++ " (" ++ s ++ ")" ifC :: Monoid m => TokenCat -> m -> m ifC = when . isUsedCat cf . TokenCat lexComments :: ( [(String, String)] -- block comment delimiters , [String] -- line comment initiators ) -> [String] -- Alex declarations lexComments (block, line) = concat $ [ concatMap lexLineComment line , concatMap (uncurry lexBlockComment) block ] lexLineComment :: String -- ^ Line comment start. -> [String] -- ^ Alex declaration. lexLineComment s = [ unwords [ "-- Line comment", show s ] , concat [ "\"", s, "\" [.]* ;" ] , "" ] lexBlockComment :: String -- ^ Start of block comment. -> String -- ^ End of block comment. -> [String] -- ^ Alex declaration. lexBlockComment start end = [ unwords [ "-- Block comment", show start, show end ] , printRegAlex (mkRegMultilineComment start end) ++ " ;" , "" ] userDefTokenTypes :: [String] userDefTokenTypes = concat [ [ "-- token " ++ name , printRegAlex exp , " { tok (eitherResIdent T_" ++ name ++ ") }" , "" ] | (name, exp) <- tokenPragmas cf ] tokens = cfTokens cf -- | Binary search tree. data BTree a = N | B String a (BTree a) (BTree a) instance Pretty a => Pretty (BTree a) where prettyPrec _ N = text "N" prettyPrec n (B k v l r) = parensIf (n > 0) $ hang ("b" <+> text (show k) <+> pretty v) 2 $ sep [ prettyPrec 1 l , prettyPrec 1 r ] -- | Create a balanced search tree from a sorted list. sorted2tree :: [(String,a)] -> BTree a sorted2tree [] = N sorted2tree xs = B x n (sorted2tree t1) (sorted2tree t2) where (t1, (x,n) : t2) = splitAt (length xs `div` 2) xs ------------------------------------------------------------------- -- Inlined version of former @BNFC.Backend.Haskell.RegToAlex@. -- Syntax has changed... ------------------------------------------------------------------- -- modified from pretty-printer generated by the BNF converter -- the top-level printing method printRegAlex :: Reg -> String printRegAlex = render' . 
prt 0 render' :: [String] -> String render' = \case "[" : ts -> cons "[" $ render' ts "(" : ts -> cons "(" $ render' ts t : "," : ts -> cons t $ space "," $ render' ts t : ")" : ts -> cons t $ cons ")" $ render' ts t : "]" : ts -> cons t $ cons "]" $ render' ts t : ts -> space t $ render' ts _ -> "" where cons s t = s ++ t space t s = if null s then t else t ++ " " ++ s parenth :: [String] -> [String] parenth ss = ["("] ++ ss ++ [")"] -- the printer class does the job class Print a where prt :: Int -> a -> [String] prtList :: [a] -> [String] prtList = concatMap (prt 0) instance Print a => Print [a] where prt _ = prtList instance Print Char where prt _ = \case '\n' -> ["\\n"] '\t' -> ["\\t"] '\r' -> ["\\r"] '\f' -> ["\\f"] c | isAlphaNum c -> [[c]] c | isPrint c -> ['\\':[c]] -- ['\'':c:'\'':[]] -- Does not work for ) c -> ['\\':show (ord c)] prtList = map (concat . prt 0) prPrec :: Int -> Int -> [String] -> [String] prPrec i j = if j prPrec i 2 $ prt 2 reg0 ++ prt 3 reg RAlt reg0 reg -> prPrec i 1 $ concat [prt 1 reg0 , ["|"] , prt 2 reg] RStar reg -> prPrec i 3 $ prt 3 reg ++ ["*"] RPlus reg -> prPrec i 3 $ prt 3 reg ++ ["+"] ROpt reg -> prPrec i 3 $ prt 3 reg ++ ["?"] -- Atomic/parenthesized expressions RMinus reg0 reg -> concat [ ["["], prt 2 reg0 , ["#"] , prt 2 reg, ["]"] ] REps -> ["()"] RChar c -> prt 0 c RAlts str -> concat [["["],prt 0 str,["]"]] RSeqs str -> prPrec i 2 $ prt 0 str RDigit -> ["$d"] RLetter -> ["$l"] RUpper -> ["$c"] RLower -> ["$s"] RAny -> ["$u"] BNFC-2.9.5/src/BNFC/Backend/Haskell/CFtoHappy.hs0000644000000000000000000003225107346545000017035 0ustar0000000000000000{- BNF Converter: Happy Generator Copyright (C) 2004 Author: Markus Forsberg, Aarne Ranta -} {-# LANGUAGE LambdaCase #-} {-# LANGUAGE OverloadedStrings #-} module BNFC.Backend.Haskell.CFtoHappy (cf2Happy, convert) where import Prelude hiding ((<>)) import Data.Foldable (toList) import Data.List (intersperse) import BNFC.CF import BNFC.Backend.Common.StrUtils (escapeChars) import BNFC.Backend.Haskell.Utils import BNFC.Options (HappyMode(..), TokenText(..)) import BNFC.PrettyPrint import BNFC.Utils -- Type declarations type Rules = [(NonTerminal,[(Pattern,Action)])] type Pattern = String type Action = String type MetaVar = String -- default naming tokenName :: String tokenName = "Token" -- | Generate a happy parser file from a grammar. cf2Happy :: ModuleName -- ^ This module's name. -> ModuleName -- ^ Abstract syntax module name. -> ModuleName -- ^ Lexer module name. -> HappyMode -- ^ Happy mode. -> TokenText -- ^ Use @ByteString@ or @Text@? -> Bool -- ^ AST is a functor? -> CF -- ^ Grammar. -> String -- ^ Generated code. cf2Happy name absName lexName mode tokenText functor cf = unlines [ header name absName lexName tokenText eps , render $ declarations mode functor eps , render $ tokens cf functor , delimiter , specialRules absName functor tokenText cf , render $ prRules absName functor (rulesForHappy absName functor cf) , "" , footer absName tokenText functor eps cf ] where eps = toList $ allEntryPoints cf -- | Construct the header. header :: ModuleName -> ModuleName -> ModuleName -> TokenText -> [Cat] -> String header modName absName lexName tokenText eps = unlines $ concat [ [ "-- Parser definition for use with Happy" , "{" , "{-# OPTIONS_GHC -fno-warn-incomplete-patterns -fno-warn-overlapping-patterns #-}" , "{-# LANGUAGE PatternSynonyms #-}" , "" , "module " ++ modName , " ( happyError" , " , myLexer" ] , map ((" , " ++) . render . 
parserName) eps , [ " ) where" , "" , "import Prelude" , "" , "import qualified " ++ absName , "import " ++ lexName ] , tokenTextImport tokenText , [ "" , "}" ] ] -- | The declarations of a happy file. -- >>> declarations Standard False [Cat "A", Cat "B", ListCat (Cat "B")] -- %name pA A -- %name pB B -- %name pListB ListB -- -- no lexer declaration -- %monad { Err } { (>>=) } { return } -- %tokentype {Token} -- -- >>> declarations Standard True [Cat "A", Cat "B", ListCat (Cat "B")] -- %name pA_internal A -- %name pB_internal B -- %name pListB_internal ListB -- -- no lexer declaration -- %monad { Err } { (>>=) } { return } -- %tokentype {Token} declarations :: HappyMode -> Bool -> [Cat] -> Doc declarations mode functor ns = vcat [ vcat $ map generateP ns , case mode of Standard -> "-- no lexer declaration" GLR -> "%lexer { myLexer } { Err _ }", "%monad { Err } { (>>=) } { return }", "%tokentype" <+> braces (text tokenName) ] where generateP n = "%name" <+> parserName n <> (if functor then "_internal" else "") <+> text (identCat n) -- The useless delimiter symbol. delimiter :: String delimiter = "\n%%\n" -- | Generate the list of tokens and their identifiers. tokens :: CF -> Bool -> Doc tokens cf functor -- Andreas, 2019-01-02: "%token" followed by nothing is a Happy parse error. -- Thus, if we have no tokens, do not output anything. | null ts = empty | otherwise = "%token" $$ (nest 2 $ vcat $ map text $ table " " ts) where ts = map prToken (cfTokens cf) ++ specialToks cf functor prToken (t,k) = [ render (convert t), "{ PT _ (TS _ " ++ show k ++ ")", "}" ] -- Happy doesn't allow characters such as åäö to occur in the happy file. This -- is however not a restriction, just a naming paradigm in the happy source file. convert :: String -> Doc convert = quotes . text . escapeChars rulesForHappy :: ModuleName -> Bool -> CF -> Rules rulesForHappy absM functor cf = for (ruleGroups cf) $ \ (cat, rules) -> (cat, map (constructRule absM functor) rules) -- | For every non-terminal, we construct a set of rules. A rule is a sequence -- of terminals and non-terminals, and an action to be performed. 
-- -- >>> constructRule "Foo" False (npRule "EPlus" (Cat "Exp") [Left (Cat "Exp"), Right "+", Left (Cat "Exp")] Parsable) -- ("Exp '+' Exp","Foo.EPlus $1 $3") -- -- If we're using functors, it adds position value: -- -- >>> constructRule "Foo" True (npRule "EPlus" (Cat "Exp") [Left (Cat "Exp"), Right "+", Left (Cat "Exp")] Parsable) -- ("Exp '+' Exp","(fst $1, Foo.EPlus (fst $1) (snd $1) (snd $3))") -- -- List constructors should not be prefixed by the abstract module name: -- -- >>> constructRule "Foo" False (npRule "(:)" (ListCat (Cat "A")) [Left (Cat "A"), Right",", Left (ListCat (Cat "A"))] Parsable) -- ("A ',' ListA","(:) $1 $3") -- -- >>> constructRule "Foo" False (npRule "(:[])" (ListCat (Cat "A")) [Left (Cat "A")] Parsable) -- ("A","(:[]) $1") -- -- Coercion are much simpler: -- -- >>> constructRule "Foo" True (npRule "_" (Cat "Exp") [Right "(", Left (Cat "Exp"), Right ")"] Parsable) -- ("'(' Exp ')'","(uncurry Foo.BNFC'Position (tokenLineCol $1), (snd $2))") -- constructRule :: IsFun f => String -> Bool -> Rul f -> (Pattern, Action) constructRule absName functor (Rule fun0 _cat rhs Parsable) = (pat, action) where fun = funName fun0 (pat, metavars) = generatePatterns functor rhs action | functor = "(" ++ actionPos id ++ ", " ++ actionValue ++ ")" | otherwise = actionValue actionPos paren = case rhs of [] -> qualify noPosConstr (Left _:_) -> paren "fst $1" (Right _:_) -> paren $ unwords [ "uncurry", qualify posConstr , "(tokenLineCol $1)" ] actionValue | isCoercion fun = unwords metavars | isNilCons fun = unwords (qualify fun : metavars) | functor = unwords (qualify fun : actionPos (\ x -> "(" ++ x ++ ")") : metavars) | otherwise = unwords (qualify fun : metavars) qualify f | isConsFun f || isNilCons f = f | isDefinedRule f = absName ++ "." ++ mkDefName f | otherwise = absName ++ "." ++ f constructRule _ _ (Rule _ _ _ Internal) = undefined -- impossible -- | Generate patterns and a set of metavariables (de Bruijn indices) indicating -- where in the pattern the non-terminal are locate. -- -- >>> generatePatterns False [ Left (Cat "Exp"), Right "+", Left (Cat "Exp") ] -- ("Exp '+' Exp",["$1","$3"]) -- -- >>> generatePatterns True [ Left (Cat "Exp"), Right "+", Left (Cat "Exp") ] -- ("Exp '+' Exp",["(snd $1)","(snd $3)"]) -- generatePatterns :: Bool -> SentForm -> (Pattern, [MetaVar]) generatePatterns _ [] = ("{- empty -}", []) generatePatterns functor its = ( unwords $ for its $ either {-non-term:-} identCat {-term:-} (render . convert) , [ if functor then "(snd $" ++ show i ++ ")" else ('$' : show i) | (i, Left{}) <- zip [1 :: Int ..] its ] ) -- We have now constructed the patterns and actions, -- so the only thing left is to merge them into one string. 
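-- Illustrative sketch, added for clarity; not part of BNFC itself.  The module
-- generated by 'cf2Happy' is consumed together with the generated Alex lexer;
-- module and entry-point names below are hypothetical (they follow the grammar
-- name and its entry categories), and Err is the generated 'Either String':
--
--   import AbsCalc ( Exp )              -- hypothetical generated module
--   import ParCalc ( pExp, myLexer )    -- hypothetical generated module
--
--   parseExp :: String -> Either String Exp
--   parseExp = pExp . myLexer
--
--   -- parseExp "1 + 2"  ==>  Right (EAdd (EInt 1) (EInt 2)), for a suitable grammar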
-- | -- >>> prRules "Foo" False [(Cat "Expr", [("Integer", "Foo.EInt $1"), ("Expr '+' Expr", "Foo.EPlus $1 $3")])] -- Expr :: { Foo.Expr } -- Expr : Integer { Foo.EInt $1 } | Expr '+' Expr { Foo.EPlus $1 $3 } -- -- if there's a lot of cases, print on several lines: -- >>> prRules "" False [(Cat "Expr", [("Abcd", "Action"), ("P2", "A2"), ("P3", "A3"), ("P4", "A4"), ("P5","A5"), ("P6", "A6"), ("P7", "A7"), ("P8", "A8"), ("P9","A9")])] -- Expr :: { Expr } -- Expr -- : Abcd { Action } -- | P2 { A2 } -- | P3 { A3 } -- | P4 { A4 } -- | P5 { A5 } -- | P6 { A6 } -- | P7 { A7 } -- | P8 { A8 } -- | P9 { A9 } -- -- >>> prRules "" False [(Cat "Internal", [])] -- nt has only internal use -- -- -- The functor case: -- >>> prRules "" True [(Cat "Expr", [("Integer", "EInt () $1"), ("Expr '+' Expr", "EPlus () $1 $3")])] -- Expr :: { (BNFC'Position, Expr) } -- Expr : Integer { EInt () $1 } | Expr '+' Expr { EPlus () $1 $3 } -- -- A list with coercion: in the type signature we need to get rid of the -- coercion. -- -- >>> prRules "" True [(ListCat (CoercCat "Exp" 2), [("Exp2", "(:[]) $1"), ("Exp2 ',' ListExp2","(:) $1 $3")])] -- ListExp2 :: { (BNFC'Position, [Exp]) } -- ListExp2 : Exp2 { (:[]) $1 } | Exp2 ',' ListExp2 { (:) $1 $3 } -- prRules :: ModuleName -> Bool -> Rules -> Doc prRules absM functor = vsep . map prOne where prOne (_ , [] ) = empty -- nt has only internal use prOne (nt, (p,a):ls) = vcat [ hsep [ nt', "::", "{", if functor then functorType' nt else type' nt, "}" ] , hang nt' 2 $ sep (pr ":" (p, a) : map (pr "|") ls) ] where nt' = text (identCat nt) pr pre (p,a) = hsep [pre, text p, "{", text a , "}"] type' = catToType qualify empty functorType' nt = hcat ["(", qualify posType, ", ", type' nt, ")"] qualify | null absM = id | otherwise = ((text absM <> ".") <>) -- Finally, some haskell code. footer :: ModuleName -> TokenText -> Bool -> [Cat] -> CF -> String footer absName tokenText functor eps _cf = unlines $ concat [ [ "{" , "" , "type Err = Either String" , "" , "happyError :: [" ++ tokenName ++ "] -> Err a" , "happyError ts = Left $" , " \"syntax error at \" ++ tokenPos ts ++ " , " case ts of" , " [] -> []" , " [Err _] -> \" due to lexer error\"" , unwords [ " t:_ -> \" before `\" ++" , "(prToken t)" -- , tokenTextUnpack tokenText "(prToken t)" , "++ \"'\"" ] , "" , "myLexer :: " ++ tokenTextType tokenText ++ " -> [" ++ tokenName ++ "]" , "myLexer = tokens" , "" ] , when functor [ "-- Entrypoints" , "" , render . vsep $ map mkParserFun eps ] , [ "}" ] ] where mkParserFun cat = vcat [ parserName cat <+> "::" <+> brackets (text tokenName) <+> "-> Err" <+> catToType qualify empty cat , parserName cat <+> "=" <+> "fmap snd" <+> "." <+> parserName cat <> "_internal" ] qualify | null absName = id | otherwise = ((text absName <> ".") <>) -- | GF literals. specialToks :: CF -> Bool -> [[String]] -- ^ A table with three columns (last is "}"). specialToks cf functor = (`map` literals cf) $ \t -> case t of "Ident" -> [ "L_Ident" , "{ PT _ (TV " ++ posn t ++ ")", "}" ] "String" -> [ "L_quoted", "{ PT _ (TL " ++ posn t ++ ")", "}" ] "Integer" -> [ "L_integ ", "{ PT _ (TI " ++ posn t ++ ")", "}" ] "Double" -> [ "L_doubl ", "{ PT _ (TD " ++ posn t ++ ")", "}" ] "Char" -> [ "L_charac", "{ PT _ (TC " ++ posn t ++ ")", "}" ] own -> [ "L_" ++ own,"{ PT _ (T_" ++ own ++ " " ++ posn own ++ ")", "}" ] where posn tokenCat = if isPositionCat cf tokenCat || functor then "_" else "$$" specialRules :: ModuleName -> Bool -> TokenText -> CF -> String specialRules absName functor tokenText cf = unlines . intersperse "" . 
(`map` literals cf) $ \t -> case t of -- "Ident" -> "Ident :: { Ident }" -- ++++ "Ident : L_ident { Ident $1 }" "String" -> "String :: { " ++ mkTypePart t ++ " }" ++++ "String : L_quoted { " ++ mkBodyPart t ++ " }" "Integer" -> "Integer :: { " ++ mkTypePart t ++ " }" ++++ "Integer : L_integ { " ++ mkBodyPart t ++ " }" "Double" -> "Double :: { " ++ mkTypePart t ++ " }" ++++ "Double : L_doubl { " ++ mkBodyPart t ++ " }" "Char" -> "Char :: { " ++ mkTypePart t ++ " }" ++++ "Char : L_charac { " ++ mkBodyPart t ++ " }" own -> own ++ " :: { " ++ mkTypePart (qualify own) ++ " }" ++++ own ++ " : L_" ++ own ++ " { " ++ mkBodyPart t ++ " }" where mkTypePart tokenCat = if functor then concat [ "(", qualify posType, ", ", tokenCat, ")" ] else tokenCat mkBodyPart tokenCat | functor = "(" ++ unwords ["uncurry", qualify posConstr, "(tokenLineCol $1)"] ++ ", " ++ mkValPart tokenCat ++ ")" | otherwise = mkValPart tokenCat mkValPart tokenCat = case tokenCat of "String" -> if functor then stringUnpack "((\\(PT _ (TL s)) -> s) $1)" else stringUnpack "$1" -- String never has pos "Integer" -> if functor then "(read " ++ stringUnpack "(tokenText $1)" ++ ") :: Integer" else "(read " ++ stringUnpack "$1" ++ ") :: Integer" -- Integer never has pos "Double" -> if functor then "(read " ++ stringUnpack "(tokenText $1)" ++ ") :: Double" else "(read " ++ stringUnpack "$1" ++ ") :: Double" -- Double never has pos "Char" -> if functor then "(read " ++ stringUnpack "(tokenText $1)" ++ ") :: Char" else "(read " ++ stringUnpack "$1" ++ ") :: Char" -- Char never has pos own -> case functor of False -> case isPositionCat cf tokenCat of False -> qualify own ++ " $1" True -> qualify own ++ " (mkPosToken $1)" True -> case isPositionCat cf tokenCat of False -> qualify own ++ " (tokenText $1)" True -> qualify own ++ " (mkPosToken $1)" stringUnpack = tokenTextUnpack tokenText qualify | null absName = id | otherwise = ((absName ++ ".") ++) BNFC-2.9.5/src/BNFC/Backend/Haskell/CFtoLayout.hs0000644000000000000000000003630207346545000017232 0ustar0000000000000000{- BNF Converter: Layout handling Generator Copyright (C) 2004 Author: Aarne Ranta Copyright (C) 2005 Bjorn Bringert -} {-# LANGUAGE OverloadedStrings #-} module BNFC.Backend.Haskell.CFtoLayout where import Data.Maybe ( fromMaybe, mapMaybe ) import BNFC.CF import BNFC.PrettyPrint import BNFC.Utils ( caseMaybe, for, whenJust ) data TokSymbol = TokSymbol String Int deriving Show data LayoutDelimiters = LayoutDelimiters TokSymbol (Maybe TokSymbol) (Maybe TokSymbol) deriving Show cf2Layout :: String -> String -> CF -> String cf2Layout layName lexName cf = unlines $ concat [ [ "{-# OPTIONS_GHC -fno-warn-incomplete-patterns #-}" , "" , "{-# LANGUAGE LambdaCase #-}" , "{-# LANGUAGE PatternGuards #-}" , "{-# LANGUAGE OverloadedStrings #-}" , "" , "module " ++ layName ++ " where" , "" , "import Prelude" , "import Data.Maybe ( fromMaybe, listToMaybe, mapMaybe )" , "import qualified Data.List as List" , "" , "import " ++ lexName , " ( Posn(..), Tok(..), Token(..), TokSymbol(..)" , " , prToken, tokenLineCol, tokenPos, tokenPosn" , " )" , "" , "-- local parameters" , "" , "data LayoutDelimiters" , " = LayoutDelimiters" , " { delimSep :: TokSymbol" , " , delimOpen :: Maybe TokSymbol -- ^ Nothing for toplevel layout." , " , delimClose :: Maybe TokSymbol -- ^ Nothing for toplevel layout." 
, " }" , "" , "layoutWords :: [(TokSymbol, LayoutDelimiters)]" , render $ prettyList 2 "layoutWords =" "[" "]" "," $ for lay $ \ (kw, delims) -> prettyList 0 empty "(" ")" "," $ map text [ show kw, show delims ] , "" , "layoutStopWords :: [TokSymbol]" , render $ prettyList 2 "layoutStopWords =" "[" "]" "," $ map (text . show) stop , "" , "-- layout separators" , "" , "layoutOpen, layoutClose, layoutSep :: [TokSymbol]" , "layoutOpen = List.nub $ mapMaybe (delimOpen . snd) layoutWords" , "layoutClose = List.nub $ mapMaybe (delimClose . snd) layoutWords" , unwords $ concat [ [ "layoutSep = List.nub $" ] , whenJust top $ \ sep -> [ show sep, ":" ] , [ "map (delimSep . snd) layoutWords" ] ] , "" , "parenOpen, parenClose :: [TokSymbol]" , render $ prettyList 2 "parenOpen =" "[" "]" "," $ map (text . show) parenOpen , render $ prettyList 2 "parenClose =" "[" "]" "," $ map (text . show) parenClose , "" , "-- | Report an error during layout resolution." , "layoutError" , " :: [Token] -- ^ Remaining tokens." , " -> String -- ^ Error message." , " -> a" , "layoutError ts msg" , " | null ts = error $ concat [ \"Layout error: \", msg, \".\" ]" , " | otherwise = error $ unlines" , " [ concat [ \"Layout error at \", tokenPos ts, \": \", msg, \".\" ]" , " , unwords $ concat" , " [ [ \"Remaining tokens:\" ]" , " , map prToken $ take 10 ts" , " , [ \"...\" | not $ null $ drop 10 ts ]" , " ]" , " ]" , "" , "-- | Replace layout syntax with explicit layout tokens." , "resolveLayout" , " :: Bool -- ^ Whether to use top-level layout." , " -> [Token] -- ^ Token stream before layout resolution." , " -> [Token] -- ^ Token stream after layout resolution." ] , caseMaybe topDelim -- No top-level layout [ "resolveLayout _topLayout = res Nothing [Explicit]" , " where" ] -- Can have top-level layout (\ delim -> [ "resolveLayout topLayout =" , " res Nothing [if topLayout then Implicit topDelim Definitive 1 else Explicit]" , " where" , " topDelim :: LayoutDelimiters" , " topDelim = " ++ show delim ]) , [ "" , " res :: Maybe Token -- ^ The previous token, if any." , " -> [Block] -- ^ A stack of layout blocks." , " -> [Token] -> [Token]" , "" , " -- The stack should never be empty." , " res _ [] ts = layoutError ts \"layout stack empty\"" , "" , " -- Handling explicit blocks:" , " res _ st (t0 : ts)" , " -- We found an open brace in the input," , " -- put an explicit layout block on the stack." , " -- This is done even if there was no layout word," , " -- to keep opening and closing braces." , " | isLayoutOpen t0 || isParenOpen t0" , " = t0 : res (Just t0) (Explicit : st) ts" , "" , " -- If we encounter a closing brace, exit the first explicit layout block." , " | isLayoutClose t0 || isParenClose t0" , " , let (imps, rest) = span isImplicit st" , " , let st' = drop 1 rest" , " = if null st'" , " then layoutError ts $ unwords" , " [ \"found\", prToken t0, \"at\" , tokenPos [t0]" , " , \"without an explicit layout block\"" , " ]" , " else map (closingToken ts (tokenPosn t0)) imps ++ t0 : res (Just t0) st' ts" , "" , " -- Ending or confirming implicit layout blocks:" , " res pt (b@(Implicit delim status col) : bs) (t0 : ts)" , " " , " -- Do not end top-level layout block by layout stop word." , " | isStop t0, col <= 1" , " = t0 : res (Just t0) (b : bs) ts" , "" , " -- End of implicit block by a layout stop word." , " | isStop t0" , " -- Exit the current block and all implicit blocks" , " -- more indented than the current token." , " , let (ebs, st') = span ((column t0 <) . 
indentation) bs" , " -- Insert block-closers after the previous token." , " = map (closingToken ts (afterPrev pt)) (b : ebs) ++ t0 : res (Just t0) st' ts" , "" , " -- End of an implicit layout block by dedentation." , " | newLine pt t0" , " , column t0 < col" , " -- Insert a block closer after the previous token." , " -- Repeat, with the current block removed from the stack." , " , let c = closingToken ts (afterPrev pt) b" , " = c : res (Just c) bs (t0 : ts)" , "" , " -- If we are on a newline, confirm the last tentative blocks." , " | newLine pt t0, Tentative{} <- status" , " = res pt (Implicit delim Definitive col : confirm col bs) (t0 : ts)" , "" , " -- Starting and processing implicit layout blocks:" , " res pt st (t0 : ts)" , " -- Start a new layout block if the first token is a layout word." , " | Just delim@(LayoutDelimiters _ mopen _) <- isLayout t0" , " = maybeInsertSeparator pt t0 st $" , " case ts of" , " -- Explicit layout, just move on. The next step" , " -- will push an explicit layout block." , " t1 : _ | isLayoutOpen t1 ->" , " t0 : res (Just t0) st ts" , " -- Otherwise, insert an open brace after the layout word" , " _ ->" , " t0 : b : res (Just b) (addImplicit delim (tokenPosn t0) pos st) ts" , " where" , " b = sToken (nextPos t0) $ fromMaybe undefined mopen" , " -- At the end of the file, the start column does not matter." , " -- So if there is no token t1 after t0, just use the position of t0." , " pos = tokenPosn $ fromMaybe t0 $ listToMaybe ts" , "" , " -- Insert separator if necessary." , " | otherwise" , " = maybeInsertSeparator pt t0 st $" , " t0 : res (Just t0) st ts" , "" , " -- At EOF: skip explicit blocks." , " res (Just _) [Explicit] [] = []" , " res (Just t) (Explicit : bs) [] = res (Just t) bs []" , "" , " -- If we are using top-level layout, insert a semicolon after" , " -- the last token, if there isn't one already" , " res (Just t) [Implicit (LayoutDelimiters sep _ _) _ _] []" , " | isLayoutSep t = []" , " | otherwise = [sToken (nextPos t) sep]" , "" , " -- At EOF in an implicit, non-top-level block: close the block" , " res (Just t) (Implicit (LayoutDelimiters _ _ (Just close)) _ _ : bs) []" , " = b : res (Just b) bs []" , " where b = sToken (nextPos t) close" , "" , " -- This should only happen if the input is empty." , " res Nothing _st []" , " = []" , "" , " -- | Insert a 'layoutSep' if we are on a new line on the current" , " -- implicit layout column." , " maybeInsertSeparator" , " :: Maybe Token -- ^ The previous token." , " -> Token -- ^ The current token." , " -> [Block] -- ^ The layout stack." , " -> [Token] -- ^ The result token stream." , " -> [Token] -- ^ Maybe prepended with a 'layoutSep'." , " maybeInsertSeparator pt t0 = \\case" , " Implicit (LayoutDelimiters sep _ _) _ n : _" , " | newLine pt t0" , " , column t0 == n" , " , maybe False (not . isTokenIn (layoutSep ++ layoutOpen)) pt" , " -- Insert a semicolon after the previous token" , " -- unless we are the beginning of the file," , " -- or the previous token is a semicolon or open brace." , " -> (sToken (afterPrev pt) sep :)" , " _ -> id" , "" , " closingToken :: [Token] -> Position -> Block -> Token" , " closingToken ts pos = sToken pos . \\case" , " Implicit (LayoutDelimiters _ _ (Just sy)) _ _ -> sy" , " _ -> layoutError ts \"trying to close a top level block\"" , "" , "type Position = Posn" , "type Line = Int" , "type Column = Int" , "" , "-- | Entry of the layout stack." 
, "data Block" , " = Implicit LayoutDelimiters Status Column" , " -- ^ An implicit layout block with its start column." , " | Explicit" , "" , "-- | Get current indentation. 0 if we are in an explicit block." , "indentation :: Block -> Column" , "indentation = \\case" , " Implicit _ _ n -> n" , " Explicit -> 0" , "" , "-- | Check if s block is implicit." , "isImplicit :: Block -> Bool" , "isImplicit = \\case" , " Implicit{} -> True" , " Explicit{} -> False" , "" , "data Status" , " = Tentative -- ^ A layout column that has not been confirmed by a line break" , " | Definitive -- ^ A layout column that has been confirmed by a line break." , "" , "-- | Add a new implicit layout block." , "addImplicit" , " :: LayoutDelimiters -- ^ Delimiters of the new block." , " -> Position -- ^ Position of the layout keyword." , " -> Position -- ^ Position of the token following the layout keword." , " -> [Block]" , " -> [Block]" , "addImplicit delim (Pn _ l0 _) (Pn _ l1 c1) st" , " -- Case: layout keyword was at the end of the line:" , " -- New implicit block is definitive." , " | l1 > l0 = Implicit delim Definitive (col st') : st'" , " -- Case: staying on the same line:" , " -- New implicit block is tentative." , " | otherwise = Implicit delim Tentative (col st) : st" , " where" , " st' = confirm c1 st" , " col bs = max c1 $ 1 + definiteIndentation bs" , " -- The column of the next token determines the starting column" , " -- of the implicit layout block." , " -- However, the next block needs to be strictly more indented" , " -- than the previous block." , "" , " -- | Get the current confirmed indentation level." , " definiteIndentation :: [Block] -> Int" , " definiteIndentation bs =" , " case dropWhile isTentative bs of" , " Implicit _ Definitive n : _ -> n" , " _ -> 0 -- 0 enables a first unindented block, see 194_layout/good05.in" , "" , " isTentative :: Block -> Bool" , " isTentative = \\case" , " Implicit _ Tentative _ -> True" , " _ -> False" , "" , "-- | Confirm tentative blocks that are not more indented than @col@." , "confirm :: Column -> [Block] -> [Block]" , "confirm c0 = loop" , " where" , " loop = \\case" , " Implicit delim Tentative c : bs" , " | c <= c0 -> Implicit delim Definitive c : loop bs" , " bs -> bs" , "" , "-- | Get the position immediately to the right of the given token." , "-- If no token is given, gets the first position in the file." , "afterPrev :: Maybe Token -> Position" , "afterPrev = maybe (Pn 0 1 1) nextPos" , "" , "-- | Get the position immediately to the right of the given token." , "nextPos :: Token -> Position" , "nextPos t = Pn (g + s) l (c + s + 1)" , " where" , " Pn g l c = tokenPosn t" , " s = tokenLength t" , "" , "-- | Get the number of characters in the token." , "tokenLength :: Token -> Int" , "tokenLength = length . prToken" , "" , "-- | Create a position symbol token." , "sToken :: Position -> TokSymbol -> Token" , "sToken p t = PT p $ TK t" , "" , "-- | Get the line number of a token." , "line :: Token -> Line" , "line = fst . tokenLineCol" , "" , "-- | Get the column number of a token." , "column :: Token -> Column" , "column = snd . tokenLineCol" , "" , "-- | Is the following token on a new line?" , "newLine :: Maybe Token -> Token -> Bool" , "newLine pt t0 = maybe True ((line t0 >) . line) pt" , "" , "-- | Check if a word is a layout start token." , "isLayout :: Token -> Maybe LayoutDelimiters" , "isLayout = \\case" , " PT _ (TK t) -> lookup t layoutWords" , " _ -> Nothing" , "" , "-- | Check if a token is one of the given symbols." 
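-- Worked example, added for clarity; not part of the generated code.  For a
-- hypothetical grammar with the pragma
--
--   layout "where" ;
--
-- and the usual "{" ";" "}" delimiters, 'resolveLayout' rewrites the token
-- stream of
--
--   f = e where
--     a = 1
--     b = 2
--
-- into the stream the parser would see for
--
--   f = e where { a = 1 ; b = 2 }
--
-- i.e. an opening brace is inserted after the layout keyword, a separator at
-- each new line that starts exactly at the block's column, and a closing brace
-- when the indentation drops below that column, at a layout stop word, or at
-- end of file.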
, "isTokenIn :: [TokSymbol] -> Token -> Bool" , "isTokenIn ts = \\case" , " PT _ (TK t) -> t `elem` ts" , " _ -> False" , "" , "-- | Check if a token is a layout stop token." , "isStop :: Token -> Bool" , "isStop = isTokenIn layoutStopWords" , "" , "-- | Check if a token is the layout open token." , "isLayoutOpen :: Token -> Bool" , "isLayoutOpen = isTokenIn layoutOpen" , "" , "-- | Check if a token is the layout separator token." , "isLayoutSep :: Token -> Bool" , "isLayoutSep = isTokenIn layoutSep" , "" , "-- | Check if a token is the layout close token." , "isLayoutClose :: Token -> Bool" , "isLayoutClose = isTokenIn layoutClose" , "" , "-- | Check if a token is an opening parenthesis." , "isParenOpen :: Token -> Bool" , "isParenOpen = isTokenIn parenOpen" , "" , "-- | Check if a token is a closing parenthesis." , "isParenClose :: Token -> Bool" , "isParenClose = isTokenIn parenClose" ] ] where (top0, lay0, stop0) = layoutPragmas cf top = mkTokSymbol =<< top0 topDelim = fmap (\ sep -> LayoutDelimiters sep Nothing Nothing) top lay = for lay0 $ \ (kw, Delimiters sep open close) -> ( fromMaybe undefined $ mkTokSymbol kw , LayoutDelimiters (fromMaybe undefined $ mkTokSymbol sep) (mkTokSymbol open) (mkTokSymbol close) ) stop = mapMaybe mkTokSymbol stop0 mkTokSymbol :: String -> Maybe TokSymbol mkTokSymbol x = TokSymbol x <$> lookup x tokens tokens = cfTokens cf -- Extra parentheses to keep track of (#370). parenOpen = mapMaybe mkTokSymbol [ "(", "[" ] parenClose = mapMaybe mkTokSymbol [ ")", "]" ] BNFC-2.9.5/src/BNFC/Backend/Haskell/CFtoPrinter.hs0000644000000000000000000004277207346545000017410 0ustar0000000000000000{- BNF Converter: Pretty-printer generator Copyright (C) 2004 Author: Aarne Ranta -} {-# LANGUAGE LambdaCase #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE ScopedTypeVariables #-} module BNFC.Backend.Haskell.CFtoPrinter (cf2Printer, compareRules) where import Prelude hiding ((<>)) import BNFC.Backend.Haskell.Utils import BNFC.CF import BNFC.Options (TokenText(..)) import BNFC.Utils import Data.Char (toLower) import Data.Either (lefts) import Data.Function (on) import qualified Data.List as List -- import Debug.Trace (trace) import Text.PrettyPrint -- AR 15/2/2002 type AbsMod = String -- | Derive pretty-printer from a BNF grammar. cf2Printer :: TokenText -- ^ Are identifiers @ByteString@s or @Text@ rather than @String@s? (Option @--bytestrings@ and @--text@) -> Bool -- ^ Option @--functor@? -> Bool -- ^ @--haskell-gadt@? -> String -- ^ Name of created Haskell module. -> AbsMod -- ^ Name of Haskell module for abstract syntax. -> CF -- ^ Grammar. -> Doc cf2Printer tokenText functor useGadt name absMod cf = vcat $ concat $ -- Each of the following list entries is itself a list of Docs [ prologue tokenText useGadt name [ absMod | importAbsMod ] cf , integerRule absMod cf , doubleRule absMod cf , when (hasIdent cf) $ identRule absMod tokenText cf , concat [ ownPrintRule absMod tokenText cf own | (own,_) <- tokenPragmas cf ] , rules absMod functor cf ] where importAbsMod = not (null $ cf2data cf) || not (null $ specialCats cf) -- | Lowercase Haskell identifiers imported from ''Prelude''. 
lowerCaseImports :: [String] lowerCaseImports = [ "all", "elem", "foldr", "id", "map", "null", "replicate", "shows", "span" ] prologue :: TokenText -> Bool -> String -> [AbsMod] -> CF -> [Doc] prologue tokenText useGadt name absMod cf = map text $ concat [ [ "{-# LANGUAGE CPP #-}" , "{-# LANGUAGE FlexibleInstances #-}" , "{-# LANGUAGE LambdaCase #-}" ] , [ "{-# LANGUAGE GADTs #-}" | useGadt ] , [ "#if __GLASGOW_HASKELL__ <= 708" , "{-# LANGUAGE OverlappingInstances #-}" , "#endif" ] , [ "" -- -- WAS: Needed for precedence category lists, e.g. @[Exp2]@: -- , "{-# OPTIONS_GHC -fno-warn-incomplete-patterns #-}" -- , "" , "-- | Pretty-printer for " ++ takeWhile ('.' /=) name ++ "." , "" , "module " ++ name +++ "where" , "" , "import Prelude" , " ( ($), (.)" , " , Bool(..), (==), (<)" , " , Int, Integer, Double, (+), (-), (*)" , " , String, (++)" , " , ShowS, showChar, showString" , " , " ++ List.intercalate ", " lowerCaseImports , " )" , "import Data.Char ( Char, isSpace )" ] , fmap ("import qualified " ++) absMod -- At most 1. (Unnecessary if Abs module is empty.) , when (hasTextualTokens cf) $ tokenTextImport tokenText , [ "" , "-- | The top-level printing method." , "" , "printTree :: Print a => a -> String" , "printTree = render . prt 0" , "" , "type Doc = [ShowS] -> [ShowS]" , "" , "doc :: ShowS -> Doc" , "doc = (:)" , "" , "render :: Doc -> String" , "render d = rend 0 False (map ($ \"\") $ d []) \"\"" , " where" , " rend" , " :: Int -- ^ Indentation level." , " -> Bool -- ^ Pending indentation to be output before next character?" , " -> [String]" , " -> ShowS" , " rend i p = \\case" , " \"[\" :ts -> char '[' . rend i False ts" , " \"(\" :ts -> char '(' . rend i False ts" , " \"{\" :ts -> onNewLine i p . showChar '{' . new (i+1) ts" , " \"}\" : \";\":ts -> onNewLine (i-1) p . showString \"};\" . new (i-1) ts" , " \"}\" :ts -> onNewLine (i-1) p . showChar '}' . new (i-1) ts" , " [\";\"] -> char ';'" , " \";\" :ts -> char ';' . new i ts" , " t : ts@(s:_) | closingOrPunctuation s" , " -> pending . showString t . rend i False ts" , " t :ts -> pending . space t . rend i False ts" , " [] -> id" , " where" , " -- Output character after pending indentation." , " char :: Char -> ShowS" , " char c = pending . showChar c" , "" , " -- Output pending indentation." , " pending :: ShowS" , " pending = if p then indent i else id" , "" , " -- Indentation (spaces) for given indentation level." , " indent :: Int -> ShowS" , " indent i = replicateS (2*i) (showChar ' ')" , "" , " -- Continue rendering in new line with new indentation." , " new :: Int -> [String] -> ShowS" , " new j ts = showChar '\\n' . rend j True ts" , "" , " -- Make sure we are on a fresh line." , " onNewLine :: Int -> Bool -> ShowS" , " onNewLine i p = (if p then id else showChar '\\n') . indent i" , "" , " -- Separate given string from following text by a space (if needed)." , " space :: String -> ShowS" , " space t s =" , " case (all isSpace t, null spc, null rest) of" , " (True , _ , True ) -> [] -- remove trailing space" , " (False, _ , True ) -> t -- remove trailing space" , " (False, True, False) -> t ++ ' ' : s -- add space if none" , " _ -> t ++ s" , " where" , " (spc, rest) = span isSpace s" , "" , " closingOrPunctuation :: String -> Bool" , " closingOrPunctuation [c] = c `elem` closerOrPunct" , " closingOrPunctuation _ = False" , "" , " closerOrPunct :: String" , " closerOrPunct = \")],;\"" , "" , "parenth :: Doc -> Doc" , "parenth ss = doc (showChar '(') . ss . 
doc (showChar ')')" , "" , "concatS :: [ShowS] -> ShowS" , "concatS = foldr (.) id" , "" , "concatD :: [Doc] -> Doc" , "concatD = foldr (.) id" , "" , "replicateS :: Int -> ShowS -> ShowS" , "replicateS n f = concatS (replicate n f)" , "" , "-- | The printer class does the job." , "" , "class Print a where" , " prt :: Int -> a -> Doc" , "" , "instance {-# OVERLAPPABLE #-} Print a => Print [a] where" , " prt i = concatD . map (prt i)" , "" , "instance Print Char where" , " prt _ c = doc (showChar '\\'' . mkEsc '\\'' c . showChar '\\'')" , "" ] , if haveListChar then [ "-- | No @instance 'Print' String@ because it would clash with the instance" , "-- for @[Char]@." ] else [ "instance Print String where" , " prt _ = printString" , "" ] , [ "printString :: String -> Doc" , "printString s = doc (showChar '\"' . concatS (map (mkEsc '\"') s) . showChar '\"')" , "" , "mkEsc :: Char -> Char -> ShowS" , "mkEsc q = \\case" , " s | s == q -> showChar '\\\\' . showChar s" , " '\\\\' -> showString \"\\\\\\\\\"" , " '\\n' -> showString \"\\\\n\"" , " '\\t' -> showString \"\\\\t\"" , " s -> showChar s" , "" , "prPrec :: Int -> Int -> Doc -> Doc" , "prPrec i j = if j < i then parenth else id" , "" ] ] where haveListChar = not $ null $ rulesForCat cf $ ListCat $ TokenCat "Char" -- | Printing instance for @Integer@, and possibly @[Integer]@. integerRule :: AbsMod -> CF -> [Doc] integerRule absMod cf = showsPrintRule absMod cf $ TokenCat catInteger -- | Printing instance for @Double@, and possibly @[Double]@. doubleRule :: AbsMod -> CF -> [Doc] doubleRule absMod cf = showsPrintRule absMod cf $ TokenCat catDouble showsPrintRule :: AbsMod -> CF -> Cat -> [Doc] showsPrintRule absMod _cf t = [ hsep [ "instance Print" , text (qualifiedCat absMod t) , "where" ] , " prt _ x = doc (shows x)" , "" ] -- | Print category (data type name) qualified if user-defined. -- qualifiedCat :: AbsMod -> Cat -> String qualifiedCat absMod t = case t of TokenCat s | s `elem` baseTokenCatNames -> unqualified | otherwise -> qualified Cat{} -> qualified ListCat c -> concat [ "[", qualifiedCat absMod c, "]" ] CoercCat{} -> impossible where unqualified = catToStr t qualified = qualify absMod unqualified impossible = error $ "impossible in Backend.Haskell.CFtoPrinter.qualifiedCat: " ++ catToStr t qualify :: AbsMod -> String -> String qualify absMod s = concat [ absMod, "." , s ] -- | Printing instance for @Ident@, and possibly @[Ident]@. identRule :: AbsMod -> TokenText -> CF -> [Doc] identRule absMod tokenText cf = ownPrintRule absMod tokenText cf catIdent -- | Printing identifiers and terminals. ownPrintRule :: AbsMod -> TokenText -> CF -> TokenCat -> [Doc] ownPrintRule absMod tokenText cf own = [ "instance Print" <+> q <+> "where" , " prt _ (" <> q <+> posn <> ") = doc $ showString" <+> text (tokenTextUnpack tokenText "i") ] where q = text $ qualifiedCat absMod $ TokenCat own posn = if isPositionCat cf own then "(_,i)" else "i" -- | Printing rules for the AST nodes. rules :: AbsMod -> Bool -> CF -> [Doc] rules absMod functor cf = do (cat, xs :: [ (Fun, [Cat]) ]) <- cf2dataLists cf concat $ [ case_fun absMod functor cf cat $ map (toArgs cat) xs , [ "" ] ] where toArgs :: Cat -> (Fun, [Cat]) -> Rule toArgs cat (cons, _) = case filter (\ (Rule f c _rhs _internal) -> cons == funName f && cat == normCat (wpThing c)) (cfgRules cf) of (r : _) -> r -- 2018-01-14: Currently, there can be overlapping rules like -- Foo. Bar ::= "foo" ; -- Foo. Bar ::= "bar" ; -- Of course, this will generate an arbitary printer for @Foo@. 
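                -- (Since the head of the filtered list is taken, the generated
                -- printer follows whichever of the overlapping rules comes
                -- first in the grammar file.)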
[] -> error $ "CFToPrinter.rules: no rhs found for: " ++ cons ++ ". " ++ catToStr cat ++ " ::= ?" -- | -- >>> vcat $ case_fun "Abs" False undefined (Cat "A") [ (npRule "AA" (Cat "AB") [Right "xxx"]) Parsable ] -- instance Print Abs.A where -- prt i = \case -- Abs.AA -> prPrec i 0 (concatD [doc (showString "xxx")]) case_fun :: AbsMod -> Bool -> CF -> Cat -> [Rule] -> [Doc] case_fun absMod functor cf cat rules = -- trace ("case_fun: cat = " ++ catToStr cat) $ -- trace ("case_fun: rules = " ++ show rules ) $ [ "instance Print" <+> type_ <+> "where" , nest 2 $ vcat $ -- Special printing of lists (precedence changes concrete syntax!) if isList cat then listCases $ List.sortBy compareRules $ rulesForNormalizedCat cf cat -- Ordinary category else [ "prt i = \\case" , nest 2 $ vcat $ map (mkPrintCase absMod functor) rules ] ] where type_ | functor = case cat of ListCat{} -> type' cat _ -> parens $ type' cat | otherwise = text (qualifiedCat absMod cat) type' = \case ListCat c -> "[" <> type' c <> "]" c@TokenCat{} -> text (qualifiedCat absMod c) c -> text (qualifiedCat absMod c) <> "' a" listCases [] = [] listCases rules = concat [ [ "prt _ [] = concatD []" | not $ any isNilFun rules ] -- Andreas, 2021-09-22, issue #386 -- If the list is @nonempty@ according to the grammar, still add a nil case. -- In the AST it is simply a list, and the AST could be created -- by other means than by parsing. , map (mkPrtListCase minPrec) rules ] where -- Andreas, 2021-09-22, issue #384: -- The minimum precedence of a rule lhs category in the rules set. -- This is considered the default precedence; used to make the printing function total. minPrec = minimum $ map precRule rules -- | When writing the Print instance for a category (in case_fun), we have -- a different case for each constructor for this category. -- -- >>> mkPrintCase "Abs" False (npRule "AA" (Cat "A") [Right "xxx"] Parsable) -- Abs.AA -> prPrec i 0 (concatD [doc (showString "xxx")]) -- -- Coercion levels are passed to @prPrec@. -- -- >>> mkPrintCase "Abs" False (npRule "EInt" (CoercCat "Expr" 2) [Left (TokenCat "Integer")] Parsable) -- Abs.EInt n -> prPrec i 2 (concatD [prt 0 n]) -- -- >>> mkPrintCase "Abs" False (npRule "EPlus" (CoercCat "Expr" 1) [Left (Cat "Expr"), Right "+", Left (Cat "Expr")] Parsable) -- Abs.EPlus expr1 expr2 -> prPrec i 1 (concatD [prt 0 expr1, doc (showString "+"), prt 0 expr2]) -- -- If the AST is a functor, ignore first argument. -- -- >>> mkPrintCase "Abs" True (npRule "EInt" (CoercCat "Expr" 2) [Left (TokenCat "Integer")] Parsable) -- Abs.EInt _ n -> prPrec i 2 (concatD [prt 0 n]) -- -- Skip internal categories. -- -- >>> mkPrintCase "Abs" True $ npRule "EInternal" (Cat "Expr") [Left (Cat "Expr")] Internal -- Abs.EInternal _ expr -> prPrec i 0 (concatD [prt 0 expr]) -- mkPrintCase :: AbsMod -> Bool -> Rule -> Doc mkPrintCase absMod functor (Rule f cat rhs _internal) = pat <+> "->" <+> "prPrec i" <+> integer (precCat $ wpThing cat) <+> parens (mkRhs (map render variables) rhs) where pat :: Doc pat | isNilFun f = text "[]" | isOneFun f = text "[" <+> head variables <+> "]" | isConsFun f = hsep $ List.intersperse (text ":") variables | otherwise = text (qualify absMod $ funName f) <+> (if functor then "_" else empty) <+> hsep variables -- Creating variables names used in pattern matching. In addition to -- haskell's reserved words, `e` and `i` are used in the printing function -- and should be avoided. -- #337: `prt` as well, and some more entirely lowercase ones. 
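    -- Illustration (hypothetical grammar): a category named @Doc@ would
    -- otherwise produce the variable @doc@, clashing with the @doc@ helper
    -- emitted by 'prologue'; 'mkNames' then renames it (to something like
    -- @doc_@).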
avoid = concat [ [ "e", "i", "doc", "prt" ] -- don't need mixed-case ones: "concatD", "prPrec", "showString" , lowerCaseImports , hsReservedWords ] names = map var (lefts rhs) variables :: [Doc] variables = map text $ mkNames avoid LowerCase names var (ListCat c) = var c ++ "s" var (TokenCat "Ident") = "id" var (TokenCat "Integer") = "n" var (TokenCat "String") = "str" var (TokenCat "Char") = "c" var (TokenCat "Double") = "d" var xs = map toLower $ catToStr xs -- | Pattern match on the list constructor and the coercion level -- -- >>> mkPrtListCase 0 (npRule "[]" (ListCat (Cat "Foo")) [] Parsable) -- prt _ [] = concatD [] -- -- >>> mkPrtListCase 0 (npRule "(:[])" (ListCat (Cat "Foo")) [Left (Cat "FOO")] Parsable) -- prt _ [x] = concatD [prt 0 x] -- -- >>> mkPrtListCase 0 (npRule "(:)" (ListCat (Cat "Foo")) [Left (Cat "Foo"), Left (ListCat (Cat "Foo"))] Parsable) -- prt _ (x:xs) = concatD [prt 0 x, prt 0 xs] -- -- >>> mkPrtListCase 0 (npRule "[]" (ListCat (CoercCat "Foo" 2)) [] Parsable) -- prt 2 [] = concatD [] -- -- >>> mkPrtListCase 0 (npRule "(:[])" (ListCat (CoercCat "Foo" 2)) [Left (CoercCat "Foo" 2)] Parsable) -- prt 2 [x] = concatD [prt 2 x] -- -- >>> mkPrtListCase 2 (npRule "(:)" (ListCat (CoercCat "Foo" 2)) [Left (CoercCat "Foo" 2), Left (ListCat (CoercCat "Foo" 2))] Parsable) -- prt _ (x:xs) = concatD [prt 2 x, prt 2 xs] -- mkPrtListCase :: Integer -- ^ The lowest precedence of a lhs in a list rule. Default: 0. -> Rule -- ^ The list rule. -> Doc mkPrtListCase minPrec (Rule f (WithPosition _ (ListCat c)) rhs _internal) | isNilFun f = "prt" <+> precPattern <+> "[]" <+> "=" <+> body | isOneFun f = "prt" <+> precPattern <+> "[x]" <+> "=" <+> body | isConsFun f = "prt" <+> precPattern <+> "(x:xs)" <+> "=" <+> body | otherwise = empty -- (++) constructor where precPattern = if p <= minPrec then "_" else integer p p = precCat c body = mkRhs ["x", "xs"] rhs mkPrtListCase _ _ = error "mkPrtListCase undefined for non-list categories" -- | Define an ordering on lists' rules with the following properties: -- -- - rules with a higher coercion level should come first, i.e. the rules for -- [Foo3] are before rules for [Foo1] and they are both lower than rules for -- [Foo]. -- -- - [] < [_] < _:_ -- -- This is designed to correctly order the rules in the prt function for lists so that -- the pattern matching works as expected. -- -- >>> compareRules (npRule "[]" (ListCat (CoercCat "Foo" 3)) [] Parsable) (npRule "[]" (ListCat (CoercCat "Foo" 1)) [] Parsable) -- LT -- -- >>> compareRules (npRule "[]" (ListCat (CoercCat "Foo" 3)) [] Parsable) (npRule "[]" (ListCat (Cat "Foo")) [] Parsable) -- LT -- -- >>> compareRules (npRule "[]" (ListCat (Cat "Foo")) [] Parsable) (npRule "(:[])" (ListCat (Cat "Foo")) [] Parsable) -- LT -- -- >>> compareRules (npRule "(:[])" (ListCat (Cat "Foo")) [] Parsable) (npRule "(:)" (ListCat (Cat "Foo")) [] Parsable) -- LT -- compareRules :: IsFun f => Rul f -> Rul f -> Ordering compareRules r1 r2 | precRule r1 > precRule r2 = LT | precRule r1 < precRule r2 = GT | otherwise = (compareFunNames `on` (funName .
funRule)) r1 r2 compareFunNames :: String -> String -> Ordering compareFunNames = curry $ \case ("[]" , "[]" ) -> EQ ("[]" , _ ) -> LT ("(:[])" , "[]" ) -> GT ("(:[])" , "(:[])") -> EQ ("(:[])" , "(:)" ) -> LT ("(:)" , "(:)" ) -> EQ ("(:)" , _ ) -> GT (_ , _ ) -> EQ -- | -- -- >>> mkRhs ["expr1", "n", "expr2"] [Left (Cat "Expr"), Right "-", Left (TokenCat "Integer"), Left (Cat "Expr")] -- concatD [prt 0 expr1, doc (showString "-"), prt 0 n, prt 0 expr2] -- -- Coercions on the right hand side should be passed to prt: -- -- >>> mkRhs ["expr1"] [Left (CoercCat "Expr" 2)] -- concatD [prt 2 expr1] -- -- >>> mkRhs ["expr2s"] [Left (ListCat (CoercCat "Expr" 2))] -- concatD [prt 2 expr2s] -- mkRhs :: [String] -> [Either Cat String] -> Doc mkRhs args its = "concatD" <+> brackets (hsep (punctuate "," (mk args its))) where mk (arg:args) (Left c : items) = (prt c <+> text arg) : mk args items mk args (Right s : items) = ("doc (showString" <+> text (show s) <> ")") : mk args items mk _ _ = [] prt (TokenCat "String") = "printString" prt c = "prt" <+> integer (precCat c) BNFC-2.9.5/src/BNFC/Backend/Haskell/CFtoTemplate.hs0000644000000000000000000000524707346545000017534 0ustar0000000000000000{- BNF Converter: Template Generator Copyright (C) 2004 Author: Markus Forsberg -} {-# LANGUAGE OverloadedStrings #-} module BNFC.Backend.Haskell.CFtoTemplate (cf2Template) where import Prelude hiding ((<>)) import BNFC.CF import BNFC.PrettyPrint import BNFC.Utils ( ModuleName ) import BNFC.Backend.Haskell.Utils ( catvars, noWarnUnusedMatches ) cf2Template :: ModuleName -> ModuleName -> Bool -> CF -> String cf2Template skelName absName functor cf = unlines $ concat [ [ "-- Templates for pattern matching on abstract syntax" , "" , noWarnUnusedMatches , "" , "module "++ skelName ++ " where" , "" , "import Prelude (($), Either(..), String, (++), Show, show)" ] , [ "import qualified " ++ absName | importAbsMod ] , [ "" , "type Err = Either String" , "type Result = Err String" , "" , "failure :: Show a => a -> Result" , "failure x = Left $ \"Undefined case: \" ++ show x" , "" , render . vsep $ map (uncurry (case_fun absName functor)) datas ] ] where datas = specialData cf ++ cf2data cf importAbsMod = not $ null datas -- | -- >>> case_fun "M" False (Cat "Expr") [("EInt", [TokenCat "Integer"]), ("EAdd", [Cat "Expr", Cat "Expr"])] -- transExpr :: M.Expr -> Result -- transExpr x = case x of -- M.EInt integer -> failure x -- M.EAdd expr1 expr2 -> failure x -- -- >>> case_fun "" True (Cat "Expr") [("EInt", [TokenCat "Integer"]), ("EAdd", [Cat "Expr", Cat "Expr"])] -- transExpr :: Show a => Expr' a -> Result -- transExpr x = case x of -- EInt _ integer -> failure x -- EAdd _ expr1 expr2 -> failure x -- -- TokenCat are not generated as functors: -- >>> case_fun "" True (TokenCat "MyIdent") [("MyIdent", [TokenCat "String"])] -- transMyIdent :: MyIdent -> Result -- transMyIdent x = case x of -- MyIdent string -> failure x case_fun :: ModuleName -> Bool -> Cat -> [(Fun,[Cat])] -> Doc case_fun absName functor' cat xs = vcat [ fname <+> "::" <+> iffunctor "Show a =>" <+> type_ <+> "-> Result" , fname <+> "x = case x of" , nest 2 $ vcat (map mkOne xs) ] where -- If the functor option is set AND the category is not a token type, -- then the type is a functor. 
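    -- (Compare the doctests above: @Expr@ becomes @Expr' a@ in functor mode,
    -- while the token category @MyIdent@ keeps its unparameterized type.)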
iffunctor doc | functor' && not (isTokenCat cat) = doc | otherwise = empty type_ = qualify $ cat' <> iffunctor "' a" fname = "trans" <> cat' cat' = pretty cat mkOne (cons, args) = let ns = catvars [render fname] args -- names False (map (checkRes .var) args) 1 in qualify (text cons) <+> iffunctor "_" <+> hsep ns <+> "-> failure x" qualify :: Doc -> Doc qualify | null absName = id | otherwise = (text absName <> "." <>) BNFC-2.9.5/src/BNFC/Backend/Haskell/HsOpts.hs0000644000000000000000000001145607346545000016424 0ustar0000000000000000module BNFC.Backend.Haskell.HsOpts where import BNFC.Utils import BNFC.Options import System.FilePath (pathSeparator, (<.>)) import Data.List (intercalate) import Data.Maybe (catMaybes) type Options = SharedOptions absFile, absFileM, alexFile, alexFileHs, alexFileM, composOpFile, composOpFileM, happyFile, happyFileHs, happyFileM, txtFile, errFile, errFileM, templateFile, templateFileM, printerFile, printerFileM, layoutFile, layoutFileM, xmlFile, xmlFileM, tFile, tFileExe :: Options -> String absFile = mkFile withLang "Abs" "hs" absFileM = mkMod withLang "Abs" alexFile = mkFile withLang "Lex" "x" alexFileHs = mkFile withLang "Lex" "hs" alexFileM = mkMod withLang "Lex" happyFile = mkFile withLang "Par" "y" happyFileHs = mkFile withLang "Par" "hs" happyFileM = mkMod withLang "Par" txtFile = mkFile withLang "Doc" "txt" templateFile = mkFile withLang "Skel" "hs" templateFileM = mkMod withLang "Skel" printerFile = mkFile withLang "Print" "hs" printerFileM = mkMod withLang "Print" tFile = mkFile withLang "Test" "hs" tFileExe = mkFile withLang "Test" "" errFile = mkFile noLang "ErrM" "hs" errFileM = mkMod noLang "ErrM" layoutFileM = mkMod withLang "Layout" layoutFile = mkFile withLang "Layout" "hs" xmlFile = mkFile withLang "XML" "hs" xmlFileM = mkMod withLang "XML" composOpFile = mkFile noLang "ComposOp" "hs" composOpFileM = mkMod noLang "ComposOp" -- Files created by the Agda backend agdaASTFile , agdaASTFileM , agdaParserFile , agdaParserFileM , agdaLibFile , agdaLibFileM , agdaMainFile , agdaMainFileM :: Options -> String agdaASTFile = mkFile withLang "AST" "agda" agdaASTFileM = mkMod withLang "AST" agdaParserFile = mkFile withLang "Parser" "agda" agdaParserFileM = mkMod withLang "Parser" agdaLibFile = mkFile noLang "IOLib" "agda" agdaLibFileM = mkMod noLang "IOLib" agdaMainFile = mkFile noLang "Main" "agda" agdaMainFileM = mkMod noLang "Main" noLang :: Options -> String -> String noLang _ name = name withLang :: Options -> String -> String withLang opts name = name ++ mkName [] CamelCase (lang opts) pkgToDir :: String -> FilePath pkgToDir s = replace '.' pathSeparator s -- | -- >>> mkMod withLang "Abstract" defaultOptions { lang = "abc" } -- "AbstractAbc" -- >>> mkMod noLang "Abstract" defaultOptions { lang = "abc" } -- "Abstract" -- >>> mkMod withLang "Abstract" defaultOptions { lang = "abc", inPackage = Just "A.B.C" } -- "A.B.C.AbstractAbc" -- >>> mkMod withLang "Abstract" defaultOptions { lang = "abc", inDir = True } -- "Abc.Abstract" -- >>> mkMod withLang "Abstract" defaultOptions { lang = "abc", inDir = True, inPackage = Just "A.B.C" } -- "A.B.C.Abc.Abstract" mkMod :: (Options -> String -> String) -> String -> Options -> String mkMod addLang name opts = mkNamespace opts <.> mod where [] <.> s = s s1 <.> s2 = s1 ++ "." 
++ s2 mod | inDir opts = name | otherwise = addLang opts name -- | -- >>> mkFile withLang "Abstract" "hs" defaultOptions { lang = "abc" } -- "AbstractAbc.hs" -- >>> mkFile noLang "Abstract" "hs" defaultOptions { lang = "abc" } -- "Abstract.hs" -- >>> mkFile withLang "Abstract" "" defaultOptions { lang = "abc" } -- "AbstractAbc" -- >>> mkFile noLang "Abstract" "" defaultOptions { lang = "abc" } -- "Abstract" -- >>> mkFile withLang "Abstract" "hs" defaultOptions { lang = "abc", inDir = True } -- "Abc/Abstract.hs" -- >>> mkFile withLang "Abstract" "hs" defaultOptions { lang = "abc", inDir = True, inPackage = Just "A.B.C" } -- "A/B/C/Abc/Abstract.hs" mkFile :: (Options -> String -> String) -> String -> String -> Options -> FilePath mkFile addLang name ext opts = pkgToDir (mkMod addLang name opts) <.> ext -- | Determine the modules' namespace -- -- >>> mkNamespace defaultOptions -- "" -- >>> mkNamespace defaultOptions { lang = "Bla", inDir = True } -- "Bla" -- >>> mkNamespace defaultOptions { inPackage = Just "My.Cool.Package" } -- "My.Cool.Package" -- >>> mkNamespace defaultOptions { lang = "bla_bla", inDir = True } -- "BlaBla" -- >>> mkNamespace defaultOptions { lang = "bla", inDir = True, inPackage = Just "P"} -- "P.Bla" mkNamespace :: Options -> FilePath mkNamespace opts = intercalate "." $ catMaybes [inPackage opts, dir] where dir | inDir opts = Just (mkName [] CamelCase (lang opts)) | otherwise = Nothing -- | Determine the directory corresponding to the modules' namespace -- -- >>> codeDir defaultOptions -- "" -- >>> codeDir defaultOptions { lang = "Bla", inDir = True } -- "Bla" -- >>> codeDir defaultOptions { inPackage = Just "My.Cool.Package" } -- "My/Cool/Package" -- >>> codeDir defaultOptions { lang = "bla_bla", inDir = True } -- "BlaBla" -- >>> codeDir defaultOptions { lang = "bla", inDir = True, inPackage = Just "P"} -- "P/Bla" codeDir :: Options -> FilePath codeDir = pkgToDir . mkNamespace BNFC-2.9.5/src/BNFC/Backend/Haskell/MkErrM.hs0000644000000000000000000000617407346545000016342 0ustar0000000000000000{- BNF Converter: Haskell error monad Copyright (C) 2004-2007 Author: Markus Forsberg, Peter Gammie, Aarne Ranta, Björn Bringert Copyright (C) 2019 Author: Andreas Abel -} {-# LANGUAGE OverloadedStrings #-} module BNFC.Backend.Haskell.MkErrM where import BNFC.PrettyPrint mkErrM :: String -> Doc mkErrM errMod = vcat [ "{-# LANGUAGE CPP #-}" , "" , "#if __GLASGOW_HASKELL__ >= 708" , "---------------------------------------------------------------------------" , "-- Pattern synonyms exist since ghc 7.8." , "" , "-- | BNF Converter: Error Monad." , "--" , "-- Module for backwards compatibility." , "--" , "-- The generated parser now uses @'Either' String@ as error monad." , "-- This module defines a type synonym 'Err' and pattern synonyms" , "-- 'Bad' and 'Ok' for 'Left' and 'Right'." , "" , "{-# LANGUAGE PatternSynonyms #-}" , "{-# LANGUAGE FlexibleInstances #-}" , "" , "module" <+> text errMod <+> "where" , "" , "import Prelude (id, const, Either(..), String)" , "" , "import Control.Monad (MonadPlus(..))" , "import Control.Applicative (Alternative(..))" , "#if __GLASGOW_HASKELL__ >= 808" , "import Control.Monad (MonadFail(..))" , "#endif" , "" , "-- | Error monad with 'String' error messages." 
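  -- Because 'Bad' and 'Ok' are pattern synonyms for 'Left' and 'Right',
  -- user code written against the old data type, for example
  -- @case r of { Ok t -> use t; Bad msg -> complain msg }@ (illustrative;
  -- @use@ and @complain@ are placeholders), still compiles.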
, "type Err = Either String" , "" , "pattern Bad msg = Left msg" , "pattern Ok a = Right a" , "" , "#if __GLASGOW_HASKELL__ >= 808" , "instance MonadFail Err where" , " fail = Bad" , "#endif" , "" , "instance Alternative Err where" , " empty = Left \"Err.empty\"" , " (<|>) Left{} = id" , " (<|>) x@Right{} = const x" , "" , "instance MonadPlus Err where" , " mzero = empty" , " mplus = (<|>)" , "" , "#else" , "---------------------------------------------------------------------------" , "-- ghc 7.6 and before: use old definition as data type." , "" , "-- | BNF Converter: Error Monad" , "" , "-- Copyright (C) 2004 Author: Aarne Ranta" , "-- This file comes with NO WARRANTY and may be used FOR ANY PURPOSE." , "" , "module" <+> text errMod <+> "where" , "" , "-- the Error monad: like Maybe type with error msgs" , "" , "import Control.Applicative (Applicative(..), Alternative(..))" , "import Control.Monad (MonadPlus(..), liftM)" , "" , "data Err a = Ok a | Bad String" , " deriving (Read, Show, Eq, Ord)" , "" , "instance Monad Err where" , " return = Ok" , " Ok a >>= f = f a" , " Bad s >>= _ = Bad s" , "" , "instance Applicative Err where" , " pure = Ok" , " (Bad s) <*> _ = Bad s" , " (Ok f) <*> o = liftM f o" , "" , "instance Functor Err where" , " fmap = liftM" , "" , "instance MonadPlus Err where" , " mzero = Bad \"Err.mzero\"" , " mplus (Bad _) y = y" , " mplus x _ = x" , "" , "instance Alternative Err where" , " empty = mzero" , " (<|>) = mplus" , "" , "#endif" ] BNFC-2.9.5/src/BNFC/Backend/Haskell/Utils.hs0000644000000000000000000001542007346545000016277 0ustar0000000000000000{-# LANGUAGE LambdaCase #-} {-# LANGUAGE OverloadedStrings #-} module BNFC.Backend.Haskell.Utils ( comment, commentWithEmacsModeHint , posType, posConstr, noPosConstr , hasPositionClass, hasPositionMethod , noWarnUnusedMatches , parserName , hsReservedWords, avoidReservedWords, mkDefName , typeToHaskell, typeToHaskell' , catToType , catToVar, catvars , tokenTextImport, tokenTextType , tokenTextPack, tokenTextPackParens, tokenTextUnpack ) where import Data.Char import Data.String (IsString) import BNFC.PrettyPrint import qualified BNFC.PrettyPrint as P import BNFC.CF (Cat(..), catToStr, identCat, baseTokenCatNames, Base, Type(FunT), IsFun(..)) import BNFC.Options (TokenText(..)) import BNFC.Utils (mkNames, NameStyle(..)) -- | Haskell line comments. comment :: String -> String comment = ("-- " ++) -- | Haskell line comment including mode hint for emacs. commentWithEmacsModeHint :: String -> String commentWithEmacsModeHint = comment . ("-*- haskell -*- " ++) -- * GHC pragmas noWarnUnusedMatches :: IsString a => a noWarnUnusedMatches = "{-# OPTIONS_GHC -fno-warn-unused-matches #-}" -- ALT: only from GHC 8 -- "{-# OPTIONS_GHC -Wno-unused-matches #-}" -- * Names for position data type. posType, posConstr, noPosConstr :: IsString a => a posType = "BNFC'Position" posConstr = "BNFC'Position" noPosConstr = "BNFC'NoPosition" -- * The @HasPosition@ class for position-carrying abstract syntax. hasPositionClass, hasPositionMethod :: IsString a => a hasPositionClass = "HasPosition" hasPositionMethod = "hasPosition" -- * Parameterization by 'TokenText'. 
tokenTextImport :: TokenText -> [String] tokenTextImport = \case StringToken -> [] ByteStringToken -> [ "import qualified Data.ByteString.Char8 as BS" ] TextToken -> [ "import qualified Data.Text" ] tokenTextType :: TokenText -> String tokenTextType = \case StringToken -> "String" ByteStringToken -> "BS.ByteString" TextToken -> "Data.Text.Text" tokenTextPack :: TokenText -> String -> String tokenTextPack = \case StringToken -> id ByteStringToken -> ("BS.pack " ++) TextToken -> ("Data.Text.pack " ++) tokenTextPackParens :: TokenText -> String -> String tokenTextPackParens = \case StringToken -> id ByteStringToken -> parens . ("BS.pack " ++) TextToken -> parens . ("Data.Text.pack " ++) where parens :: String -> String parens s = "(" ++ s ++ ")" tokenTextUnpack :: TokenText -> String -> String tokenTextUnpack t s = case t of StringToken -> s ByteStringToken -> "(BS.unpack " ++ s ++ ")" TextToken -> "(Data.Text.unpack " ++ s ++ ")" -- * Other Utilities -- | Create a valid parser function name for a given category. -- -- >>> parserName (Cat "Abcd") -- pAbcd -- -- >>> parserName (ListCat (Cat "Xyz")) -- pListXyz -- parserName :: Cat -> Doc parserName = ("p" P.<>) . text . identCat -- | Haskell's reserved words. -- hsReservedWords :: [String] hsReservedWords = [ "as" , "case" , "class" , "data" , "default" , "deriving" , "do" , "else" , "family" , "forall" , "foreign" , "hiding" , "if" , "import" , "in" , "infix" , "infixl" , "infixr" , "instance" , "let" , "mdo" , "module" , "newtype" , "of" , "pattern" , "proc" , "qualified" , "rec" , "then" , "type" , "where" ] -- | Avoid Haskell keywords plus additional reserved words. avoidReservedWords :: [String] -> String -> String avoidReservedWords additionalReserved x | x `elem` reserved = x ++ "'" | otherwise = x where reserved = additionalReserved ++ hsReservedWords -- | Modifier to avoid clashes in definition. mkDefName :: IsFun f => f -> String mkDefName = avoidReservedWords [] . funName -- | Render a category from the grammar to a Haskell type. -- -- >>> catToType id empty (Cat "A") -- A -- >>> catToType id empty (ListCat (Cat "A")) -- [A] -- >>> catToType ("Foo." P.<>) empty (TokenCat "Ident") -- Foo.Ident -- -- Note that there is no Haskell type for coerced categories: they should be normalized: -- >>> catToType id empty (CoercCat "Expr" 2) -- Expr -- -- If a type parameter is given it is added to the type name: -- >>> catToType id (text "a") (Cat "A") -- (A a) -- -- >>> catToType id (text "a") (ListCat (Cat "A")) -- [A a] -- -- but not added to Token categories: -- >>> catToType ("Foo." P.<>) (text "a") (TokenCat "Integer") -- Integer -- -- >>> catToType id (text "a") (ListCat (TokenCat "Integer")) -- [Integer] -- -- >>> catToType id empty (ListCat (CoercCat "Exp" 2)) -- [Exp] -- -- >>> catToType ("Foo." P.<>) (text "()") (ListCat (CoercCat "Exp" 2)) -- [Foo.Exp ()] -- catToType :: (Doc -> Doc) -> Doc -> Cat -> Doc catToType qualify param cat = parensIf isApp $ loop cat where isApp = case cat of Cat _ -> not $ isEmpty param _ -> False loop = \case ListCat c -> brackets $ loop c Cat c -> qualify (text c) <+> param -- note: <+> goes away if param==empty CoercCat c _ -> qualify (text c) <+> param TokenCat c | c `elem` baseTokenCatNames -> text c | otherwise -> qualify (text c) -- | Convert a base type to Haskell syntax. baseTypeToHaskell :: Base -> String baseTypeToHaskell = show -- | Convert a function type to Haskell syntax in curried form.
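--
-- For instance, a 'Type' with argument base types @A@ and @B@ and result @C@
-- would render as @A -> B -> C@ (this assumes that 'show' on 'Base', as used
-- by 'baseTypeToHaskell', yields the bare category name).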
typeToHaskell :: Type -> String typeToHaskell = typeToHaskell' "->" typeToHaskell' :: String -> Type -> String typeToHaskell' arr (FunT ts t) = foldr f (baseTypeToHaskell t) $ map baseTypeToHaskell ts where f a b = unwords [a, arr, b] -- | Make a variable name for a category. catToVar :: [String] -> Cat -> String catToVar rs = avoidReservedWords rs . var where var (ListCat cat) = var cat ++ "s" var (Cat "Ident") = "x" var (Cat "Integer") = "n" var (Cat "String") = "str" var (Cat "Char") = "c" var (Cat "Double") = "d" var xs = map toLower $ catToStr xs -- | Gives a list of variables usable for pattern matching. -- -- Example: Given the rule @Aba. S ::= A B A ;@ with the generated data type -- @ -- data S = Aba A B A -- @ -- from the list of categories on the RHS of the rule [A,B,A], we generate the -- list [a1,b,a2] to be used in a pattern matching like -- @ -- case s of -- Aba a1 b a2 -> ... -- ... -- @ -- -- >>> catvars [] [Cat "A", Cat "B", Cat "A"] -- [a1,b,a2] -- -- It should avoid reserved words: -- >>> catvars ["foo"] [Cat "Foo", Cat "IF", Cat "Case", Cat "Type", Cat "If"] -- [foo_,if_1,case_,type_,if_2] -- -- It uses a suffix -s to mark lists: -- >>> catvars [] [Cat "A", ListCat (Cat "A"), ListCat (ListCat (Cat "A"))] -- [a,as_,ass] -- catvars :: [String] -> [Cat] -> [Doc] catvars rs = map text . mkNames (rs ++ hsReservedWords) LowerCase . map var where var (ListCat c) = var c ++ "s" var c = catToStr c BNFC-2.9.5/src/BNFC/Backend/HaskellGADT.hs0000644000000000000000000000727207346545000015645 0ustar0000000000000000{- BNF Converter: Haskell main file Copyright (C) 2004-2005 Author: Markus Forsberg, Peter Gammie, Aarne Ranta, Björn Bringert -} module BNFC.Backend.HaskellGADT (makeHaskellGadt) where -- import Utils import BNFC.Options import BNFC.Backend.Base hiding (Backend) import BNFC.Backend.Haskell.HsOpts import BNFC.Backend.Haskell.Utils (comment, commentWithEmacsModeHint) import BNFC.CF import BNFC.Backend.Haskell.CFtoHappy import BNFC.Backend.Haskell.CFtoAlex3 import BNFC.Backend.HaskellGADT.CFtoAbstractGADT import BNFC.Backend.HaskellGADT.CFtoTemplateGADT import BNFC.Backend.Haskell.CFtoPrinter import BNFC.Backend.Haskell.CFtoLayout import BNFC.Backend.XML (makeXML) import BNFC.Backend.Haskell.MkErrM import qualified BNFC.Backend.Common.Makefile as Makefile import qualified BNFC.Backend.Haskell as Haskell import Control.Monad(when) makeHaskellGadt :: SharedOptions -> CF -> MkFiles () makeHaskellGadt opts cf = do let absMod = absFileM opts composOpMod = composOpFileM opts lexMod = alexFileM opts parMod = happyFileM opts prMod = printerFileM opts layMod = layoutFileM opts errMod = errFileM opts do mkHsFile (absFile opts) $ cf2Abstract (tokenText opts) absMod cf composOpMod mkHsFile (composOpFile opts) $ composOp composOpMod mkHsFileHint (alexFile opts) $ cf2alex3 lexMod (tokenText opts) cf liftIO $ putStrLn " (Use Alex 3 to compile.)" mkHsFileHint (happyFile opts) $ cf2Happy parMod absMod lexMod (glr opts) (tokenText opts) False cf liftIO $ putStrLn " (Tested with Happy 1.15 - 1.20)" mkHsFile (templateFile opts) $ cf2Template (templateFileM opts) absMod cf mkHsFile (printerFile opts) $ cf2Printer StringToken False True prMod absMod cf when (hasLayout cf) $ mkHsFile (layoutFile opts) $ cf2Layout layMod lexMod cf mkHsFile (tFile opts) $ Haskell.testfile opts cf mkHsFile (errFile opts) $ mkErrM errMod Makefile.mkMakefile (optMake opts) $ Haskell.makefile opts cf case xml opts of 2 -> makeXML opts True cf 1 -> makeXML opts False cf _ -> return () where mkHsFile x = mkfile x comment 
mkHsFileHint x = mkfile x commentWithEmacsModeHint composOp :: String -> String composOp composOpMod = unlines [ "{-# LANGUAGE Rank2Types, PolyKinds #-}", "module " ++ composOpMod ++ " (Compos(..),composOp,composOpM,composOpM_,composOpMonoid,", " composOpMPlus,composOpFold) where", "", "import Prelude", "", "import Control.Monad", "import Data.Functor.Identity", "", "class Compos t where", " compos :: (forall a. a -> m a) -> (forall a b. m (a -> b) -> m a -> m b)", " -> (forall a. t a -> m (t a)) -> t c -> m (t c)", "", "composOp :: Compos t => (forall a. t a -> t a) -> t c -> t c", "composOp f = runIdentity . composOpM (Identity . f)", "", "composOpM :: (Compos t, Monad m) => (forall a. t a -> m (t a)) -> t c -> m (t c)", "composOpM = compos return ap", "", "composOpM_ :: (Compos t, Monad m) => (forall a. t a -> m ()) -> t c -> m ()", "composOpM_ = composOpFold (return ()) (>>)", "", "composOpMonoid :: (Compos t, Monoid m) => (forall a. t a -> m) -> t c -> m", "composOpMonoid = composOpFold mempty mappend", "", "composOpMPlus :: (Compos t, MonadPlus m) => (forall a. t a -> m b) -> t c -> m b", "composOpMPlus = composOpFold mzero mplus", "", "composOpFold :: Compos t => b -> (b -> b -> b) -> (forall a. t a -> b) -> t c -> b", "composOpFold z c f = unC . compos (\\_ -> C z) (\\(C x) (C y) -> C (c x y)) (C . f)", "", "newtype C b a = C { unC :: b }" ] BNFC-2.9.5/src/BNFC/Backend/HaskellGADT/0000755000000000000000000000000007346545000015301 5ustar0000000000000000BNFC-2.9.5/src/BNFC/Backend/HaskellGADT/CFtoAbstractGADT.hs0000644000000000000000000001575307346545000020627 0ustar0000000000000000{- BNF Converter: GADT Abstract syntax Generator Copyright (C) 2004-2005 Author: Markus Forsberg, Björn Bringert -} {-# LANGUAGE PatternGuards #-} module BNFC.Backend.HaskellGADT.CFtoAbstractGADT (cf2Abstract) where import qualified Data.List as List import BNFC.CF import BNFC.Backend.HaskellGADT.HaskellGADTCommon import BNFC.Backend.Haskell.Utils import BNFC.Backend.Haskell.CFtoAbstract (definedRules) import BNFC.Options import BNFC.Utils ((+++), when) cf2Abstract :: TokenText -> String -> CF -> String -> String cf2Abstract tokenText name cf composOpMod = unlines $ concat $ [ [ "-- For GHC version 7.10 or higher" , "" , "{-# LANGUAGE GADTs, KindSignatures, DataKinds #-}" ] , [ "{-# LANGUAGE EmptyCase #-}" | emptyTree ] , [ "{-# LANGUAGE LambdaCase #-}" , "" , "{-# OPTIONS_GHC -fno-warn-unused-binds #-}" -- unused-local-binds would be sufficient, but parses only from GHC 8.0 , "{-# OPTIONS_GHC -fno-warn-unused-imports #-}" , "{-# OPTIONS_GHC -fno-warn-unused-matches #-}" , "{-# OPTIONS_GHC -fno-warn-incomplete-patterns #-}" -- defects of coverage checker, e.g. in 8.2.2, may lead to warning -- about exceeded iterations for pattern match checker , "{-# OPTIONS_GHC -fno-warn-overlapping-patterns #-}" , "" , "module" +++ name +++ "(" ++ List.intercalate ", " exports ++ ")" +++ "where" , "" , "import Prelude (" ++ typeImports ++ ", (.), (>), (&&), (==))" , "import qualified Prelude as P" -- , "import qualified Data.Kind as K (Type)" ] , tokenTextImport tokenText , [ "" , "import " ++ composOpMod , "" ] , prDummyTypes cf , [""] , prTreeType tokenText cf , [""] , prCompos cf , [""] , prShow cf , [""] , prEq cf , [""] , prOrd cf , [""] , map ((++ "\n") . 
show) $ definedRules False cf ] where emptyTree = null (cf2cons cf) exports = concat $ [ [ if emptyTree then "Tree" else "Tree(..)" ] , getTreeCats cf , map mkDefName $ getDefinitions cf , [ "johnMajorEq" , "module " ++ composOpMod ] ] typeImports = List.intercalate ", " $ concat [ [ "Char", "Double" ] , [ "Int" | hasPositionTokens cf ] , [ "Integer", "String" ] ] getTreeCats :: CF -> [String] getTreeCats cf = List.nub $ map catToStr $ filter (not . isList) $ map consCat $ cf2cons cf getDefinitions :: CF -> [String] getDefinitions = map (funName . defName) . definitions prDummyTypes :: CF -> [String] prDummyTypes cf = prDummyData : map prDummyType cats where cats = getTreeCats cf prDummyData | null cats = "data Tag" | otherwise = "data Tag =" +++ List.intercalate " | " (map mkRealType_ cats) prDummyType cat = "type" +++ cat +++ "= Tree" +++ mkRealType cat -- | Use in occurrences of promoted constructors. -- -- Promoted constructors should be preceded by a prime, -- otherwise we get GHC warning @unticked-promoted-constructors@. mkRealType :: String -> String mkRealType cat = "'" ++ mkRealType_ cat -- | Use in @data@ definition (for the sake of GHC <= 8.6). mkRealType_ :: String -> String mkRealType_ cat = cat ++ "_" prTreeType :: TokenText -> CF -> [String] prTreeType tokenText cf = "data Tree (a :: Tag) where" : map ((" " ++) . prTreeCons) (cf2cons cf) where prTreeCons c | TokenCat tok <- cat, isPositionCat cf tok = fun +++ ":: ((Int,Int),"++ tokenTextType tokenText ++") -> Tree" +++ mkRealType tok | otherwise = fun +++ "::" +++ concat [catToStr c +++ "-> " | (c,_) <- consVars c] ++ "Tree" +++ mkRealType (catToStr cat) where (cat,fun) = (consCat c, consFun c) prCompos :: CF -> [String] prCompos cf = ["instance Compos Tree where", " compos r a f = \\case"] ++ map (" "++) (concatMap prComposCons cs ++ ["t -> r t" | not (all isRecursive cs)]) where cs = cf2cons cf prComposCons c | isRecursive c = [consFun c +++ unwords (map snd (consVars c)) +++ "->" +++ rhs c] | otherwise = [] isRecursive c = any (isTreeType cf) (map fst (consVars c)) rhs c = "r" +++ consFun c +++ unwords (map prRec (consVars c)) where prRec (cat,var) | not (isTreeType cf cat) = "`a`" +++ "r" +++ var | isList cat = "`a` P.foldr (\\ x z -> r (:) `a` f x `a` z) (r [])" +++ var | otherwise = "`a`" +++ "f" +++ var prShow :: CF -> [String] prShow cf = ["instance P.Show (Tree c) where", " showsPrec n = \\case"] ++ map ((" "++) .prShowCons) cs ++ [" where", " opar = if n > 0 then P.showChar '(' else P.id", " cpar = if n > 0 then P.showChar ')' else P.id"] where cs = cf2cons cf prShowCons c | null vars = fun +++ "->" +++ "P.showString" +++ show fun | otherwise = fun +++ unwords (map snd vars) +++ "->" +++ "opar . P.showString" +++ show fun +++ unwords [". P.showChar ' ' . P.showsPrec 1 " ++ x | (_,x) <- vars] +++ ". 
cpar" where (fun, vars) = (consFun c, consVars c) prEq :: CF -> [String] prEq cf = ["instance P.Eq (Tree c) where (==) = johnMajorEq", "", "johnMajorEq :: Tree a -> Tree b -> P.Bool"] ++ map prEqCons (cf2cons cf) ++ ["johnMajorEq _ _ = P.False"] where prEqCons c | null vars = "johnMajorEq" +++ fun +++ fun +++ "=" +++ "P.True" | otherwise = "johnMajorEq" +++ "(" ++ fun +++ unwords vars ++ ")" +++ "(" ++ fun +++ unwords vars' ++ ")" +++ "=" +++ List.intercalate " && " (zipWith (\x y -> x +++ "==" +++ y) vars vars') where (fun, vars) = (consFun c, map snd (consVars c)) vars' = map (++ "_") vars prOrd :: CF -> [String] prOrd cf = concat [ [ "instance P.Ord (Tree c) where" , " compare x y = P.compare (index x) (index y) `P.mappend` compareSame x y" ] , [ "", "index :: Tree c -> P.Int" ] , zipWith mkIndex cs [0::Int ..] , when (null cs) [ "index = P.undefined" ] , [ "", "compareSame :: Tree c -> Tree c -> P.Ordering" ] , map mkCompareSame cs -- Case sometimes redundant, so we need to suppress the warning. , [ "compareSame _ _ = P.error \"BNFC error: compareSame\"" ] ] where cs = cf2cons cf mkCompareSame c | null vars = "compareSame" +++ fun +++ fun +++ "=" +++ "P.EQ" | otherwise = "compareSame" +++ "(" ++ fun +++ unwords vars ++ ")" +++ "(" ++ fun +++ unwords vars' ++ ")" +++ "=" +++ foldr1 (\x y -> "P.mappend (" ++ x ++") ("++y++")") cc where (fun, vars) = (consFun c, map snd (consVars c)) vars' = map (++"_") vars cc = zipWith (\x y -> "P.compare"+++x+++y) vars vars' mkIndex c i = "index" +++ "(" ++ consFun c +++ unwords (replicate (length (consVars c)) "_") ++ ")" +++ "=" +++ show i BNFC-2.9.5/src/BNFC/Backend/HaskellGADT/CFtoTemplateGADT.hs0000644000000000000000000000340607346545000020627 0ustar0000000000000000{- BNF Converter: GADT Template Generator Copyright (C) 2004-2005 Author: Markus Forsberg, Björn Bringert -} module BNFC.Backend.HaskellGADT.CFtoTemplateGADT (cf2Template) where import Data.List ( groupBy ) import BNFC.CF import BNFC.Utils ( ModuleName, (+++) ) import BNFC.Backend.Haskell.Utils ( noWarnUnusedMatches ) import BNFC.Backend.HaskellGADT.HaskellGADTCommon cf2Template :: ModuleName -> ModuleName -> CF -> String cf2Template skelName absName cf = unlines $ concat [ [ "{-# LANGUAGE GADTs #-}" , "{-# LANGUAGE EmptyCase #-}" , "" , noWarnUnusedMatches , "" , "module "++ skelName ++ " where" , "" , "import Prelude (Either(..), Show(..), String, ($), (++))" , "" , "import qualified " ++ absName , "" , "type Err = Either String" , "type Result = Err String" , "" , "failure :: Show a => a -> Result" , "failure x = Left $ \"Undefined case: \" ++ show x" , "" , "transTree :: " ++ qualify "Tree" ++ " c -> Result" , "transTree t = case t of" ] , map prConsCase (cf2cons cf) , [ "" ] , concatMap ((++ [""]) . 
uncurry prCatTrans) (catCons cf) ] where prCatTrans :: Cat -> [Constructor] -> [String] prCatTrans cat cs = concat [ [ "trans" ++ s +++ "::" +++ qualify s +++ "-> Result" , "trans" ++ s +++ "t = case t of" ] , map prConsCase cs ] where s = catToStr cat prConsCase :: Constructor -> String prConsCase c = " " ++ qualify (consFun c) +++ unwords (map snd (consVars c)) +++ "-> failure t" qualify x = concat [ absName, ".", x ] catCons :: CF -> [(Cat,[Constructor])] catCons cf = [ (consCat (head cs),cs) | cs <- groupBy catEq $ cf2cons cf] catEq :: Constructor -> Constructor -> Bool catEq c1 c2 = consCat c1 == consCat c2 BNFC-2.9.5/src/BNFC/Backend/HaskellGADT/HaskellGADTCommon.hs0000644000000000000000000000344407346545000021036 0ustar0000000000000000{- BNF Converter: Haskell GADT back-end common stuff Copyright (C) 2004-2005 Author: Markus Forsberg, Björn Bringert -} module BNFC.Backend.HaskellGADT.HaskellGADTCommon (Constructor(..), cf2cons, isTreeType) where import BNFC.CF import BNFC.Backend.Haskell.Utils ( catToVar ) data Constructor = Constructor { consCat :: Cat , consFun :: Fun , consPrec :: Integer , consVars :: [(Cat,String)] , consRhs :: [Either Cat String] } -- | Get category, function, and rhs categories paired with variable names. cf2cons :: CF -> [Constructor] cf2cons cf = [ Constructor { consCat = cat, consFun = fun, consPrec = precFun cf fun , consVars = zip cats (mkVars cats), consRhs = rhsFun cf fun } | (cat,rules) <- cf2data cf, (fun,cats) <- rules] ++ [ Constructor { consCat = TokenCat cat, consFun = cat, consPrec = 0 , consVars = [(Cat "String","str")], consRhs = [Left (Cat "String")] } | cat <- specialCats cf] where mkVars cats = mkUnique (map (catToVar []) cats) (0 :: Int) mkUnique [] _ = [] mkUnique (x:xs) n | x `elem` xs || n > 0 = (x ++ show n) : mkUnique xs (n+1) | otherwise = x : mkUnique xs n -- | Get the rule for a function. ruleFun :: CF -> Fun -> Rule ruleFun cf f = head $ filter ((f ==) . funName . funRule) $ cfgRules cf -- | Get the precedence of a function. precFun :: CF -> Fun -> Integer precFun cf f = precRule $ ruleFun cf f -- | Get the RHS of a function rhsFun :: CF -> Fun -> [Either Cat String] rhsFun cf f = rhsRule $ ruleFun cf f isTreeType :: CF -> Cat -> Bool isTreeType cf (TokenCat c) = c `elem` specialCats cf isTreeType cf c | isList c = isTreeType cf (catOfList c) | otherwise = c `elem` reallyAllCats cf BNFC-2.9.5/src/BNFC/Backend/Java.hs0000644000000000000000000006172307346545000014504 0ustar0000000000000000{-# LANGUAGE NoImplicitPrelude #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE RecordWildCards #-} {- BNF Converter: Java Top File Copyright (C) 2004 Author: Markus Forsberg, Peter Gammie, Michael Pellauer, Bjorn Bringert -} ------------------------------------------------------------------- -- | -- Module : JavaTop -- Copyright : (C)opyright 2003, {markus, aarne, pellauer, peteg, bringert} at cs dot chalmers dot se -- -- Maintainer : {markus, aarne} at cs dot chalmers dot se -- Stability : alpha -- Portability : Haskell98 -- -- Top-level for the Java back end. 
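-- For orientation, 'makeJava' below emits the Absyn classes, a pretty-printer,
-- the visitor skeletons, a Test driver, a lexer and parser specification for
-- either JLex or JFlex with CUP, or ANTLRv4, and a Makefile tying them together.
--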
-- -- > $Id: JavaTop15.hs,v 1.12 2007/01/08 18:20:23 aarne Exp $ ------------------------------------------------------------------- module BNFC.Backend.Java ( makeJava ) where import Prelude hiding ((<>)) import System.FilePath ((), (<.>), pathSeparator, isPathSeparator) import Data.Foldable (toList) import Data.List ( intersperse ) import BNFC.Utils import BNFC.CF import BNFC.Options as Options import BNFC.Backend.Base import BNFC.Backend.Java.Utils import BNFC.Backend.Java.CFtoCup15 ( cf2Cup ) import BNFC.Backend.Java.CFtoJLex15 import BNFC.Backend.Java.CFtoAntlr4Lexer import BNFC.Backend.Java.CFtoAntlr4Parser import BNFC.Backend.Java.CFtoJavaAbs15 ( cf2JavaAbs ) import BNFC.Backend.Java.CFtoJavaPrinter15 import BNFC.Backend.Java.CFtoVisitSkel15 import BNFC.Backend.Java.CFtoComposVisitor import BNFC.Backend.Java.CFtoAbstractVisitor import BNFC.Backend.Java.CFtoFoldVisitor import BNFC.Backend.Java.CFtoAllVisitor import BNFC.Backend.Common.NamedVariables (SymEnv, firstLowerCase) import qualified BNFC.Backend.Common.Makefile as Makefile import BNFC.PrettyPrint ------------------------------------------------------------------- -- | Build the Java output. ------------------------------------------------------------------- -- | This creates the Java files. makeJava :: SharedOptions -> CF -> MkFiles () makeJava opt = makeJava' pkg opt{ lang = lang' } -- issue #212: make a legal package name, see also -- https://docs.oracle.com/javase/tutorial/java/package/namingpkgs.html where pkg = mkName javaReserved SnakeCase $ lang opt lang' = capitalize $ mkName javaReserved CamelCase $ lang opt makeJava' :: String -- ^ Legal package name derived from language name. -> SharedOptions -> CF -> MkFiles () makeJava' pkg options@Options{..} cf = do -- Create the package directories if necessary. let packageBase = maybe id (+.+) inPackage pkg packageAbsyn = packageBase +.+ "Absyn" dirBase = pkgToDir packageBase dirAbsyn = pkgToDir packageAbsyn javaex str = dirBase str <.> "java" bnfcfiles = bnfcVisitorsAndTests packageBase packageAbsyn cf cf2JavaPrinter cf2VisitSkel cf2ComposVisitor cf2AbstractVisitor cf2FoldVisitor cf2AllVisitor (testclass parselexspec (head $ results lexmake) -- lexer class (head $ results parmake) -- parser class ) makebnfcfile x = mkfile (javaex (fst $ x bnfcfiles)) comment (snd $ x bnfcfiles) let absynFiles = remDups $ cf2JavaAbs dirAbsyn packageBase packageAbsyn cf rp absynFileNames = map fst absynFiles mapM_ (\ (n, s) -> mkfile (n <.> "java") comment s) absynFiles makebnfcfile bprettyprinter makebnfcfile bskel makebnfcfile bcompos makebnfcfile babstract makebnfcfile bfold makebnfcfile ball makebnfcfile btest let (lex, env) = lexfun packageBase cf -- Where the lexer file is created. lex is the content! mkfile (dirBase inputfile lexmake ) commentWithEmacsModeHint lex liftIO $ putStrLn $ " (Tested with" +++ toolname lexmake +++ toolversion lexmake ++ ")" -- where the parser file is created. 
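  -- (Illustrative names, taken from the 'MakeFileDetails' below: for CUP this
  -- is something like @<dir>/_cup.cup@, for ANTLRv4 something like
  -- @<dir>/<Lang>Parser.g4@.)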
mkfile (dirBase inputfile parmake) commentWithEmacsModeHint $ parsefun packageBase packageAbsyn cf rp env liftIO $ putStrLn $ if supportsEntryPoints parmake then "(Parser created for all categories)" else " (Parser created only for category " ++ prettyShow (firstEntry cf) ++ ")" liftIO $ putStrLn $ " (Tested with" +++ toolname parmake +++ toolversion parmake ++ ")" Makefile.mkMakefile optMake $ makefile dirBase dirAbsyn absynFileNames parselexspec where remDups [] = [] remDups ((a,b):as) = case lookup a as of Just {} -> remDups as Nothing -> (a, b) : remDups as pkgToDir :: String -> FilePath pkgToDir = replace '.' pathSeparator parselexspec = parserLexerSelector lang javaLexerParser rp lexfun = cf2lex $ lexer parselexspec parsefun = cf2parse $ parser parselexspec parmake = makeparserdetails (parser parselexspec) lexmake = makelexerdetails (lexer parselexspec) rp = (Options.linenumbers options) commentWithEmacsModeHint = comment . ("-*- Java -*- " ++) makefile :: FilePath -> FilePath -> [String] -> ParserLexerSpecification -> String -> Doc makefile dirBase dirAbsyn absynFileNames jlexpar basename = vcat $ makeVars [ ("JAVAC", "javac"), ("JAVAC_FLAGS", "-sourcepath ."), ( "JAVA", "java"), ( "JAVA_FLAGS", ""), -- parser executable ( "PARSER", executable parmake), -- parser flags ( "PARSER_FLAGS", flags parmake dirBase), -- lexer executable (and flags?) ( "LEXER", executable lexmake), ( "LEXER_FLAGS", flags lexmake dirBase) ] ++ makeRules [ ("all", [ "test" ], []), ( "test", "absyn" : classes, []), ( ".PHONY", ["absyn"], []), ("%.class", [ "%.java" ], [ runJavac "$^" ]), ("absyn", [absynJavaSrc],[ runJavac "$^" ]) ]++ [-- running the lexergen: output of lexer -> input of lexer : calls lexer let ff = filename lexmake -- name of input file without extension dirBaseff = dirBase ff -- prepend directory inp = dirBase inputfile lexmake in Makefile.mkRule (dirBaseff <.> "java") [ inp ] [ "${LEXER} ${LEXER_FLAGS} "++ inp ] -- running the parsergen, these there are its outputs -- output of parser -> input of parser : calls parser , let inp = dirBase inputfile parmake in Makefile.mkRule (unwords (map (dirBase ) (dotJava $ results parmake))) [ inp ] $ ("${PARSER} ${PARSER_FLAGS} " ++ inp) : ["mv " ++ unwords (dotJava $ results parmake) +++ dirBase ++ [pathSeparator] | moveresults parmake] -- Class of the output of lexer generator wants java of : -- output of lexer and parser generator , let lexerOutClass = dirBase filename lexmake <.> "class" outname x = dirBase x <.> "java" deps = map outname (results lexmake ++ results parmake) in Makefile.mkRule lexerOutClass deps [] ]++ reverse [Makefile.mkRule tar dep [] | (tar,dep) <- partialParserGoals dirBase (results parmake)] ++[ Makefile.mkRule (dirBase "PrettyPrinter.class") [ dirBase "PrettyPrinter.java" ] [] -- Removes all the class files created anywhere , Makefile.mkRule "clean" [] [ "rm -f " ++ dirAbsyn "*.class" ++ " " ++ dirBase "*.class" ] -- Remains the same , Makefile.mkRule "distclean" [ "vclean" ] [] -- removes everything , Makefile.mkRule "vclean" [] [ " rm -f " ++ absynJavaSrc ++ " " ++ absynJavaClass , " rm -f " ++ dirAbsyn "*.class" , " rmdir " ++ dirAbsyn , " rm -f " ++ unwords (map (dirBase ) $ [ inputfile lexmake , inputfile parmake ] ++ dotJava (results lexmake) ++ [ "VisitSkel.java" , "ComposVisitor.java" , "AbstractVisitor.java" , "FoldVisitor.java" , "AllVisitor.java" , "PrettyPrinter.java" , "Skeleton.java" , "Test.java" ] ++ dotJava (results parmake) ++ ["*.class"] ++ other_results lexmake ++ other_results parmake) , " rm -f " ++ 
basename , " rmdir -p " ++ dirBase ] ] where makeVars x = [Makefile.mkVar n v | (n,v) <- x] makeRules x = [Makefile.mkRule tar dep recipe | (tar, dep, recipe) <- x] parmake = makeparserdetails (parser jlexpar) lexmake = makelexerdetails (lexer jlexpar) absynJavaSrc = unwords (dotJava absynFileNames) absynJavaClass = unwords (dotClass absynFileNames) classes = map (dirBase ) lst lst = dotClass (results lexmake) ++ [ "PrettyPrinter.class", "Test.class" , "VisitSkel.class" , "ComposVisitor.class", "AbstractVisitor.class" , "FoldVisitor.class", "AllVisitor.class"]++ dotClass (results parmake) ++ ["Test.class"] type TestClass = String -- ^ class of the lexer -> String -- ^ class of the parser -> String -- ^ package where the non-abstract syntax classes are created -> String -- ^ package where the abstract syntax classes are created -> CF -- ^ the CF bundle -> String -- | Record to name arguments of 'javaTest'. data JavaTestParams = JavaTestParams { jtpImports :: [Doc] -- ^ List of imported packages. , jtpErr :: String -- ^ Name of the exception thrown in case of parsing failure. , jtpErrHand :: (String -> [Doc]) -- ^ Handler for the exception thrown. , jtpLexerConstruction :: (Doc -> Doc -> Doc) -- ^ Function formulating the construction of the lexer object. , jtpParserConstruction :: (Doc -> Doc -> Doc) -- ^ As above, for parser object. , jtpShowAlternatives :: ([Cat] -> [Doc]) -- ^ Pretty-print the names of the methods corresponding to entry points to the user. , jtpInvocation :: (Doc -> Doc -> Doc -> Doc -> Doc) -- ^ Function formulating the invocation of the parser tool within Java. , jtpErrMsg :: String -- ^ Error string output in consequence of a parsing failure. } -- | Test class details for J(F)Lex + CUP cuptest :: TestClass cuptest = javaTest $ JavaTestParams { jtpImports = ["java_cup.runtime"] , jtpErr = "Throwable" , jtpErrHand = const [] , jtpLexerConstruction = \ x i -> x <> i <> ";" , jtpParserConstruction = \ x i -> x <> "(" <> i <> ", " <> i <> ".getSymbolFactory());" , jtpShowAlternatives = const $ ["not available."] , jtpInvocation = \ _ pabs dat enti -> hcat [ pabs, ".", dat, " ast = p.p", enti, "();" ] , jtpErrMsg = unwords $ [ "At line \" + String.valueOf(t.l.line_num()) + \"," , "near \\\"\" + t.l.buff() + \"\\\" :" ] } -- | Test class details for ANTLR4 antlrtest :: TestClass antlrtest = javaTest $ JavaTestParams { jtpImports = [ "org.antlr.v4.runtime" , "org.antlr.v4.runtime.atn" , "org.antlr.v4.runtime.dfa" , "java.util" ] , jtpErr = "TestError" , jtpErrHand = antlrErrorHandling , jtpLexerConstruction = \ x i -> vcat [ x <> "(new ANTLRInputStream" <> i <>");" , "l.addErrorListener(new BNFCErrorListener());" ] , jtpParserConstruction = \ x i -> vcat [ x <> "(new CommonTokenStream(" <> i <>"));" , "p.addErrorListener(new BNFCErrorListener());" ] , jtpShowAlternatives = showOpts , jtpInvocation = \ pbase pabs dat enti -> vcat [ let rulename = getRuleName $ startSymbol $ render enti typename = text rulename methodname = text $ firstLowerCase rulename in pbase <> "." <> typename <> "Context pc = p." <> methodname <> "();" , pabs <> "." 
<> dat <+> "ast = pc.result;" ] , jtpErrMsg = "At line \" + e.line + \", column \" + e.column + \" :" } where showOpts [] = [] showOpts (x:xs) | normCat x /= x = showOpts xs | otherwise = text (firstLowerCase $ identCat x) : showOpts xs parserLexerSelector :: String -> JavaLexerParser -> RecordPositions -- ^Pass line numbers to the symbols -> ParserLexerSpecification parserLexerSelector _ JLexCup rp = ParseLexSpec { lexer = cf2JLex rp , parser = cf2cup rp , testclass = cuptest } parserLexerSelector _ JFlexCup rp = (parserLexerSelector "" JLexCup rp){lexer = cf2JFlex rp} parserLexerSelector l Antlr4 _ = ParseLexSpec { lexer = cf2AntlrLex' l , parser = cf2AntlrParse' l , testclass = antlrtest } data ParserLexerSpecification = ParseLexSpec { parser :: CFToParser , lexer :: CFToLexer , testclass :: TestClass } -- |CF -> LEXER GENERATION TOOL BRIDGE -- | function translating the CF to an appropriate lexer generation tool. type CF2LexerFunction = String -> CF -> (Doc, SymEnv) -- Chooses the translation from CF to the lexer data CFToLexer = CF2Lex { cf2lex :: CF2LexerFunction , makelexerdetails :: MakeFileDetails } -- | Instances of cf-lexergen bridges cf2JLex :: RecordPositions -> CFToLexer cf2JLex rp = CF2Lex { cf2lex = cf2jlex JLexCup rp , makelexerdetails = jlexmakedetails } cf2JFlex :: RecordPositions -> CFToLexer cf2JFlex rp = CF2Lex { cf2lex = cf2jlex JFlexCup rp , makelexerdetails = jflexmakedetails } cf2AntlrLex' :: String -> CFToLexer cf2AntlrLex' l = CF2Lex { cf2lex = const $ cf2AntlrLex l , makelexerdetails = antlrmakedetails $ l ++ "Lexer" } -- | CF -> PARSER GENERATION TOOL BRIDGE -- | function translating the CF to an appropriate parser generation tool. type CF2ParserFunction = String -> String -> CF -> RecordPositions -> SymEnv -> String -- | Chooses the translation from CF to the parser data CFToParser = CF2Parse { cf2parse :: CF2ParserFunction , makeparserdetails :: MakeFileDetails } -- | Instances of cf-parsergen bridges cf2cup :: RecordPositions -> CFToParser cf2cup rp = CF2Parse { cf2parse = cf2Cup , makeparserdetails = cupmakedetails rp } cf2AntlrParse' :: String -> CFToParser cf2AntlrParse' l = CF2Parse { cf2parse = const $ cf2AntlrParse l , makeparserdetails = antlrmakedetails $ l ++ "Parser" } -- | shorthand for Makefile command running javac or java runJavac , runJava:: String -> String runJava = mkRunProgram "JAVA" runJavac = mkRunProgram "JAVAC" -- | function returning a string executing a program contained in a variable j -- on input s mkRunProgram :: String -> String -> String mkRunProgram j s = Makefile.refVar j +++ Makefile.refVar (j +-+ "FLAGS") +++ s type OutputDirectory = String -- | Makefile details from running the parser-lexer generation tools. data MakeFileDetails = MakeDetails { -- | The string that executes the generation tool. executable :: String , -- | Flags to pass to the tool. flags :: OutputDirectory -> String , -- | Input file to the tool. filename :: String , -- | Extension of input file to the tool. fileextension :: String , -- | Name of the tool. toolname :: String , -- | Tool version. toolversion :: String , -- | True if the tool is a parser and supports entry points, -- False otherwise. supportsEntryPoints :: Bool , -- | List of names (without extension!) of files resulting from the -- application of the tool which are relevant to a make rule. results :: [String] , -- | List of names of files resulting from the application of -- the tool which are irrelevant to the make rules but need to be cleaned. 
other_results :: [String] , -- | If True, the files are moved to the base directory, otherwise -- they are left where they are. moveresults :: Bool } -- Instances of makefile details. jlexmakedetails :: MakeFileDetails jlexmakedetails = MakeDetails { executable = runJava "JLex.Main" , flags = const "" , filename = "Yylex" , fileextension = "" , toolname = "JLex" , toolversion = "1.2.6" , supportsEntryPoints = False , results = ["Yylex"] , other_results = [] , moveresults = False } jflexmakedetails :: MakeFileDetails jflexmakedetails = jlexmakedetails { executable = "jflex" , toolname = "JFlex" , toolversion = "1.4.3 - 1.9.1" } cupmakedetails :: RecordPositions -> MakeFileDetails cupmakedetails rp = MakeDetails { executable = runJava "java_cup.Main" , flags = const (lnFlags ++ " -expect 100") , filename = "_cup" , fileextension = "cup" , toolname = "CUP" , toolversion = "0.11b" , supportsEntryPoints = False , results = ["parser", "sym"] , other_results = [] , moveresults = True } where lnFlags = if rp == RecordPositions then "-locations" else "-nopositions" antlrmakedetails :: String -> MakeFileDetails antlrmakedetails l = MakeDetails { executable = runJava "org.antlr.v4.Tool" , flags = \ path -> unwords $ let pointed = map cnv path cnv y = if isPathSeparator y then '.' else y in [ "-lib", path , "-package", pointed] , filename = l , fileextension = "g4" , toolname = "ANTLRv4" , toolversion = "4.9" , supportsEntryPoints = True , results = [l] , other_results = map (l ++) [ ".interp" -- added after ANTLR 4.5 , ".tokens" , "BaseListener.java" ,"Listener.java" ] , moveresults = False } dotJava :: [String] -> [String] dotJava = map (<.> "java") dotClass :: [String] -> [String] dotClass = map (<.> "class") type CFToJava = String -> String -> CF -> String -- | Contains the pairs filename/content for all the non-abstract syntax files -- generated by BNFC. data BNFCGeneratedEntities = BNFCGenerated { bprettyprinter :: (String, String) , btest :: (String, String) , bcompos :: (String, String) , babstract :: (String, String) , bfold :: (String, String) , ball :: (String, String) , bskel :: (String, String) } bnfcVisitorsAndTests :: String -> String -> CF -> CFToJava -> CFToJava -> CFToJava -> CFToJava -> CFToJava -> CFToJava -> CFToJava -> BNFCGeneratedEntities bnfcVisitorsAndTests pbase pabsyn cf cf0 cf1 cf2 cf3 cf4 cf5 cf6 = BNFCGenerated { bprettyprinter = ( "PrettyPrinter" , app cf0) , bskel = ( "VisitSkel", app cf1) , bcompos = ( "ComposVisitor" , app cf2) , babstract = ( "AbstractVisitor" , app cf3) , bfold = ( "FoldVisitor", app cf4) , ball = ( "AllVisitor", app cf5) , btest = ( "Test" , app cf6) } where app x = x pbase pabsyn cf inputfile :: MakeFileDetails -> String inputfile x | null (fileextension x) = filename x | otherwise = filename x <.> fileextension x -- | constructs the rules regarding the parser in the makefile partialParserGoals :: String -> [String] -> [(String, [String])] partialParserGoals _ [] = [] partialParserGoals dirBase (x:rest) = (dirBase x <.> "class", map (\ y -> dirBase y <.> "java") (x:rest)) : partialParserGoals dirBase rest -- | Creates the Test.java class. 
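-- An illustrative sketch of the Test class produced by 'javaTest' below for
-- the J(F)Lex+CUP toolchain; the package "Calc", lexer class "Yylex", parser
-- class "parser" and entry point "Exp" are assumptions for the example, and
-- the blank separator printouts are omitted:
--
--   package Calc;
--   import Calc.*;
--   import java.io.*;
--   import java_cup.runtime.*;
--
--   public class Test
--   {
--     Yylex l;
--     parser p;
--
--     public Test(String[] args)
--     {
--       try
--       {
--         Reader input;
--         if (args.length == 0) input = new InputStreamReader(System.in);
--         else input = new FileReader(args[0]);
--         l = new Yylex(input);
--       }
--       catch(IOException e)
--       {
--         System.err.println("Error: File not found: " + args[0]);
--         System.exit(1);
--       }
--       p = new parser(l, l.getSymbolFactory());
--     }
--
--     public Calc.Absyn.Exp parse() throws Exception
--     {
--       /* The default parser is the first-defined entry point. */
--       Calc.Absyn.Exp ast = p.pExp();
--       System.out.println("Parse Successful!");
--       System.out.println("[Abstract Syntax]");
--       System.out.println(PrettyPrinter.show(ast));
--       System.out.println("[Linearized Tree]");
--       System.out.println(PrettyPrinter.print(ast));
--       return ast;
--     }
--
--     public static void main(String args[]) throws Exception { ... }
--   }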
javaTest :: JavaTestParams -> TestClass javaTest (JavaTestParams imports err errhand lexerconstruction parserconstruction showOpts invocation errmsg) lexer parser packageBase packageAbsyn cf = render $ vcat $ concat $ [ [ "package" <+> text packageBase <> ";" , "" , "import" <+> text packageBase <> ".*;" , "import java.io.*;" ] , map importfun imports , [ "" ] , errhand err , [ "" , "public class Test" , codeblock 2 [ lx <+> "l;" , px <+> "p;" , "" , "public Test(String[] args)" , codeblock 2 [ "try" , codeblock 2 [ "Reader input;" , "if (args.length == 0) input = new InputStreamReader(System.in);" , "else input = new FileReader(args[0]);" , "l = new " <> lexerconstruction lx "(input)" ] , "catch(IOException e)" , codeblock 2 [ "System.err.println(\"Error: File not found: \" + args[0]);" , "System.exit(1);" ] , "p = new "<> parserconstruction px "l" ] , "" , "public" <+> text packageAbsyn <> "." <> dat <+> "parse() throws Exception" , codeblock 2 $ concat [ [ "/* The default parser is the first-defined entry point. */" ] , unlessNull (drop 1 eps) $ \ eps' -> [ "/* Other options are: */" , "/* " <> fsep (punctuate "," (showOpts eps')) <> " */" ] , [ invocation px (text packageAbsyn) dat absentity , printOuts [ "\"Parse Successful!\"" , "\"[Abstract Syntax]\"" , "PrettyPrinter.show(ast)" , "\"[Linearized Tree]\"" , "PrettyPrinter.print(ast)" ] , "return ast;" ] ] , "" , "public static void main(String args[]) throws Exception" , codeblock 2 [ "Test t = new Test(args);" , "try" , codeblock 2 [ "t.parse();" ] ,"catch(" <> text err <+> "e)" , codeblock 2 [ "System.err.println(\"" <> text errmsg <> "\");" , "System.err.println(\" \" + e.getMessage());" , "System.exit(1);" ] ] ] ] ] where printOuts x = vcat $ map javaPrintOut (messages x) messages x = "" : intersperse "" x javaPrintOut x = text $ "System.out.println(" ++ x ++ ");" importfun x = "import" <+> x <> ".*;" lx = text lexer px = text parser dat = text $ identCat $ normCat def -- Use for AST types. absentity = text $ identCat def -- Use for parser/printer name. eps = toList $ allEntryPoints cf def = head eps -- | Error handling in ANTLR. -- By default, ANTLR does not stop after any parsing error and attempts to go -- on, delivering what it has been able to parse. -- It does not throw any exception, unlike J(F)lex+CUP. -- The below code makes the test class behave as with J(F)lex+CUP. 
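-- For reference, the Java emitted by 'antlrErrorHandling' below looks roughly
-- like this (the exception name, here "TestError", is whatever string is
-- passed in):
--
--   class TestError extends RuntimeException
--   {
--     int line;
--     int column;
--     public TestError(String msg, int l, int c)
--     {
--       super(msg);
--       line = l;
--       column = c;
--     }
--   }
--   class BNFCErrorListener implements ANTLRErrorListener
--   {
--     @Override
--     public void syntaxError(Recognizer recognizer, Object o, int i, int i1,
--                             String s, RecognitionException e)
--     {
--       throw new TestError(s,i,i1);
--     }
--     // reportAmbiguity likewise throws TestError; the full-context and
--     // context-sensitivity callbacks are emitted with empty bodies.
--   }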
antlrErrorHandling :: String -> [Doc] antlrErrorHandling te = [ "class"<+>tedoc<+>"extends RuntimeException" , codeblock 2 [ "int line;" , "int column;" , "public"<+>tedoc<>"(String msg, int l, int c)" , codeblock 2 [ "super(msg);" , "line = l;" , "column = c;" ] ] , "class BNFCErrorListener implements ANTLRErrorListener" , codeblock 2 [ "@Override" , "public void syntaxError(Recognizer recognizer, Object o, int i" <> ", int i1, String s, RecognitionException e)" , codeblock 2 [ "throw new"<+>tedoc<>"(s,i,i1);"] , "@Override" , "public void reportAmbiguity(Parser parser, DFA dfa, int i, int i1, " <>"boolean b, BitSet bitSet, ATNConfigSet atnConfigSet)" , codeblock 2[ "throw new"<+>tedoc<>"(\"Ambiguity at\",i,i1);" ] , "@Override" , "public void reportAttemptingFullContext(Parser parser, DFA dfa, " <>"int i, int i1, BitSet bitSet, ATNConfigSet atnConfigSet)" , codeblock 2 [] , "@Override" ,"public void reportContextSensitivity(Parser parser, DFA dfa, int i, " <>"int i1, int i2, ATNConfigSet atnConfigSet)" ,codeblock 2 [] ] ] where tedoc = text te BNFC-2.9.5/src/BNFC/Backend/Java/0000755000000000000000000000000007346545000014137 5ustar0000000000000000BNFC-2.9.5/src/BNFC/Backend/Java/CFtoAbstractVisitor.hs0000644000000000000000000000312007346545000020366 0ustar0000000000000000{- BNF Converter: Java 1.5 Abstract Vistor generator Copyright (C) 2006 Bjorn Bringert Based on CFtoVisitSkel.hs, Copyright (C) 2004-2006 Michael Pellauer -} module BNFC.Backend.Java.CFtoAbstractVisitor (cf2AbstractVisitor) where import BNFC.CF import BNFC.Utils ((+++)) import BNFC.Backend.Common.NamedVariables cf2AbstractVisitor :: String -> String -> CF -> String cf2AbstractVisitor packageBase packageAbsyn cf = unlines [ "package" +++ packageBase ++ ";" , "" , "/** Abstract Visitor */" , "" , "public class AbstractVisitor implements AllVisitor {" , concatMap (prData packageAbsyn user) groups , "}" ] where user = map fst $ tokenPragmas cf groups = [ g | g@(c,_) <- fixCoercions (ruleGroupsInternals cf), not (isList c) ] --Traverses a category based on its type. prData :: String -> [UserDef] -> (Cat, [Rule]) -> String prData packageAbsyn user (cat, rules) = unlines $ concat [ [ " /* " ++ identCat cat ++ " */" ] , concatMap (prRule packageAbsyn user cat) rules , [ " public R visitDefault(" ++ packageAbsyn ++ "." ++ identCat cat ++ " p, A arg) {" , " throw new IllegalArgumentException(this.getClass()" ++ ".getName() + \": \" + p);" , " }" ] ] --traverses a standard rule. prRule :: String -> [UserDef] -> Cat -> Rule -> [String] prRule packageAbsyn _ _ (Rule fun _ _ _) | not (isCoercion fun || isDefinedRule fun) = return $ concat [ " public R visit(" , packageAbsyn ++ "." 
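-- The string assembled here becomes, for an assumed abstract-syntax package
-- "Calc.Absyn" and constructor "EAdd", a line of the shape:
--
--   public R visit(Calc.Absyn.EAdd p, A arg) { return visitDefault(p, arg); }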
++ funName fun , " p, A arg) { return visitDefault(p, arg); }" ] | otherwise = [] BNFC-2.9.5/src/BNFC/Backend/Java/CFtoAllVisitor.hs0000644000000000000000000000177107346545000017345 0ustar0000000000000000{- BNF Converter: Java 1.5 All Visitor generator Copyright (C) 2006 Bjorn Bringert Based on CFtoVisitSkel.hs, Copyright (C) 2004-2006 Michael Pellauer -} module BNFC.Backend.Java.CFtoAllVisitor (cf2AllVisitor) where import Data.List (intercalate) import BNFC.CF import BNFC.Utils ((+++)) import BNFC.Backend.Common.NamedVariables cf2AllVisitor :: String -> String -> CF -> String cf2AllVisitor packageBase packageAbsyn cf = unlines $ concat [ [ "package" +++ packageBase ++ ";" , "" , "/** All Visitor */" , "" , "public interface AllVisitor" ++ if null is then "" else " extends" ] , [ intercalate ",\n" $ map (" "++) is | not $ null is ] , [ "{}" ] ] where groups = [ g | g@(c,_) <- fixCoercions (ruleGroupsInternals cf), not (isList c) ] is = map (prInterface packageAbsyn) groups prInterface :: String -> (Cat, [Rule]) -> String prInterface packageAbsyn (cat, _) = q ++ ".Visitor" where q = packageAbsyn ++ "." ++ identCat cat BNFC-2.9.5/src/BNFC/Backend/Java/CFtoAntlr4Lexer.hs0000644000000000000000000001272207346545000017417 0ustar0000000000000000{-# LANGUAGE NoImplicitPrelude #-} {-# LANGUAGE OverloadedStrings #-} {- BNF Converter: Java Antlr4 Lexer generator Copyright (C) 2015 Author: Gabriele Paganelli Description : This module generates the Antlr4 input file. Based on CFtoJLex15.hs Author : Gabriele Paganelli (gapag@distruzione.org) Created : 15 Oct, 2015 -} module BNFC.Backend.Java.CFtoAntlr4Lexer ( cf2AntlrLex ) where import Prelude hiding ((<>)) import Text.PrettyPrint import BNFC.CF import BNFC.Backend.Java.RegToAntlrLexer import BNFC.Backend.Common.NamedVariables -- | Creates a lexer grammar. -- Since antlr token identifiers must start with an uppercase symbol, -- I prepend "Surrogate_id_SYMB_" to the identifier. -- This introduces risks of clashes if somebody uses the same identifier for -- user defined tokens. This is not handled. -- returns the environment because the parser uses it. cf2AntlrLex :: String -> CF -> (Doc, KeywordEnv) cf2AntlrLex lang cf = (,env) $ vcat [ prelude lang , cMacros -- unnamed symbols (those in quotes, not in token definitions) , lexSymbols env , restOfLexerGrammar cf ] where env = zip (cfgSymbols cf ++ reservedWords cf) $ map (("Surrogate_id_SYMB_" ++) . show) [0 :: Int ..] -- | File prelude prelude :: String -> Doc prelude lang = vcat [ "// Lexer definition for use with Antlr4" , "lexer grammar" <+> text lang <> "Lexer;" ] --For now all categories are included. --Optimally only the ones that are used should be generated. cMacros :: Doc cMacros = vcat [ "// Predefined regular expressions in BNFC" , frg "LETTER : CAPITAL | SMALL" , frg "CAPITAL : [A-Z\\u00C0-\\u00D6\\u00D8-\\u00DE]" , frg "SMALL : [a-z\\u00DF-\\u00F6\\u00F8-\\u00FF]" , frg "DIGIT : [0-9]" ] where frg a = "fragment" <+> a <+> ";" escapeChars :: String -> String escapeChars = concatMap escapeCharInSingleQuotes -- | -- >>> lexSymbols [("foo","bar")] -- bar : 'foo' ; -- >>> lexSymbols [("\\","bar")] -- bar : '\\' ; -- >>> lexSymbols [("/","bar")] -- bar : '/' ; -- >>> lexSymbols [("~","bar")] -- bar : '~' ; lexSymbols :: KeywordEnv -> Doc lexSymbols ss = vcat $ map transSym ss where transSym (s,r) = text r <> " : '" <> text (escapeChars s) <> "' ;" -- | Writes rules for user defined tokens, and, if used, the predefined BNFC tokens. 
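-- Put together, the prelude, the character-class fragments, the surrogate
-- symbol rules and the rules generated below yield a lexer grammar of roughly
-- this shape (language name "Calc" and the symbol numbering are illustrative):
--
--   // Lexer definition for use with Antlr4
--   lexer grammar CalcLexer;
--   fragment LETTER : CAPITAL | SMALL ;
--   fragment DIGIT : [0-9] ;
--   Surrogate_id_SYMB_0 : '+' ;
--   Surrogate_id_SYMB_1 : '(' ;
--   ...
--   INTEGER : DIGIT+;
--   WS : (' ' | '\r' | '\t' | '\n' | '\f')+ -> skip;
--   ErrorToken : . ;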
restOfLexerGrammar :: CF -> Doc restOfLexerGrammar cf = vcat [ lexComments (comments cf) , "" , userDefTokens , ifString strdec , ifChar chardec , ifC catDouble [ "// Double predefined token type", "DOUBLE : DIGIT+ '.' DIGIT+ ('e' '-'? DIGIT+)?;" ] , ifC catInteger [ "//Integer predefined token type", "INTEGER : DIGIT+;" ] , ifC catIdent [ "// Identifier token type" , "fragment" , "IDENTIFIER_FIRST : LETTER | '_';", "IDENT : IDENTIFIER_FIRST (IDENTIFIER_FIRST | DIGIT)*;" ] , "// Whitespace" , "WS : (' ' | '\\r' | '\\t' | '\\n' | '\\f')+ -> skip;" , "// Escapable sequences" , "fragment" , "Escapable : ('\"' | '\\\\' | 'n' | 't' | 'r' | 'f');" , "ErrorToken : . ;" , ifString stringmodes , ifChar charmodes ] where ifC cat s = if isUsedCat cf (TokenCat cat) then vcat s else "" ifString = ifC catString ifChar = ifC catChar strdec = [ "// String token type" , "STRING : '\"' -> more, mode(STRINGMODE);" ] chardec = ["CHAR : '\\'' -> more, mode(CHARMODE);"] userDefTokens = vcat [ text name <> " : " <> text (printRegJLex exp) <> ";" | (name, exp) <- tokenPragmas cf ] stringmodes = [ "mode STRESCAPE;" , "STRESCAPED : Escapable -> more, popMode ;" , "mode STRINGMODE;" , "STRINGESC : '\\\\' -> more , pushMode(STRESCAPE);" , "STRINGEND : '\"' -> type(STRING), mode(DEFAULT_MODE);" , "STRINGTEXT : ~[\"\\\\] -> more;" ] charmodes = [ "mode CHARMODE;" , "CHARANY : ~['\\\\] -> more, mode(CHAREND);" , "CHARESC : '\\\\' -> more, pushMode(CHAREND),pushMode(ESCAPE);" , "mode ESCAPE;" , "ESCAPED : (Escapable | '\\'') -> more, popMode ;" , "mode CHAREND;" , "CHARENDC : '\\'' -> type(CHAR), mode(DEFAULT_MODE);" ] lexComments :: ([(String, String)], [String]) -> Doc lexComments ([],[]) = "" lexComments (m,s) = vcat (prod "COMMENT_antlr_builtin" lexSingleComment s ++ prod "MULTICOMMENT_antlr_builtin" lexMultiComment m ) where prod bg lc ty = [bg, ": ("] ++ punctuate "|" (map lc ty) ++ skiplex skiplex = [") -> skip;"] -- | Create lexer rule for single-line comments. -- -- >>> lexSingleComment "--" -- '--' ~[\r\n]* (('\r'? '\n')|EOF) -- -- >>> lexSingleComment "\"" -- '"' ~[\r\n]* (('\r'? '\n')|EOF) lexSingleComment :: String -> Doc lexSingleComment c = "'" <>text (escapeChars c) <> "' ~[\\r\\n]* (('\\r'? '\\n')|EOF)" -- | Create lexer rule for multi-lines comments. -- -- There might be a possible bug here if a language includes 2 multi-line -- comments. They could possibly start a comment with one character and end it -- with another. However this seems rare. -- -- >>> lexMultiComment ("{-", "-}") -- '{-' (.)*? '-}' -- -- >>> lexMultiComment ("\"'", "'\"") -- '"\'' (.)*? '\'"' lexMultiComment :: (String, String) -> Doc lexMultiComment (b,e) = "'" <> text (escapeChars b) <>"' (.)*? '"<> text (escapeChars e) <> "'" BNFC-2.9.5/src/BNFC/Backend/Java/CFtoAntlr4Parser.hs0000644000000000000000000002025207346545000017571 0ustar0000000000000000{- BNF Converter: Antlr4 Java 1.8 Generator Copyright (C) 2004 Author: Markus Forsberg, Michael Pellauer, Bjorn Bringert Description : This module generates the ANTLR .g4 input file. It follows the same basic structure of CFtoHappy. Author : Gabriele Paganelli (gapag@distruzione.org) Created : 15 Oct, 2015 -} {-# LANGUAGE LambdaCase #-} module BNFC.Backend.Java.CFtoAntlr4Parser ( cf2AntlrParse ) where import Data.Foldable ( toList ) import Data.List ( intercalate ) import Data.Maybe import BNFC.CF import BNFC.Options ( RecordPositions(..) 
) import BNFC.Utils ( (+++), (+.+), applyWhen ) import BNFC.Backend.Java.Utils import BNFC.Backend.Common.NamedVariables -- Type declarations -- | A definition of a non-terminal by all its rhss, -- together with parse actions. data PDef = PDef { _pdNT :: Maybe String -- ^ If given, the name of the lhss. Usually computed from 'pdCat'. , _pdCat :: Cat -- ^ The category to parse. , _pdAlts :: [(Pattern, Action, Maybe Fun)] -- ^ The possible rhss with actions. If 'null', skip this 'PDef'. -- Where 'Nothing', skip ANTLR rule label. } type Rules = [PDef] type Pattern = String type Action = String type MetaVar = (String, Cat) -- | Creates the ANTLR parser grammar for this CF. --The environment comes from CFtoAntlr4Lexer cf2AntlrParse :: String -> String -> CF -> RecordPositions -> KeywordEnv -> String cf2AntlrParse lang packageAbsyn cf _ env = unlines $ concat [ [ header , tokens , "" -- Generate start rules [#272] -- _X returns [ dX result ] : x=X EOF { $result = $x.result; } , prRules packageAbsyn $ map entrypoint $ toList $ allEntryPoints cf -- Generate regular rules , prRules packageAbsyn $ rulesForAntlr4 packageAbsyn cf env ] ] where header :: String header = unlines [ "// Parser definition for use with ANTLRv4" , "parser grammar" +++ lang ++ "Parser;" ] tokens :: String tokens = unlines [ "options {" , " tokenVocab = " ++ lang ++ "Lexer;" , "}" ] -- | Generate start rule to help ANTLR. -- -- @start_X returns [ X result ] : x=X EOF { $result = $x.result; } # Start_X@ -- entrypoint :: Cat -> PDef entrypoint cat = PDef (Just nt) cat [(pat, act, fun)] where nt = firstLowerCase $ startSymbol $ identCat cat pat = "x=" ++ catToNT cat +++ "EOF" act = "$result = $x.result;" fun = Nothing -- No ANTLR Rule label, ("Start_" ++ identCat cat) conflicts with lhs. --The following functions are a (relatively) straightforward translation --of the ones in CFtoHappy.hs rulesForAntlr4 :: String -> CF -> KeywordEnv -> Rules rulesForAntlr4 packageAbsyn cf env = map mkOne getrules where getrules = ruleGroups cf mkOne (cat,rules) = constructRule packageAbsyn cf env rules cat -- | For every non-terminal, we construct a set of rules. A rule is a sequence of -- terminals and non-terminals, and an action to be performed. constructRule :: String -> CF -> KeywordEnv -> [Rule] -> NonTerminal -> PDef constructRule packageAbsyn cf env rules nt = PDef Nothing nt $ [ ( p , generateAction packageAbsyn nt (funRule r) m b , Nothing -- labels not needed for BNFC-generated AST parser -- , Just label -- -- Did not work: -- -- , if firstLowerCase (getLabelName label) -- -- == getRuleName (firstLowerCase $ identCat nt) then Nothing else Just label ) | (index, r0) <- zip [1..] rules , let b = isConsFun (funRule r0) && elem (valCat r0) (cfgReversibleCats cf) , let r = applyWhen b revSepListRule r0 , let (p,m0) = generatePatterns index env r , let m = applyWhen b reverse m0 -- , let label = funRule r ] -- Generates a string containing the semantic action. generateAction :: IsFun f => String -> NonTerminal -> f -> [MetaVar] -> Bool -- ^ Whether the list should be reversed or not. -- Only used if this is a list rule. -> Action generateAction packageAbsyn nt f ms rev | isNilFun f = "$result = new " ++ c ++ "();" | isOneFun f = "$result = new " ++ c ++ "(); $result.addLast(" ++ p_1 ++ ");" | isConsFun f = "$result = " ++ p_2 ++ "; " ++ "$result." ++ add ++ "(" ++ p_1 ++ ");" | isCoercion f = "$result = " ++ p_1 ++ ";" | isDefinedRule f = "$result = " ++ packageAbsyn ++ "Def." 
++ sanitize (funName f) ++ "(" ++ intercalate "," (map resultvalue ms) ++ ");" | otherwise = "$result = new " ++ c ++ "(" ++ intercalate "," (map resultvalue ms) ++ ");" where sanitize = getRuleName c = packageAbsyn ++ "." ++ if isNilFun f || isOneFun f || isConsFun f then identCat (normCat nt) else funName f p_1 = resultvalue $ ms!!0 p_2 = resultvalue $ ms!!1 add = if rev then "addLast" else "addFirst" removeQuotes x = x +.+ "substring(1, " ++ x +.+ "length()-1)" unescape x = removeQuotes x +.+ "translateEscapes()" -- Java 15 and higher resultvalue (n,c) = case c of TokenCat "Double" -> concat [ "Double.parseDouble(", txt, ")" ] TokenCat "Integer" -> concat [ "Integer.parseInt(" , txt, ")" ] TokenCat "Char" -> unescape txt +.+ "charAt(0)" TokenCat "String" -> unescape txt TokenCat "Ident" -> txt c | isTokenCat c -> txt | otherwise -> concat [ "$", n, ".result" ] where txt = '$':n +.+ "getText()" -- | Generate patterns and a set of metavariables indicating -- where in the pattern the non-terminal -- >>> generatePatterns 2 [] $ npRule "myfun" (Cat "A") [] Parsable -- (" /* empty */ ",[]) -- >>> generatePatterns 3 [("def", "_SYMB_1")] $ npRule "myfun" (Cat "A") [Right "def", Left (Cat "B")] Parsable -- ("_SYMB_1 p_3_2=b",[("p_3_2",B)]) generatePatterns :: Int -> KeywordEnv -> Rule -> (Pattern,[MetaVar]) generatePatterns ind env r = case rhsRule r of [] -> (" /* empty */ ", []) its -> ( unwords $ mapMaybe (uncurry mkIt) nits , [ (var i, cat) | (i, Left cat) <- nits ] ) where nits = zip [1 :: Int ..] its var i = "p_" ++ show ind ++"_"++ show i -- TODO: is ind needed for ANTLR? mkIt i = \case Left c -> Just $ var i ++ "=" ++ catToNT c Right s -> lookup s env catToNT :: Cat -> String catToNT = \case TokenCat "Ident" -> "IDENT" TokenCat "Integer" -> "INTEGER" TokenCat "Char" -> "CHAR" TokenCat "Double" -> "DOUBLE" TokenCat "String" -> "STRING" c | isTokenCat c -> identCat c | otherwise -> firstLowerCase $ getRuleName $ identCat c -- | Puts together the pattern and actions and returns a string containing all -- the rules. prRules :: String -> Rules -> String prRules packabs = concatMap $ \case -- No rules: skip. PDef _mlhs _nt [] -> "" -- At least one rule: print! PDef mlhs nt (rhs : rhss) -> unlines $ concat -- The definition header: lhs and type. [ [ unwords [ fromMaybe nt' mlhs , "returns" , "[" , packabs+.+normcat , "result" , "]" ] ] -- The first rhs. , alternative " :" rhs -- The other rhss. , concatMap (alternative " |") rhss -- The definition footer. 
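-- A fully printed definition therefore has roughly this shape (category
-- "Exp", label "EAdd" and the surrogate token name are assumptions; the exact
-- metavariable numbering depends on the rule's position within its group):
--
--   exp returns [ Calc.Absyn.Exp result ]
--     : p_1_1=exp Surrogate_id_SYMB_0 p_1_2=exp1
--       { $result = new Calc.Absyn.EAdd($p_1_1.result,$p_1_2.result); }
--     | p_2_1=exp1
--       { $result = $p_2_1.result; }
--     ;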
, [ " ;" ] ] where alternative sep (p, a, label) = concat [ [ unwords [ sep , p ] ] , [ unwords [ " {" , a , "}" ] ] , [ unwords [ " #" , antlrRuleLabel l ] | Just l <- [label] ] ] catid = identCat nt normcat = identCat (normCat nt) nt' = getRuleName $ firstLowerCase catid antlrRuleLabel :: Fun -> String antlrRuleLabel fnc | isNilFun fnc = catid ++ "_Empty" | isOneFun fnc = catid ++ "_AppendLast" | isConsFun fnc = catid ++ "_PrependFirst" | isCoercion fnc = "Coercion_" ++ catid | otherwise = getLabelName fnc BNFC-2.9.5/src/BNFC/Backend/Java/CFtoComposVisitor.hs0000644000000000000000000001036607346545000020075 0ustar0000000000000000{- BNF Converter: Java 1.5 Compositional Vistor generator Copyright (C) 2006 Bjorn Bringert Based on CFtoVisitSkel.hs, Copyright (C) 2004-2006 Michael Pellauer -} {-# LANGUAGE OverloadedStrings #-} module BNFC.Backend.Java.CFtoComposVisitor (cf2ComposVisitor) where import Prelude hiding ((<>)) import Data.List (intercalate) import Data.Either (lefts) import BNFC.CF import BNFC.Backend.Java.CFtoJavaAbs15 (typename) import BNFC.Utils ((+++)) import BNFC.Backend.Common.NamedVariables import BNFC.PrettyPrint cf2ComposVisitor :: String -> String -> CF -> String cf2ComposVisitor packageBase packageAbsyn cf = concat [ header , intercalate "\n" $ map (prData packageAbsyn user) groups , "}" ] where user = map fst $ tokenPragmas cf groups = [ g | g@(c,_) <- fixCoercions (ruleGroupsInternals cf) , not (isList c) ] is = map (prInterface packageAbsyn) groups header = unlines $ concat [ [ "package" +++ packageBase ++ ";" , "/** Composition Visitor" , "*/" , "" , "public class ComposVisitor" ++ if null is then "" else " implements" ] , [ intercalate ",\n" $ map (" " ++) is | not $ null is ] , [ "{" ] ] prInterface :: String -> (Cat, [Rule]) -> String prInterface packageAbsyn (cat, _) = q ++ ".Visitor<" ++ q ++ ",A>" where q = packageAbsyn ++ "." ++ identCat cat -- | Traverses a category based on its type. prData :: String -> [UserDef] -> (Cat, [Rule]) -> String prData packageAbsyn user (cat, rules) = unlines [ " /* " ++ identCat cat ++ " */" , render $ vcat $ map (prRule packageAbsyn user cat) rules ] -- | Traverses a standard rule. -- -- >>> prRule "lang.absyn" ["A"] (Cat "B") $ npRule "F" (Cat "B") [Left (Cat "A"), Right "+", Left (ListCat (Cat "B"))] Parsable -- public lang.absyn.B visit(lang.absyn.F p, A arg) -- { -- String a_ = p.a_; -- lang.absyn.ListB listb_ = new lang.absyn.ListB(); -- for (lang.absyn.B x : p.listb_) -- { -- listb_.add(x.accept(this,arg)); -- } -- return new lang.absyn.F(a_, listb_); -- } prRule :: IsFun f => String -> [UserDef] -> Cat -> Rul f -> Doc prRule packageAbsyn user cat (Rule fun _ cats _) | not (isCoercion fun || isDefinedRule fun) = nest 4 $ vcat [ "public " <> qual (identCat cat) <> " visit(" <> cls <> " p, A arg)" , codeblock 2 [ vcat (map (prCat packageAbsyn user) cats') , "return new" <+> cls <> parens (hsep (punctuate "," vnames)) <> ";" ] ] where cats' = lefts $ numVars cats cls = qual $ funName fun qual s = text (packageAbsyn ++ "." ++ s) vnames = map snd cats' prRule _ _ _ _ = empty -- | Traverses a class's instance variables. 
-- -- >>> prCat "lang.absyn" ["A"] (Cat "A", "a_") -- String a_ = p.a_; -- -- >>> prCat "lang.absyn" [] (ListCat (Cat "Integer"), "listinteger_") -- lang.absyn.ListInteger listinteger_ = p.listinteger_; -- -- >>> prCat "lang.absyn" [] (ListCat (Cat "N"), "listn_") -- lang.absyn.ListN listn_ = new lang.absyn.ListN(); -- for (lang.absyn.N x : p.listn_) -- { -- listn_.add(x.accept(this,arg)); -- } -- -- >>> prCat "lang.absyn" [] (Cat "N", "n_") -- lang.absyn.N n_ = p.n_.accept(this, arg); prCat :: String -- ^ Name of package for abstract syntax. -> [UserDef] -- ^ User defined token categories. -> (Cat, Doc) -- ^ Variable category and names. -> Doc -- ^ Code for visiting the variable. prCat packageAbsyn user (cat, nt) | isBasicType user varType || (isList cat && isBasicType user et) = decl var | isList cat = vcat [ decl ("new" <+> text varType <> "()") , "for (" <> text et <> " x : " <> var <> ")" , codeblock 2 [ nt <> ".add(x.accept(this,arg));" ] ] | otherwise = decl (var <> ".accept(this, arg)") where var = "p." <> nt varType = typename packageAbsyn user $ identCat $ normCat cat et = typename packageAbsyn user $ identCat $ normCatOfList cat decl v = text varType <+> nt <+> "=" <+> v <> ";" -- qual s = text (packageAbsyn ++ "." ++ s) -- | Just checks if something is a basic or user-defined type. isBasicType :: [UserDef] -> String -> Bool isBasicType user v = v `elem` (user ++ ["Integer","Character","String","Double"]) BNFC-2.9.5/src/BNFC/Backend/Java/CFtoCup15.hs0000644000000000000000000002146307346545000016152 0ustar0000000000000000{- BNF Converter: Java 1.5 Cup Generator Copyright (C) 2004 Author: Markus Forsberg, Michael Pellauer, Bjorn Bringert Description : This module generates the CUP input file. It follows the same basic structure of CFtoHappy. Author : Michael Pellauer Bjorn Bringert Created : 26 April, 2003 Modified : 5 Aug, 2004 -} module BNFC.Backend.Java.CFtoCup15 ( cf2Cup ) where import Data.List (intercalate) import BNFC.CF import BNFC.Options (RecordPositions(..)) import BNFC.Utils ( (+++) ) import BNFC.Backend.Common.NamedVariables import BNFC.Backend.Java.Utils ( getRuleName ) type Rules = [(NonTerminal,[(Pattern,Action)])] type Pattern = String type Action = String type MetaVar = String --The environment comes from the CFtoJLex cf2Cup :: String -> String -> CF -> RecordPositions -> KeywordEnv -> String cf2Cup packageBase packageAbsyn cf rp env = unlines [ header , declarations packageAbsyn (allParserCats cf) , tokens env , specialToks cf , specialRules cf , prEntryPoint cf , prRules (rulesForCup packageAbsyn cf rp env) ] where header :: String header = unlines [ "// Parser definition for use with Java Cup" , "package" +++ packageBase ++ ";" , "" , "action code {:" , "public java_cup.runtime.ComplexSymbolFactory.Location getLeftLocation(" , " java_cup.runtime.ComplexSymbolFactory.Location ... locations) {" , " for (java_cup.runtime.ComplexSymbolFactory.Location l : locations) {" , " if (l != null) {" , " return l;" , " }" , " }" , " return null;" , "}" , ":}" , "parser code {:" , parseMethod packageAbsyn (firstEntry cf) , "public void syntax_error(java_cup.runtime.Symbol cur_token)" , "{" , " report_error(\"Syntax Error, trying to recover and continue" ++ " parse...\", cur_token);" , "}" , "" , "public void unrecovered_syntax_error(java_cup.runtime.Symbol " ++ "cur_token) throws java.lang.Exception" , "{" , " throw new Exception(\"Unrecoverable Syntax Error\");" , "}" , "" , ":}" ] -- peteg: FIXME JavaCUP can only cope with one entry point AFAIK. 
prEntryPoint :: CF -> String prEntryPoint cf = unlines ["", "start with " ++ identCat (firstEntry cf) ++ ";", ""] -- [ep] -> unlines ["", "start with " ++ ep ++ ";", ""] -- eps -> error $ "FIXME multiple entry points." ++ show eps --This generates a parser method for each entry point. parseMethod :: String -> Cat -> String parseMethod packageAbsyn cat = unlines [ " public" +++ packageAbsyn ++ "." ++ dat +++ "p" ++ cat' ++ "()" ++ " throws Exception" , " {" , " java_cup.runtime.Symbol res = parse();" , " return (" ++ packageAbsyn ++ "." ++ dat ++ ") res.value;" , " }" ] where dat = identCat (normCat cat) cat' = identCat cat --non-terminal types declarations :: String -> [Cat] -> String declarations packageAbsyn ns = unlines (map (typeNT packageAbsyn) ns) where typeNT _nm nt = "nonterminal" +++ packageAbsyn ++ "." ++ identCat (normCat nt) +++ identCat nt ++ ";" --terminal types tokens :: KeywordEnv -> String tokens ts = unlines (map declTok ts) where declTok (s,r) = "terminal" +++ r ++ "; // " ++ s specialToks :: CF -> String specialToks cf = unlines [ ifC catString "terminal String _STRING_;" , ifC catChar "terminal Character _CHAR_;" , ifC catInteger "terminal Integer _INTEGER_;" , ifC catDouble "terminal Double _DOUBLE_;" , ifC catIdent "terminal String _IDENT_;" ] where ifC cat s = if isUsedCat cf (TokenCat cat) then s else "" specialRules:: CF -> String specialRules cf = unlines ["terminal String " ++ name ++ ";" | name <- tokenNames cf] --The following functions are a (relatively) straightforward translation --of the ones in CFtoHappy.hs rulesForCup :: String -> CF -> RecordPositions -> KeywordEnv -> Rules rulesForCup packageAbsyn cf rp env = map mkOne $ ruleGroups cf where mkOne (cat,rules) = constructRule packageAbsyn cf rp env rules cat -- | For every non-terminal, we construct a set of rules. A rule is a sequence of -- terminals and non-terminals, and an action to be performed. constructRule :: String -> CF -> RecordPositions -> KeywordEnv -> [Rule] -> NonTerminal -> (NonTerminal,[(Pattern,Action)]) constructRule packageAbsyn cf rp env rules nt = (nt, [ (p, generateAction packageAbsyn nt (funName $ funRule r) (revM b m) b rp) | r0 <- rules, let (b,r) = if isConsFun (funRule r0) && elem (valCat r0) revs then (True, revSepListRule r0) else (False, r0) (p,m) = generatePatterns env r]) where revM False = id revM True = reverse revs = cfgReversibleCats cf -- Generates a string containing the semantic action. generateAction :: String -> NonTerminal -> Fun -> [MetaVar] -> Bool -- ^ Whether the list should be reversed or not. -- Only used if this is a list rule. -> RecordPositions -- ^ Record line and column info. -> Action generateAction packageAbsyn nt fun ms rev rp | isNilFun f = "RESULT = new " ++ c ++ "();" | isOneFun f = "RESULT = new " ++ c ++ "(); RESULT.addLast(" ++ p_1 ++ ");" | isConsFun f = "RESULT = " ++ p_2 ++ "; " ++ p_2 ++ "." ++ add ++ "(" ++ p_1 ++ ");" | isCoercion f = "RESULT = " ++ p_1 ++ ";" | isDefinedRule f = "RESULT = " ++ packageAbsyn ++ "Def." ++ sanitize f ++ "(" ++ intercalate "," ms ++ ");" | otherwise = "RESULT = new " ++ c ++ "(" ++ intercalate "," ms ++ ");" ++ lineInfo where sanitize = getRuleName f = funName fun c = packageAbsyn ++ "." ++ if isNilFun f || isOneFun f || isConsFun f then identCat (normCat nt) else f p_1 = ms !! 0 p_2 = ms !! 
1 add = if rev then "addLast" else "addFirst" lineInfo = if rp == RecordPositions then case ms of [] -> "\n((" ++ c ++ ")RESULT).line_num = -1;" ++ "\n((" ++ c ++ ")RESULT).col_num = -1;" ++ "\n((" ++ c ++ ")RESULT).offset = -1;" _ -> "\njava_cup.runtime.ComplexSymbolFactory.Location leftLoc = getLeftLocation(" ++ intercalate "," (map (++"xleft") ms) ++ ");" ++ "\nif (leftLoc != null) {" ++ "\n ((" ++ c ++ ")RESULT).line_num = leftLoc.getLine();" ++ "\n ((" ++ c ++ ")RESULT).col_num = leftLoc.getColumn();" ++ "\n ((" ++ c ++ ")RESULT).offset = leftLoc.getOffset();" ++ "\n} else {" ++ "\n ((" ++ c ++ ")RESULT).line_num = -1;" ++ "\n ((" ++ c ++ ")RESULT).col_num = -1;" ++ "\n ((" ++ c ++ ")RESULT).offset = -1;" ++ "\n}" else "" -- | Generate patterns and a set of metavariables indicating -- where in the pattern the non-terminal. -- -- >>> generatePatterns [] (npRule "myfun" (Cat "A") [] Parsable) -- (" /* empty */ ",[]) -- -- >>> generatePatterns [("def", "_SYMB_1")] (npRule "myfun" (Cat "A") [Right "def", Left (Cat "B")] Parsable) -- ("_SYMB_1:p_1 B:p_2 ",["p_2"]) generatePatterns :: KeywordEnv -> Rule -> (Pattern,[MetaVar]) generatePatterns env r = case rhsRule r of [] -> (" /* empty */ ", []) its -> (mkIt 1 its, metas its) where mkIt _ [] = [] mkIt n (i:is) = case i of Left c -> c' ++ ":p_" ++ show (n :: Int) +++ mkIt (n+1) is where c' = case c of TokenCat "Ident" -> "_IDENT_" TokenCat "Integer" -> "_INTEGER_" TokenCat "Char" -> "_CHAR_" TokenCat "Double" -> "_DOUBLE_" TokenCat "String" -> "_STRING_" _ -> identCat c Right s -> case lookup s env of Just x -> (x ++ ":p_" ++ show (n :: Int)) +++ mkIt (n+1) is Nothing -> mkIt n is metas its = ["p_" ++ show i | (i,Left _) <- zip [1 :: Int ..] its] -- We have now constructed the patterns and actions, -- so the only thing left is to merge them into one string. prRules :: Rules -> String prRules [] = [] prRules ((_ , [] ) : rs) = prRules rs --internal rule prRules ((nt, (p,a):ls) : rs) = unwords [ nt', "::=", p, "{:", a, ":}", '\n' : pr ls ] ++ ";\n" ++ prRules rs where nt' = identCat nt pr [] = [] pr ((p,a):ls) = unlines [ unwords [ " |", p, "{:", a , ":}" ] ] ++ pr ls BNFC-2.9.5/src/BNFC/Backend/Java/CFtoFoldVisitor.hs0000644000000000000000000000607207346545000017520 0ustar0000000000000000{- BNF Converter: Java 1.5 Fold Vistor generator Copyright (C) 2006 Bjorn Bringert Based on CFtoVisitSkel.hs, Copyright (C) 2004-2006 Michael Pellauer -} {-# LANGUAGE OverloadedStrings #-} module BNFC.Backend.Java.CFtoFoldVisitor (cf2FoldVisitor) where import Prelude hiding ((<>)) import BNFC.CF import BNFC.Backend.Java.CFtoJavaAbs15 (typename) import BNFC.Utils ((+++)) import BNFC.Backend.Common.NamedVariables import Data.Either (lefts) import BNFC.PrettyPrint cf2FoldVisitor :: String -> String -> CF -> String cf2FoldVisitor packageBase packageAbsyn cf = unlines ["package" +++ packageBase ++ ";", "", "/** Fold Visitor */", "public abstract class FoldVisitor implements AllVisitor {", " public abstract R leaf(A arg);", " public abstract R combine(R x, R y, A arg);", "", concatMap (prData packageAbsyn user) groups, "}"] where user = fst (unzip (tokenPragmas cf)) groups = [ g | g@(c,_) <- fixCoercions (ruleGroupsInternals cf), not (isList c) ] --Traverses a category based on its type. prData :: String -> [UserDef] -> (Cat, [Rule]) -> String prData packageAbsyn user (cat, rules) = unlines [ "/* " ++ identCat cat ++ " */" , concatMap (prRule packageAbsyn user cat) rules ] --traverses a standard rule. 
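-- For a constructor with two tree-valued fields 'prRule' below produces,
-- e.g. (package "Calc.Absyn" and label "EAdd" assumed):
--
--   public R visit(Calc.Absyn.EAdd p, A arg) {
--     R r = leaf(arg);
--     r = combine(p.exp_1.accept(this, arg), r, arg);
--     r = combine(p.exp_2.accept(this, arg), r, arg);
--     return r;
--   }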
prRule :: String -> [UserDef] -> Cat -> Rule -> String prRule packageAbsyn user _ (Rule fun _ cats _) | not (isCoercion fun || isDefinedRule fun) = unlines $ [" public R visit(" ++ cls ++ " p, A arg) {", " R r = leaf(arg);"] ++ map (" "++) visitVars ++ [" return r;", " }"] where cats' = lefts $ numVars cats cls = packageAbsyn ++ "." ++ funName fun visitVars = lines $ render $ vcat $ map (prCat packageAbsyn user) cats' prRule _ _ _ _ = "" -- | Traverses a class's instance variables. -- >>> prCat "" ["A"] (Cat "A", "a_") -- -- >>> prCat "" [] (ListCat (Cat "Integer"), "listinteger_") -- -- >>> prCat "absyn" [] (ListCat (Cat "N"), "listn_") -- for (absyn.N x : p.listn_) -- { -- r = combine(x.accept(this, arg), r, arg); -- } -- >>> prCat "absyn" [] (Cat "N", "n_") -- r = combine(p.n_.accept(this, arg), r, arg); prCat :: String -- ^ Absyn package name. -> [UserDef] -- ^ User-defined token categories. -> (Cat, Doc) -- ^ Variable category and name -> Doc -- ^ Code for visiting the variable prCat packageAbsyn user (cat,nt) | isBasicType user varType || (isList cat && isBasicType user et) = empty | isList cat = vcat [ "for (" <> text et <> " x : " <> var <> ")" , codeblock 2 [ "r = combine(x.accept(this, arg), r, arg);" ] ] | otherwise = "r = combine(" <> var <> ".accept(this, arg), r, arg);" where var = "p." <> nt varType = typename packageAbsyn user $ identCat $ normCat cat et = typename packageAbsyn user $ identCat $ normCatOfList cat --Just checks if something is a basic or user-defined type. isBasicType :: [UserDef] -> String -> Bool isBasicType user v = v `elem` (user ++ ["Integer","Character","String","Double"]) BNFC-2.9.5/src/BNFC/Backend/Java/CFtoJLex15.hs0000644000000000000000000003024407346545000016262 0ustar0000000000000000{-# LANGUAGE NoImplicitPrelude #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE TupleSections #-} {- BNF Converter: Java JLex generator Copyright (C) 2004 Author: Michael Pellauer Description : This module generates the JLex input file. This file is quite different than Alex or Flex. Author : Michael Pellauer Bjorn Bringert Created : 25 April, 2003 Modified : 4 Nov, 2004 -} module BNFC.Backend.Java.CFtoJLex15 ( cf2jlex ) where import Prelude hiding ((<>)) import BNFC.CF import BNFC.Backend.Common.NamedVariables import BNFC.Backend.C.CFtoFlexC ( commentStates ) import BNFC.Backend.Java.RegToJLex import BNFC.Options ( JavaLexerParser(..), RecordPositions(..) ) import BNFC.Utils ( cstring ) import Text.PrettyPrint -- | The environment is returned for further use in the parser. cf2jlex :: JavaLexerParser -> RecordPositions -> String -> CF -> (Doc, SymEnv) cf2jlex jflex rp packageBase cf = (, env) . vcat $ [ prelude jflex rp packageBase , cMacros cf , lexSymbols jflex env , restOfJLex jflex rp cf ] where env = zipWith (\ s n -> (s, "_SYMB_" ++ show n)) (cfgSymbols cf ++ reservedWords cf) [(0 :: Int)..] -- | File prelude. 
prelude :: JavaLexerParser -> RecordPositions -> String -> Doc prelude jflex rp packageBase = vcat [ hsep [ "// Lexer definition for use with", lexerName ] , "package" <+> text packageBase <> ";" , "" , "import java_cup.runtime.SymbolFactory;" , "import java_cup.runtime.ComplexSymbolFactory;" , "%%" , "%cup" , "%unicode" , (if rp == RecordPositions then vcat [ "%line" , (if jflex == JFlexCup then "%column" else "") , "%char" ] else "") , "%public" , "%{" , nest 2 $ vcat [ "String pstring = new String();" , "final int unknown = -1;" , "ComplexSymbolFactory.Location left = new ComplexSymbolFactory.Location(unknown, unknown);" , "ComplexSymbolFactory cf = new ComplexSymbolFactory();" , "public SymbolFactory getSymbolFactory() { return cf; }" , positionDeclarations , "public int line_num() { return (yyline+1); }" , "public ComplexSymbolFactory.Location left_loc() {" , if rp == RecordPositions then " return new ComplexSymbolFactory.Location(yyline+1, yycolumn+1, (int)yychar);" else " return left;" , "}" , "public ComplexSymbolFactory.Location right_loc() {" , " ComplexSymbolFactory.Location left = left_loc();" , (if rp == RecordPositions then "return new ComplexSymbolFactory.Location(left.getLine(), left.getColumn()+yylength(), (int)(left.getOffset()+yylength()));" else "return left;") , "}" , "public String buff()" <+> braces (if jflex == JFlexCup then "return new String(zzBuffer,zzCurrentPos,10).trim();" else "return new String(yy_buffer,yy_buffer_index,10).trim();") ] , "%}" , if jflex /= JFlexCup then vcat ["%eofval{" , " return cf.newSymbol(\"EOF\", sym.EOF, left_loc(), left_loc());" , "%eofval}"] else "" ] where lexerName = case jflex of JFlexCup -> "JFlex" JLexCup -> "JLex" Antlr4 -> undefined positionDeclarations -- JFlex always defines yyline, yychar, yycolumn, even if unused. | jflex == JFlexCup = "" | rp == RecordPositions = "int yycolumn = unknown - 1;" | otherwise = vcat -- subtract one so that one based numbering still ends up with unknown. [ "int yyline = unknown - 1;" , "int yycolumn = unknown - 1;" , "int yychar = unknown;" ] --For now all categories are included. --Optimally only the ones that are used should be generated. cMacros :: CF -> Doc cMacros cf = vcat $ concat [ [ "LETTER = ({CAPITAL}|{SMALL})" , "CAPITAL = [A-Z\\xC0-\\xD6\\xD8-\\xDE]" , "SMALL = [a-z\\xDF-\\xF6\\xF8-\\xFF]" , "DIGIT = [0-9]" , "IDENT = ({LETTER}|{DIGIT}|['_])" ] , map (text . ("%state " ++)) $ take (numberOfBlockCommentForms cf) commentStates , [ "%state CHAR" , "%state CHARESC" , "%state CHAREND" , "%state STRING" , "%state ESCAPED" , "%%" ] ] -- | -- >>> lexSymbols JLexCup [("foo","bar")] -- foo { return cf.newSymbol("", sym.bar, left_loc(), right_loc()); } -- -- >>> lexSymbols JLexCup [("\\","bar")] -- \\ { return cf.newSymbol("", sym.bar, left_loc(), right_loc()); } -- -- >>> lexSymbols JLexCup [("/","bar")] -- / { return cf.newSymbol("", sym.bar, left_loc(), right_loc()); } -- -- >>> lexSymbols JFlexCup [("/","bar")] -- \/ { return cf.newSymbol("", sym.bar, left_loc(), right_loc()); } -- -- >>> lexSymbols JFlexCup [("~","bar")] -- \~ { return cf.newSymbol("", sym.bar, left_loc(), right_loc()); } -- lexSymbols :: JavaLexerParser -> SymEnv -> Doc lexSymbols jflex ss = vcat $ map transSym ss where transSym (s,r) = "" <> text (escapeChars s) <> " { return cf.newSymbol(\"\", sym." 
<> text r <> ", left_loc(), right_loc()); }" --Helper function that escapes characters in strings escapeChars :: String -> String escapeChars = concatMap (escapeChar jflex) restOfJLex :: JavaLexerParser -> RecordPositions -> CF -> Doc restOfJLex jflex rp cf = vcat [ lexComments (comments cf) , "" , userDefTokens , ifC catString strStates , ifC catChar chStates , ifC catDouble "{DIGIT}+\".\"{DIGIT}+(\"e\"(\\-)?{DIGIT}+)? { return cf.newSymbol(\"\", sym._DOUBLE_, left_loc(), right_loc(), Double.valueOf(yytext())); }" , ifC catInteger "{DIGIT}+ { return cf.newSymbol(\"\", sym._INTEGER_, left_loc(), right_loc(), Integer.valueOf(yytext())); }" , ifC catIdent "{LETTER}{IDENT}* { return cf.newSymbol(\"\", sym._IDENT_, left_loc(), right_loc(), yytext().intern()); }" , "[ \\t\\r\\n\\f] { /* ignore white space. */ }" , if jflex == JFlexCup then "<> { return cf.newSymbol(\"EOF\", sym.EOF, left_loc(), left_loc()); }" else "" , if rp == RecordPositions then ". { throw new Error(\"Illegal Character <\"+yytext()+\"> at \"+(yyline+1)" <> (if jflex == JFlexCup then "+\":\"+(yycolumn+1)+\"(\"+yychar+\")\"" else "") <> "); }" else ". { throw new Error(\"Illegal Character <\"+yytext()+\">\"); }" ] where ifC :: TokenCat -> Doc -> Doc ifC cat s = if isUsedCat cf (TokenCat cat) then s else "" userDefTokens = vcat [ "" <> text (printRegJLex jflex exp) <+> "{ return cf.newSymbol(\"\", sym." <> text name <> ", left_loc(), right_loc(), yytext().intern()); }" | (name, exp) <- tokenPragmas cf ] strStates = vcat --These handle escaped characters in Strings. [ "\"\\\"\" { left = left_loc(); yybegin(STRING); }" , "\\\\ { yybegin(ESCAPED); }" , "\\\" { String foo = pstring; pstring = new String(); yybegin(YYINITIAL); return cf.newSymbol(\"\", sym._STRING_, left, right_loc(), foo.intern()); }" , ". { pstring += yytext(); }" , "\\r\\n|\\r|\\n { throw new Error(\"Unterminated string on line \" + left.getLine() " <> (if jflex == JFlexCup then "+ \" begining at column \" + left.getColumn()" else "") <> "); }" , if jflex == JFlexCup then "<> { throw new Error(\"Unterminated string at EOF, beginning at \" + left.getLine() + \":\" + left.getColumn()); }" else "" , "n { pstring += \"\\n\"; yybegin(STRING); }" , "t { pstring += \"\\t\"; yybegin(STRING); }" , "r { pstring += \"\\r\"; yybegin(STRING); }" , "f { pstring += \"\\f\"; yybegin(STRING); }" , "\\\" { pstring += \"\\\"\"; yybegin(STRING); }" , "\\\\ { pstring += \"\\\\\"; yybegin(STRING); }" , ". { pstring += yytext(); yybegin(STRING); }" , "\\r\\n|\\r|\\n { throw new Error(\"Unterminated string on line \" + left.getLine() " <> (if jflex == JFlexCup then "+ \" beginning at column \" + left.getColumn()" else "") <> "); }" , if jflex == JFlexCup then "<> { throw new Error(\"Unterminated string at EOF, beginning at \" + left.getLine() + \":\" + left.getColumn()); }" else "" ] chStates = vcat --These handle escaped characters in Chars. 
[ "\"'\" { left = left_loc(); yybegin(CHAR); }" , "\\\\ { yybegin(CHARESC); }" , "[^'] { yybegin(CHAREND); return cf.newSymbol(\"\", sym._CHAR_, left, right_loc(), Character.valueOf(yytext().charAt(0))); }" , "\\r\\n|\\r|\\n { throw new Error(\"Unterminated character literal on line \" + left.getLine() " <> (if jflex == JFlexCup then "+ \" beginning at column \" + left.getColumn()" else "") <> "); }" , if jflex == JFlexCup then "<> { throw new Error(\"Unterminated character literal at EOF, beginning at \" + left.getLine() + \":\" + left.getColumn()); }" else "" , "n { yybegin(CHAREND); return cf.newSymbol(\"\", sym._CHAR_, left, right_loc(), Character.valueOf('\\n')); }" , "t { yybegin(CHAREND); return cf.newSymbol(\"\", sym._CHAR_, left, right_loc(), Character.valueOf('\\t')); }" , "r { yybegin(CHAREND); return cf.newSymbol(\"\", sym._CHAR_, left, right_loc(), Character.valueOf('\\r')); }" , "f { yybegin(CHAREND); return cf.newSymbol(\"\", sym._CHAR_, left, right_loc(), Character.valueOf('\\f')); }" , ". { yybegin(CHAREND); return cf.newSymbol(\"\", sym._CHAR_, left, right_loc(), Character.valueOf(yytext().charAt(0))); }" , "\\r\\n|\\r|\\n { throw new Error(\"Unterminated character literal on line \" + left.getLine() " <> (if jflex == JFlexCup then "+ \" beginning at column \" + left.getColumn()" else "") <> "); }" , if jflex == JFlexCup then "<> { throw new Error(\"Unterminated character literal at EOF, beginning at \" + left.getLine() + \":\" + left.getColumn()); }" else "" , "\"'\" {yybegin(YYINITIAL);}" , "\\r\\n|\\r|\\n { throw new Error(\"Unterminated character literal on line \" + left.getLine() " <> (if jflex == JFlexCup then "+ \" beginning at column \" + left.getColumn()" else "") <> "); }" , if jflex == JFlexCup then "<> { throw new Error(\"Unterminated character literal at EOF, beginning at \" + left.getLine() + \":\" + left.getColumn()); }" else "" ] lexComments :: ([(String, String)], [String]) -> Doc lexComments (m,s) = vcat $ concat [ map lexSingleComment s , zipWith lexMultiComment m commentStates ] -- | Create lexer rule for single-line comments. -- -- >>> lexSingleComment "--" -- "--"[^\n]* { /* skip */ } -- -- >>> lexSingleComment "\"" -- "\""[^\n]* { /* skip */ } lexSingleComment :: String -> Doc lexSingleComment c = "" <> cstring c <> "[^\\n]* { /* skip */ }" -- | Create lexer rule for multi-lines comments. -- -- There might be a possible bug here if a language includes 2 multi-line -- comments. They could possibly start a comment with one character and end it -- with another. However this seems rare. -- -- >>> lexMultiComment ("{-", "-}") "COMMENT" -- "{-" { yybegin(COMMENT); } -- "-}" { yybegin(YYINITIAL); } -- . { /* skip */ } -- [\n] { /* skip */ } -- -- >>> lexMultiComment ("\"'", "'\"") "COMMENT" -- "\"'" { yybegin(COMMENT); } -- "'\"" { yybegin(YYINITIAL); } -- . { /* skip */ } -- [\n] { /* skip */ } -- lexMultiComment :: (String, String) -> String -> Doc lexMultiComment (b,e) comment = vcat [ "" <> cstring b <+> "{ yybegin(" <> text comment <> "); }" , commentTag <> cstring e <+> "{ yybegin(YYINITIAL); }" , commentTag <> ". 
{ /* skip */ }" , commentTag <> "[\\n] { /* skip */ }" ] where commentTag = text $ "<" ++ comment ++ ">" BNFC-2.9.5/src/BNFC/Backend/Java/CFtoJavaAbs15.hs0000644000000000000000000003240507346545000016730 0ustar0000000000000000{-# LANGUAGE LambdaCase #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE TupleSections #-} {- BNF Converter: Java 1.5 Abstract Syntax Copyright (C) 2004 Author: Michael Pellauer, Bjorn Bringert Description : This module generates the Java Abstract Syntax It uses the BNFC.Backend.Common.NamedVariables module for variable naming. It returns a list of file names, and the contents to be written into that file. (In Java public classes must go in their own file.) The generated classes also support the Visitor Design Pattern. Author : Michael Pellauer Bjorn Bringert Created : 24 April, 2003 Modified : 16 June, 2004 -} module BNFC.Backend.Java.CFtoJavaAbs15 (cf2JavaAbs, typename, cat2JavaType) where import Data.Bifunctor ( first ) import Data.Char ( isUpper, toLower ) import Data.Function ( on ) import Data.List ( findIndices, intercalate ) import Data.Maybe ( mapMaybe ) import System.FilePath ( () ) import Text.PrettyPrint as P import BNFC.CF import BNFC.Options ( RecordPositions(..) ) import BNFC.TypeChecker ( buildContext, ctxTokens, isToken ) import BNFC.Utils ( (+++), (++++), unless ) import BNFC.Backend.Common.NamedVariables ( UserDef, showNum ) import BNFC.Backend.Java.Utils ( getRuleName ) --Produces abstract data types in Java. --These follow Appel's "non-object oriented" version. --They also allow users to use the Visitor design pattern. type IVar = (String, Int, String) -- ^ The type of an instance variable, -- a number unique to that type, -- and an optional name (handles typedefs). -- | The result is a list of files (without file extension) -- which must be written to disk. -- The tuple is (FileName, FileContents) cf2JavaAbs :: FilePath -- ^ Directory for AST without trailing 'pathSeparator'. 
-> String -> String -> CF -> RecordPositions -> [(FilePath, String)] cf2JavaAbs dirAbsyn packageBase packageAbsyn cf rp = concat [ unless (null defs) [ (dirAbsyn ++ "Def", unlines deftext) ] , map (first mkPath) $ concatMap (prData rp header packageAbsyn user) rules ] where header = "package " ++ packageAbsyn ++ ";\n" user = [ n | (n,_) <- tokenPragmas cf ] rules = getAbstractSyntax cf defs = definitions cf deftext= concat [ [ "package " ++ packageBase ++ ";" , "" , "public class AbsynDef {" , "" , " public static > A cons(B x, A xs) {" , " xs.addFirst(x);" , " return xs;" , " }" , "" ] , definedRules defs packageAbsyn cf , [ "}"] ] mkPath :: String -> FilePath mkPath s = dirAbsyn s definedRules :: [Define] -> String -> CF -> [String] definedRules defs packageAbsyn cf = map rule defs where ctx = buildContext cf rule (Define f args e t) = unlines $ map (" " ++) $ [ "public static " ++ javaType t ++ " " ++ sanitize (funName f) ++ "(" ++ intercalate ", " (map javaArg args) ++ ") {" , " return " ++ javaExp (map fst args) e ++ ";" , "}" ] where sanitize = getRuleName javaType :: Base -> String javaType = \case ListT (BaseT x) -> concat [ packageAbsyn, ".List", x ] BaseT x -> typename packageAbsyn (ctxTokens ctx) x ListT ListT{} -> undefined -- ListT t -> javaType t -- undefined javaArg :: (String, Base) -> String javaArg (x,t) = javaType t ++ " " ++ x javaExp :: [String] -> Exp -> String javaExp args = \case Var x -> x -- argument App "[]" (FunT _ t) [] -> callQ (identType t) [] App "(:)" _ es -> call "cons" es App t _ [e] | isToken t ctx -> javaExp args e -- wraps new String App x _ es | isUpper (head x) -> callQ x es | otherwise -> call (sanitize x) es -- -- | x `elem` args -> call x es LitInt n -> "Integer.valueOf(" ++ show n ++ ")" LitDouble x -> "Double.valueOf(" ++ show x ++ ")" LitChar c -> "Character.valueOf(" ++ show c ++ ")" LitString s -> "String.valueOf(" ++ show s ++ ")" where call x es = x ++ "(" ++ intercalate ", " (map (javaExp args) es) ++ ")" callQ = call . qualify qualify x = "new " ++ packageAbsyn ++ "." ++ x -- | Generates a (possibly abstract) category class, and classes for all its rules. prData :: RecordPositions -> String -> String -> [UserDef] -> Data ->[(String, String)] prData rp header packageAbsyn user (cat, rules) = categoryClass ++ mapMaybe (prRule rp header packageAbsyn funs user cat) rules where funs = map fst rules categoryClass | catToStr cat `elem` funs = [] -- the catgory is also a function, skip abstract class | otherwise = [(cls, header ++++ unlines [ "public abstract class" +++ cls +++ "implements java.io.Serializable {", " public abstract R accept(" ++ cls ++ ".Visitor v, A arg);", prVisitor packageAbsyn funs, "}" ])] where cls = identCat cat prVisitor :: String -> [String] -> String prVisitor packageAbsyn funs = unlines [ " public interface Visitor {", unlines (map prVisitFun funs), " }" ] where prVisitFun f = " public R visit(" ++ packageAbsyn ++ "." ++ f ++ " p, A arg);" -- | Generates classes for a rule, depending on what type of rule it is. prRule :: RecordPositions -- ^ Include line number info in generated classes. -> String -- ^ Header. -> String -- ^ Abstract syntax package name. -> [String] -- ^ Names of all constructors in the category. -> [UserDef] -> Cat -> (Fun, [Cat]) -> Maybe (String, String) prRule rp h packageAbsyn funs user c (fun, cats) | isNilFun fun || isOneFun fun = Nothing -- these are not represented in the Absyn | isConsFun fun = Just . (fun',) . unlines $ -- this is the linked list case. 
[ h , unwords [ "public class", fun', "extends", cat2JavaTypeTopList user c, "{" ] , "}" ] | otherwise = Just . (fun,) . unlines $ -- a standard rule [ h , unwords [ "public class", fun, ext, "{" ] , render $ nest 2 $ vcat [ prInstVars rp vs , prConstructor fun user vs cats ] , prAccept packageAbsyn c fun , prEquals packageAbsyn fun vs , prHashCode packageAbsyn fun vs , if isAlsoCategory then prVisitor packageAbsyn funs else "" , "}" ] where vs = getVars cats user fun' = identCat (normCat c) isAlsoCategory = fun == catToStr c --This handles the case where a LBNF label is the same as the category. ext = if isAlsoCategory then "" else " extends" +++ identCat c -- | The standard accept function for the Visitor pattern. prAccept :: String -> Cat -> String -> String prAccept pack cat _ = "\n public R accept(" ++ pack ++ "." ++ catToStr cat ++ ".Visitor v, A arg) { return v.visit(this, arg); }\n" -- | Creates the equals() method. prEquals :: String -> String -> [IVar] -> String prEquals pack fun vs = unlines $ map (" "++) $ ["public boolean equals(java.lang.Object o) {", " if (this == o) return true;", " if (o instanceof " ++ fqn ++ ") {"] ++ (if null vs then [" return true;"] else [" " ++ fqn +++ "x = ("++fqn++")o;", " return " ++ checkKids ++ ";"]) ++ [" }", " return false;", "}"] where fqn = pack++"."++fun checkKids = intercalate " && " $ map checkKid vs checkKid iv = "this." ++ v ++ ".equals(x." ++ v ++ ")" where v = render (iVarName iv) -- | Creates the hashCode() method. prHashCode :: String -> String -> [IVar] -> String prHashCode _ _ vs = unlines $ map (" "++) ["public int hashCode() {", " return " ++ hashKids vs ++ ";", "}" ] where aPrime = "37" hashKids [] = aPrime hashKids (v:vs) = hashKids_ (hashKid v) vs hashKids_ = foldl (\r v -> aPrime ++ "*" ++ "(" ++ r ++ ")+" ++ hashKid v) hashKid iv = "this." ++ render (iVarName iv) ++ ".hashCode()" -- | A class's instance variables. -- -- >>> prInstVars NoRecordPositions [("A",1,""), ("B",1,""), ("A",2,"abc")] -- public final A _1, abc_2; -- public final B _1; -- -- >>> prInstVars RecordPositions [("A",1,""), ("B",1,""), ("A",2,"abc")] -- public final A _1, abc_2; -- public final B _1; -- public int line_num, col_num, offset; prInstVars :: RecordPositions -> [IVar] -> Doc prInstVars rp [] = case rp of RecordPositions -> "public int line_num, col_num, offset;" NoRecordPositions -> empty prInstVars rp vars@((t,_,_):_) = "public" <+> "final" <+> text t <+> uniques P.<> ";" $$ prInstVars rp vs' where (uniques, vs') = prUniques t vars --these functions group the types together nicely prUniques :: String -> [IVar] -> (Doc, [IVar]) prUniques t vs = (prVars vs (findIndices (\(y,_,_) -> y == t) vs), remType t vs) prVars vs = hsep . punctuate comma . map (iVarName . (vs!!)) remType :: String -> [IVar] -> [IVar] remType _ [] = [] remType t ((t2,n,nm):ts) | t == t2 = remType t ts | otherwise = (t2,n,nm) : remType t ts -- | Convert IVar to java name. -- -- >>> iVarName ("A",1,"abc") -- abc_1 -- -- >>> iVarName ("C", 2, "") -- _2 -- -- >>> iVarName ("Integer", 0, "integer") -- integer_ iVarName :: IVar -> Doc iVarName (_,n,nm) = text (varName nm) P.<> text (showNum n) -- | The constructor just assigns the parameters to the corresponding instance -- variables. 
-- -- >>> prConstructor "bla" [] [("A",1,"a"),("B",1,""),("A",2,"")] [Cat "A",Cat "B", Cat "C"] -- public bla(A p1, B p2, C p3) { a_1 = p1; _ = p2; _2 = p3; } -- -- >>> prConstructor "EInt" [] [("Integer",0,"integer")] [Cat "Integer"] -- public EInt(Integer p1) { integer_ = p1; } prConstructor :: String -> [UserDef] -> [IVar] -> [Cat] -> Doc prConstructor c u vs cats = "public" <+> text c P.<> parens (interleave types params) <+> "{" <+> text (prAssigns vs params) P.<> "}" where (types, params) = unzip (prParams cats u (length cats) (length cats+1)) interleave xs ys = hsep $ punctuate "," $ zipWith ((<+>) `on` text) xs ys -- | Prints the parameters to the constructors. prParams :: [Cat] -> [UserDef] -> Int -> Int -> [(String,String)] prParams cs user n m = zipWith pr cs [m-n, m-n+1 ..] where pr c k = (typename "" user (identCat c), 'p' : show k) -- | This algorithm peeks ahead in the list so we don't use @map@ or @fold@. prAssigns :: [IVar] -> [String] -> String prAssigns [] _ = [] prAssigns _ [] = [] prAssigns ((t,n,nm):vs) (p:ps) = if n == 1 then case findIndices (\x -> case x of (l,_,_) -> l == t) vs of [] -> varName nm +++ "=" +++ p ++ ";" +++ prAssigns vs ps _ -> varName nm ++ showNum n +++ "=" +++ p ++ ";" +++ prAssigns vs ps else varName nm ++ showNum n +++ "=" +++ p ++ ";" +++ prAssigns vs ps -- | Different than the standard ''BNFC.Backend.Common.NamedVariables'' version -- because of the user-defined types. getVars :: [Cat] -> [UserDef] -> [IVar] getVars cs user = reverse $ singleToZero $ foldl addVar [] (map identCat cs) where addVar is c = (c', n, c):is where c' = typename "" user c n = maximum (1:[n'+1 | (_,n',c'') <- is, c'' == c]) singleToZero is = [ (t,n',nm) | (t,n,nm) <- is , let n' = if length [n | (_,_,n) <- is, n == nm] == 1 then 0 else n ] varName :: String -- ^ category name -> String -- ^ Variable name varName c = map toLower c ++ "_" -- | This makes up for the fact that there's no typedef in Java. typename :: String -- ^ Qualification (can be empty). -> [UserDef] -- ^ User-defined token names. -> String -- ^ Category name. -> String typename q user t | t == "Ident" = "String" | t == "Char" = "Character" | t == "Double" = "Double" | t == "Integer" = "Integer" | t == "String" = "String" | t `elem` user = "String" | null q = t | otherwise = q ++ "." ++ t -- | Print the Java type corresponding to a category. cat2JavaType :: [UserDef] -> Cat -> String cat2JavaType user = loop where loop = \case ListCat c -> "List" ++ loop c -- ListCat c -> "java.util.LinkedList<" ++ loop c ++ ">" c -> typename "" user $ identCat $ normCat c -- | Print the Java type corresponding to a category. -- The top list is printed as @java.util.LinkedList<...>@. cat2JavaTypeTopList :: [UserDef] -> Cat -> String cat2JavaTypeTopList user = \case ListCat c -> "java.util.LinkedList<" ++ cat2JavaType user c ++ ">" c -> cat2JavaType user c BNFC-2.9.5/src/BNFC/Backend/Java/CFtoJavaPrinter15.hs0000644000000000000000000003467407346545000017660 0ustar0000000000000000{-# LANGUAGE LambdaCase #-} {-# LANGUAGE OverloadedStrings #-} {- BNF Converter: Java Pretty Printer generator Copyright (C) 2004 Author: Michael Pellauer, Bjorn Bringert Description : This module generates the Java Pretty Printer class. In addition, since there's no good way to display a class heirarchy (toString() doesn't count) in Java, it generates a method that displays the Abstract Syntax in a way similar to Haskell. This uses Appel's method and may serve as a useful example to those who wish to use it. 
Author : Michael Pellauer (pellauer@cs.chalmers.se), Bjorn Bringert (bringert@cs.chalmers.se) Created : 24 April, 2003 Modified : 9 Aug, 2004 Added string buffer for efficiency (Michael, August 03) -} module BNFC.Backend.Java.CFtoJavaPrinter15 ( cf2JavaPrinter ) where import Prelude hiding ((<>)) import Data.Bifunctor ( second ) import Data.Char ( toLower, isSpace ) import Data.Either ( lefts ) import Data.List ( intersperse ) import BNFC.CF import BNFC.PrettyPrint import BNFC.Utils ( (+++), for, unless, unlessNull, uniqOn ) import BNFC.Backend.Common ( switchByPrecedence ) import BNFC.Backend.Common.NamedVariables import BNFC.Backend.Java.CFtoJavaAbs15 --Produces the PrettyPrinter class. --It will generate two methods "print" and "show" --print is the actual pretty printer for linearization. --show produces a Haskell-style syntax that can be extremely useful --especially for testing parser correctness. cf2JavaPrinter :: String -> String -> CF -> String cf2JavaPrinter packageBase packageAbsyn cf = unlines [ header, prEntryPoints packageAbsyn cf, unlines (map (prData packageAbsyn user) groups), unlines (map (shData packageAbsyn user) groups), footer ] where user = [n | (n,_) <- tokenPragmas cf] groups = fixCoercions (ruleGroupsInternals cf) header = unlines [ "package" +++ packageBase ++ ";", "", "public class PrettyPrinter", "{", " //For certain applications increasing the initial size of the buffer may improve performance.", " private static final int INITIAL_BUFFER_SIZE = 128;", " private static final int INDENT_WIDTH = 2;", " //You may wish to change the parentheses used in precedence.", " private static final String _L_PAREN = new String(\"(\");", " private static final String _R_PAREN = new String(\")\");", prRender ] footer = unlines [ --later only include used categories " private static void pp(Integer n, int _i_) { buf_.append(n); buf_.append(\" \"); }", " private static void pp(Double d, int _i_) { buf_.append(String.format(java.util.Locale.ROOT, \"%.15g \", d)); }", " private static void pp(String s, int _i_) { buf_.append(s); buf_.append(\" \"); }", " private static void pp(Character c, int _i_) { buf_.append(\"'\" + escape(c.toString()) + \"'\"); buf_.append(\" \"); }", " private static void sh(Integer n) { render(n.toString()); }", " private static void sh(Double d) { render(String.format(java.util.Locale.ROOT, \"%.15g\", d)); }", " private static void sh(Character c) { render(\"'\" + escape(c.toString()) + \"'\"); }", " private static void sh(String s) { printQuoted(s); }", "", " private static void printQuoted(String s) { render(\"\\\"\" + escape(s) + \"\\\"\"); }", "", " public static String escape(String s) {", " if (s == null) return null;", " return s.replace(\"\\\\\", \"\\\\\\\\\")", " .replace(\"\\t\", \"\\\\t\")", " .replace(\"\\b\", \"\\\\b\")", " .replace(\"\\n\", \"\\\\n\")", " .replace(\"\\r\", \"\\\\r\")", " .replace(\"\\f\", \"\\\\f\")", " .replace(\"\\\"\", \"\\\\\\\"\");", " }", "", " private static void indent()", " {", " int n = _n_;", " while (n > 0)", " {", " buf_.append(\' \');", " n--;", " }", " }", "", " private static void backup()", " {", " int prev = buf_.length() - 1;", " if (prev >= 0 && buf_.charAt(prev) == ' ')", " buf_.setLength(prev);", " }", "", " private static void trim()", " {", " // Trim initial spaces", " int end = 0;", " int len = buf_.length();", " while (end < len && buf_.charAt(end) == ' ')", " end++; ", " buf_.delete(0, end);", " // Trim trailing spaces", " removeTrailingSpaces();", " }", "", " private static void 
removeTrailingSpaces()", " {", " int end = buf_.length();", " while (end > 0 && buf_.charAt(end-1) == ' ')", " end--;", " buf_.setLength(end);", " }", "", " private static void removeTrailingWhitespace()", " {", " int end = buf_.length();", " while (end > 0 && (buf_.charAt(end-1) == ' ' || buf_.charAt(end-1) == '\\n'))", " end--;", " buf_.setLength(end);", " }", "", " private static void onEmptyLine()", " {", " removeTrailingSpaces();", " int len = buf_.length();", " if (len > 0 && buf_.charAt(len-1) != '\\n') buf_.append(\"\\n\");", " indent();", " }", "", " private static int _n_ = 0;", " private static StringBuilder buf_ = new StringBuilder(INITIAL_BUFFER_SIZE);", "}" ] --An extremely simple renderer for terminals. prRender :: String prRender = unlines [ " //You may wish to change render", " private static void render(String s)", " {", " if (s.equals(\"{\"))", " {", " onEmptyLine();", " buf_.append(s);", " _n_ = _n_ + INDENT_WIDTH;", " buf_.append(\"\\n\");", " indent();", " }", " else if (s.equals(\"(\") || s.equals(\"[\"))", " buf_.append(s);", " else if (s.equals(\")\") || s.equals(\"]\"))", " {", " removeTrailingWhitespace();", " buf_.append(s);", " buf_.append(\" \");", " }", " else if (s.equals(\"}\"))", " {", " _n_ = _n_ - INDENT_WIDTH;", " onEmptyLine();", " buf_.append(s);", " buf_.append(\"\\n\");", " indent();", " }", " else if (s.equals(\",\"))", " {", " removeTrailingWhitespace();", " buf_.append(s);", " buf_.append(\" \");", " }", " else if (s.equals(\";\"))", " {", " removeTrailingWhitespace();", " buf_.append(s);", " buf_.append(\"\\n\");", " indent();", " }", " else if (s.equals(\"\")) return;", " else if (s.trim().equals(\"\"))", " {", " backup();", " buf_.append(s);", " }", " else", " {", " buf_.append(s);", " buf_.append(\" \");", " }", " }" ] prEntryPoints :: String -> CF -> String prEntryPoints packageAbsyn cf = msg ++ concatMap prEntryPoint (allCatsNorm cf) ++ msg2 where msg = " // print and show methods are defined for each category.\n" msg2 = " /*** You shouldn't need to change anything beyond this point. ***/\n" prEntryPoint cat = unlines [ " public static String print(" ++ packageAbsyn ++ "." ++ cat' ++ " foo)", " {", " pp(foo, 0);", " trim();", " String temp = buf_.toString();", " buf_.delete(0,buf_.length());", " return temp;", " }", " public static String show(" ++ packageAbsyn ++ "." ++ cat' ++ " foo)", " {", " sh(foo);", " String temp = buf_.toString();", " buf_.delete(0,buf_.length());", " return temp;", " }" ] where cat' = identCat cat prData :: String -> [UserDef] -> (Cat, [Rule]) -> String prData packageAbsyn user (cat, rules) | isList cat = unlines $ concat [ [ " private static void pp(" ++ packageAbsyn ++ "." ++ dat ++ " foo, int _i_)" , " {" , " pp" ++ dat ++ "(foo.iterator(), _i_);" , " }" , "" , " private static void pp" ++ dat ++ "(java.util.Iterator<" ++ et ++ "> it, int _i_)" , " {" ] , map (" " ++) $ prList dat et rules , [ " }" , "" ] ] | otherwise = unlines $ [ " private static void pp(" ++ packageAbsyn ++ "." ++ dat +++ "foo, int _i_)" , " {" , concat (addElse $ map (prRule packageAbsyn) rules) , " }" ] where dat = identCat (normCat cat) et = typename packageAbsyn user $ identCat $ normCatOfList cat addElse = map (" " ++) . intersperse "else " . filter (not . null) . map (dropWhile isSpace) prRule :: String -> Rule -> String prRule packageAbsyn r@(Rule f _c cats _) | not (isCoercion f || isDefinedRule f) = concat [ " if (foo instanceof" +++ packageAbsyn ++ "." ++ fun ++ ")\n" , " {\n" , " " ++ packageAbsyn ++ "." 
++ fun +++ fnm +++ "= (" ++ packageAbsyn ++ "." ++ fun ++ ") foo;\n" , lparen , cats' , rparen , " }\n" ] where fun = funName f p = precRule r (lparen, rparen) = (" if (_i_ > " ++ show p ++ ") render(_L_PAREN);\n", " if (_i_ > " ++ show p ++ ") render(_R_PAREN);\n") cats' = case cats of [] -> "" _ -> concatMap (render . prItem (text fnm)) (numVars cats) fnm = '_' : map toLower fun prRule _nm _ = "" prList :: String -> String -> [Rule] -> [String] prList dat et rules = concat [ if null docs0 then [ "if (it.hasNext())" ] else [ "if (!it.hasNext())" , "{ /* nil */" , render $ nest 4 $ vcat docs0 , "}" , "else" ] , if null docs1 then [ "{ /* cons */" , " " ++ et ++ " el = it.next();" ] else [ "{" , " " ++ et ++ " el = it.next();" , " if (!it.hasNext())" , " { /* last */" , render $ nest 4 $ vcat docs1 , " }" , " else" , " { /* cons */" ] , unlessNull (swRules isConsFun) $ \ docs -> [ render $ nest (if null docs1 then 2 else 4) $ vcat docs ] , unless (null docs1) [ " }" ] , [ "}" ] ] where prules = sortRulesByPrecedence rules swRules f = switchByPrecedence "_i_" $ map (second $ sep . map text . prListRule_ dat) $ uniqOn fst $ filter f prules -- Discard duplicates, can only handle one rule per precedence. docs0 = swRules isNilFun docs1 = swRules isOneFun -- | Only render the rhs (items) of a list rule. prListRule_ :: IsFun a => String -> Rul a -> [String] prListRule_ dat (Rule _ _ items _) = for items $ \case Right t -> "render(\"" ++ escapeChars t ++ "\");" Left (TokenCat "String") -> "printQuoted(el);" Left (ListCat _) -> "pp" ++ dat ++ "(it, _i_);" Left _ -> "pp(el, _i_);" -- | -- >>> prItem "F" (Right "++") -- render("++"); -- -- >>> prItem "F" (Left (TokenCat "String", "string_")) -- printQuoted(F.string_); -- -- >>> prItem "F" (Left (Cat "Abc", "abc_")) -- pp(F.abc_, 0); -- prItem :: Doc -> Either (Cat, Doc) String -> Doc prItem _ (Right t) = nest 7 ("render(\"" <> text(escapeChars t) <> "\");\n") prItem fnm (Left (TokenCat "String", nt)) = nest 7 ("printQuoted(" <> fnm <> "." <> nt <> ");\n") prItem fnm (Left (cat, nt)) = nest 7 ("pp(" <> fnm <> "." <> nt <> ", " <> integer (precCat cat) <> ");\n") --The following methods generate the Show function. shData :: String -> [UserDef] -> (Cat, [Rule]) -> String shData packageAbsyn user (cat, rules) = unlines k where k = if isList cat then [ " private static void sh(" ++ packageAbsyn ++ "." ++ identCat (normCat cat) +++ "foo)" , " {" , shList packageAbsyn user cat rules ++ " }" ] else [ " private static void sh(" ++ packageAbsyn ++ "." ++ identCat (normCat cat) +++ "foo)" , " {" , concatMap (shRule packageAbsyn) rules ++ " }" ] shRule :: String -> Rule -> String shRule packageAbsyn (Rule f _c cats _) | not (isCoercion f || isDefinedRule f) = unlines [ " if (foo instanceof" +++ packageAbsyn ++ "." ++ fun ++ ")" , " {" , " " ++ packageAbsyn ++ "." ++ fun +++ fnm +++ "= (" ++ packageAbsyn ++ "." ++ fun ++ ") foo;" , members ++ " }" ] where fun = funName f members = concat [ lparen , " render(\"" ++ escapeChars fun ++ "\");\n" , cats' , rparen ] cats' = if allTerms cats then "" else concatMap (render . 
shCat (text fnm)) (lefts (numVars cats)) (lparen, rparen) = if allTerms cats then ("","") else (" render(\"(\");\n"," render(\")\");\n") allTerms [] = True allTerms ((Left {}):_) = False allTerms (_:zs) = allTerms zs fnm = '_' : map toLower fun shRule _nm _ = "" shList :: String -> [UserDef] -> Cat -> [Rule] -> String shList packageAbsyn user c _rules = unlines [ " for (java.util.Iterator<" ++ et ++ "> it = foo.iterator(); it.hasNext();)", " {", " sh(it.next());", " if (it.hasNext())", " render(\",\");", " }" ] where et = typename packageAbsyn user $ identCat $ normCatOfList c -- | -- >>> shCat "F" (ListCat (Cat "A"), "lista_") -- render("["); -- sh(F.lista_); -- render("]"); -- -- >>> shCat "F" (Cat "A", "a_") -- sh(F.a_); -- shCat :: Doc -> (Cat, Doc) -> Doc shCat fnm (ListCat _, vname) = vcat [ " render(\"[\");" , " sh(" <> fnm <> "." <> vname <> ");" , " render(\"]\");\n" ] shCat fname (_, vname) = " sh(" <> fname <> "." <> vname <> ");\n" --Helper function that escapes characters in strings escapeChars :: String -> String escapeChars [] = [] escapeChars ('\\':xs) = '\\' : '\\' : escapeChars xs escapeChars ('\"':xs) = '\\' : '\"' : escapeChars xs escapeChars (x:xs) = x : escapeChars xs BNFC-2.9.5/src/BNFC/Backend/Java/CFtoVisitSkel15.hs0000644000000000000000000001074207346545000017336 0ustar0000000000000000{- BNF Converter: Java Vistor skeleton generator Copyright (C) 2004 Author: Michael Pellauer, Bjorn Bringert Description : This module generates a Skeleton function which uses the Visitor Design Pattern, which users may find more familiar than Appel's method. Author : Michael Pellauer Bjorn Bringert Created : 4 August, 2003 Modified : 16 June, 2004 -} {-# LANGUAGE LambdaCase #-} {-# LANGUAGE OverloadedStrings #-} module BNFC.Backend.Java.CFtoVisitSkel15 (cf2VisitSkel) where import Data.Bifunctor ( second ) import Data.Either ( lefts ) import Text.PrettyPrint import qualified Text.PrettyPrint as P import BNFC.CF import BNFC.Utils ( (+++) ) import BNFC.Backend.Common.NamedVariables import BNFC.Backend.Java.CFtoJavaAbs15 ( typename ) --Produces a Skeleton using the Visitor Design Pattern. --Thus the user can choose which Skeleton to use. cf2VisitSkel :: String -> String -> CF -> String cf2VisitSkel packageBase packageAbsyn cf = concat [ header, concatMap (prData packageAbsyn user) groups, "}"] where user = fst $ unzip $ tokenPragmas cf groups = fixCoercions $ ruleGroupsInternals cf header = unlines [ "package" +++ packageBase ++ ";", "", "/*** Visitor Design Pattern Skeleton. ***/", "", "/* This implements the common visitor design pattern.", " Tests show it to be slightly less efficient than the", " instanceof method, but easier to use. ", " Replace the R and A parameters with the desired return", " and context types.*/", "", "public class VisitSkel", "{" ] --Traverses a category based on its type. prData :: String -> [UserDef] -> (Cat, [Rule]) -> String prData packageAbsyn user (cat, rules) | isList cat = "" | otherwise = unlines [" public class " ++ identCat cat ++ "Visitor implements " ++ qual (identCat cat) ++ ".Visitor" , " {" , render $ vcat $ map (nest 4 . prRule packageAbsyn user) rules , " }" ] where qual x = packageAbsyn ++ "." ++ x -- | traverses a standard rule. 
-- >>> prRule "ABSYN" [] $ Rule "EInt" undefined [Left (TokenCat "Integer"), Left (Cat "NT")] Parsable -- public R visit(ABSYN.EInt p, A arg) -- { /* Code for EInt goes here */ -- //p.integer_; -- p.nt_.accept(new NTVisitor(), arg); -- return null; -- } -- -- It skips the internal category (indicating that a rule is not parsable) -- >>> prRule "ABSYN" [] $ Rule "EInt" undefined [Left (TokenCat "Integer")] Internal -- public R visit(ABSYN.EInt p, A arg) -- { /* Code for EInt goes here */ -- //p.integer_; -- return null; -- } prRule :: IsFun f => String -> [UserDef] -> Rul f -> Doc prRule packageAbsyn user (Rule fun _ cats _) | not (isCoercion fun || isDefinedRule fun) = vcat [ "public R visit(" P.<> text packageAbsyn P.<> "." P.<> fname P.<> " p, A arg)" , "{" , nest 2 $ vcat [ "/* Code for " P.<> fname P.<> " goes here */" , vcat $ map (prCat packageAbsyn user) cats' , "return null;" ] , "}" ] where fname = text $ funName fun -- function name cats' = map (second ("p." P.<>)) $ lefts $ numVars cats -- non-terminals in the rhs prRule _ _ _ = empty -- | Traverses a class's instance variables. -- -- >>> prCat "ABSYN" [] (Cat "A", "p.a_") -- p.a_.accept(new AVisitor(), arg); -- -- >>> prCat "" [] (TokenCat "Integer", "p.integer_") -- //p.integer_; -- -- >>> prCat "" ["A"] (TokenCat "A", "p.a_") -- //p.a_; -- -- >>> prCat "" ["A"] (TokenCat "A", "p.a_2") -- //p.a_2; -- -- >>> prCat "ABSYN" [] (ListCat (Cat "A"), "p.lista_") -- for (ABSYN.A x: p.lista_) { -- x.accept(new AVisitor(), arg); -- } prCat :: String -- ^ absyn package name. -> [UserDef] -- ^ User defined tokens. -> (Cat, Doc) -- ^ Variable category and name. -> Doc -- ^ Code for visiting the variable. prCat packageAbsyn user (cat, var) = case cat of TokenCat{} -> "//" P.<> var P.<> ";" ListCat cat' -> vcat [ "for" <+> parens (text et <+> "x:" <+> var) <+> "{" , nest 2 $ prCat packageAbsyn user (cat', "x") , "}" ] _ -> var P.<> ".accept(new " P.<> text varType P.<> "Visitor(), arg);" where varType = typename "" user $ identCat $ normCat cat -- no qualification here! et = typename packageAbsyn user $ identCat $ normCatOfList cat BNFC-2.9.5/src/BNFC/Backend/Java/RegToAntlrLexer.hs0000644000000000000000000000602207346545000017514 0ustar0000000000000000module BNFC.Backend.Java.RegToAntlrLexer (printRegJLex, escapeCharInSingleQuotes) where -- modified from RegToJLex.hs import Data.Char (ord) import Numeric (showHex) import BNFC.Abs -- the top-level printing method printRegJLex :: Reg -> String printRegJLex = render . prt 0 -- you may want to change render and parenth render :: [String] -> String render = rend (0 :: Int) where rend i ss = case ss of "[" :ts -> cons "[" $ rend i ts "(" :ts -> cons "(" $ rend i ts t : "," :ts -> cons t $ space "," $ rend i ts t : ")" :ts -> cons t $ cons ")" $ rend i ts t : "]" :ts -> cons t $ cons "]" $ rend i ts t :ts -> space t $ rend i ts _ -> "" cons s t = s ++ t space t s = if null s then t else t ++ s parenth :: [String] -> [String] parenth ss = ["("] ++ ss ++ [")"] -- the printer class does the job class Print a where prt :: Int -> a -> [String] -- | Print char according to ANTLR regex format. escapeChar :: [Char] -> Char -> String escapeChar reserved x | x `elem` reserved = '\\' : [x] | i >= 65536 = "\\u{" ++ h ++ "}" | i >= 256 || i < 32 = "\\u" ++ replicate (4 - length h) '0' ++ h | otherwise = [x] -- issue #329, don't escape in the usual way! where i = ord x h = showHex i "" -- | Escape character for use inside single quotes. 
escapeCharInSingleQuotes :: Char -> String escapeCharInSingleQuotes = escapeChar ['\'','\\'] -- The ANTLR definition of what can be in a [char set] is here: -- https://github.com/antlr/antlr4/blob/master/doc/lexer-rules.md#lexer-rule-elements -- > The following escaped characters are interpreted as single special characters: -- > \n, \r, \b, \t, \f, \uXXXX, and \u{XXXXXX}. -- > To get ], \, or - you must escape them with \. -- | Escape character for use inside @[char set]@. escapeInCharSet :: Char -> String escapeInCharSet = escapeChar [ ']', '\\', '-' ] prPrec :: Int -> Int -> [String] -> [String] prPrec i j = if j prPrec i 2 (concat [prt 2 reg0 , [" "], prt 3 reg]) RAlt reg0 reg -> prPrec i 1 (concat [prt 1 reg0 , ["|"] , prt 2 reg]) RMinus reg0 REps -> prt i reg0 -- REps is identity for set difference RMinus RAny (RChar c) -> ["~'", escapeCharInSingleQuotes c, "'"] RMinus RAny (RAlts str) -> concat [["~["], map escapeInCharSet str ,["]"]] RMinus _ _ -> error "Antlr does not support general set difference" RStar reg -> concat [prt 3 reg , ["*"]] RPlus reg -> concat [prt 3 reg , ["+"]] ROpt reg -> concat [prt 3 reg , ["?"]] REps -> [""] RChar c -> ["'", escapeCharInSingleQuotes c, "'"] RAlts str -> concat [ ["["], map escapeInCharSet str, ["]"] ] RSeqs str -> prPrec i 2 $ map show str RDigit -> ["DIGIT"] RLetter -> ["LETTER"] RUpper -> ["CAPITAL"] RLower -> ["SMALL"] RAny -> ["[\\u0000-\\u00FF]"] BNFC-2.9.5/src/BNFC/Backend/Java/RegToJLex.hs0000644000000000000000000000606507346545000016305 0ustar0000000000000000{-# LANGUAGE LambdaCase #-} module BNFC.Backend.Java.RegToJLex (printRegJLex, escapeChar) where import Data.Char (ord, showLitChar) import BNFC.Abs (Identifier(..), Reg(..)) import BNFC.Options (JavaLexerParser(..)) import BNFC.Backend.Common (flexEps) -- | Print a regular expression for the Java lexers. printRegJLex :: JavaLexerParser -> Reg -> String printRegJLex lexer reg = prt lexer 0 reg "" class Print a where prt :: JavaLexerParser -> Int -> a -> ShowS prtList :: JavaLexerParser -> [a] -> ShowS prtList lexer xs s = foldr (prt lexer 0) s xs -- OR: prtList lexer = foldr (.) id . map (prt lexer 0) instance Print a => Print [a] where prt lexer _ = prtList lexer instance Print Char where prt lexer _ c = showString $ escapeChar lexer c escapeChar :: JavaLexerParser -> Char -> String escapeChar _ '^' = "\\x5E" -- special case, since \^ is a control character escape escapeChar JFlexCup x | x `elem` jflexReserved = '\\' : [x] escapeChar _ x | x `elem` jlexReserved = '\\' : [x] | ord x >= 255 = [x] | otherwise = showLitChar x "" -- Characters that must be escaped in JLex regular expressions jlexReserved :: [Char] jlexReserved = ['?','*','+','|','(',')','^','$','.','[',']','{','}','"','\\'] jflexReserved :: [Char] jflexReserved = '~':'!':'/':[] -- plus the @jlexReserved@, but they are tested separately instance Print Identifier where prt _ _ (Identifier (_, x)) = showString x instance Print Reg where prt lexer i = \case RSeq reg1 reg2 -> showParen (i > 2) $ prt lexer 2 reg1 . prt lexer 3 reg2 RAlt reg1 reg2 -> showParen (i > 1) $ prt lexer 1 reg1 . showChar '|' . prt lexer 2 reg2 -- JLex does not support set difference in general RMinus reg0 REps -> prt lexer i reg0 -- REps is identity for set difference RMinus RAny reg@RChar{} -> showParen (i > 3) $ showString "[^" . prt lexer 0 reg . showString "]" RMinus RAny (RAlts str) -> showParen (i > 3) $ showString "[^" . prt lexer 0 str . showString "]" -- FIXME: maybe we could add cases for char - RDigit, RLetter etc. 
RMinus _ _ -> error $ "J[F]Lex does not support general set difference" RStar reg -> showParen (i > 3) $ prt lexer 3 reg . showChar '*' RPlus reg -> showParen (i > 3) $ prt lexer 3 reg . showChar '+' ROpt reg -> showParen (i > 3) $ prt lexer 3 reg . showChar '?' REps -> showParen (i > 3) $ showString flexEps RChar c -> showParen (i > 3) $ prt lexer 0 c RAlts str -> showParen (i > 3) $ showChar '[' . prt lexer 0 str . showChar ']' RSeqs str -> showParen (i > 2) $ prt lexer 0 str RDigit -> showParen (i > 3) $ showString "{DIGIT}" RLetter -> showParen (i > 3) $ showString "{LETTER}" RUpper -> showParen (i > 3) $ showString "{CAPITAL}" RLower -> showParen (i > 3) $ showString "{SMALL}" RAny -> showParen (i > 3) $ showChar '.' BNFC-2.9.5/src/BNFC/Backend/Java/Utils.hs0000644000000000000000000000315207346545000015574 0ustar0000000000000000module BNFC.Backend.Java.Utils where import BNFC.CF import BNFC.Utils ( mkName, NameStyle(..)) import BNFC.Backend.Common.NamedVariables -- | Make a Java line comment comment :: String -> String comment = ("// " ++) javaReserved :: [String] javaReserved = [ "abstract" , "assert" , "boolean" , "break" , "byte" , "case" , "catch" , "char" , "class" , "const" , "continue" , "default" , "do" , "double" , "else" , "enum" , "extends" , "false" -- there for Java/ANTLR backend , "final" , "finally" , "float" , "for" , "goto" , "if" , "implements" , "import" , "instanceof" , "int" , "interface" , "long" , "native" , "new" , "null" -- there for Java/ANTLR backend -- , "Object" , "package" , "private" , "protected" , "public" , "return" , "short" , "static" , "strictfp" , "super" , "switch" , "synchronized" , "true" -- there for Java/ANTLR backend , "this" , "throw" , "throws" , "transient" , "try" , "void" , "volatile" , "while" ] -- | Append an underscore if there is a clash with a java or ANTLR keyword. -- E.g. "Grammar" clashes with ANTLR keyword "grammar" since -- we sometimes need the upper and sometimes the lower case version -- of "Grammar" in the generated parser. getRuleName :: String -> String getRuleName z | firstLowerCase z `elem` ("grammar" : javaReserved) = z ++ "_" | otherwise = z getLabelName :: Fun -> String getLabelName = mkName ["Rule"] CamelCase -- | Make a new entrypoint NT for an existing NT. startSymbol :: String -> String startSymbol = ("Start_" ++) BNFC-2.9.5/src/BNFC/Backend/Latex.hs0000644000000000000000000002460707346545000014700 0ustar0000000000000000{-# LANGUAGE LambdaCase #-} {- BNF Converter: Latex Generator Copyright (C) 2004 Author: Markus Forsberg, Aarne Ranta -} module BNFC.Backend.Latex where import qualified Data.List as List import System.FilePath ((<.>),replaceExtension) import Text.Printf import BNFC.Abs (Reg (..)) import BNFC.Options hiding (Backend) import BNFC.Backend.Base import BNFC.Backend.Common.Makefile as Makefile import BNFC.CF import BNFC.Utils import BNFC.PrettyPrint hiding (empty) -- | Entry point: create .tex file and a Makefile to compile it. 
makeLatex :: SharedOptions -> CF -> Backend makeLatex opts cf = do let texfile = name <.> "tex" mkfile texfile comment (cfToLatex name cf) Makefile.mkMakefile (optMake opts) (makefile texfile) where name = lang opts -- | Create a makefile for the given tex file -- -- >>> makefile "myFile.tex" "Makefile" -- all : myFile.pdf -- -- myFile.pdf : myFile.tex -- pdflatex myFile.tex -- -- clean : -- -rm myFile.pdf myFile.aux myFile.log -- -- cleanall : clean -- -rm Makefile myFile.tex -- -- makefile :: String -> String -> Doc makefile texfile basename = vcat [ Makefile.mkRule "all" [pdffile] [] , Makefile.mkRule pdffile [texfile] [ printf "pdflatex %s" texfile ] , Makefile.mkRule "clean" [] [ unwords [ "-rm", pdffile, auxfile, logfile ]] , Makefile.mkRule "cleanall" ["clean"] [ unwords [ "-rm", basename, texfile ]] ] where pdffile = replaceExtension texfile "pdf" auxfile = replaceExtension texfile "aux" logfile = replaceExtension texfile "log" comment :: String -> String comment = ("%% " ++) -- | Create content of .tex file. cfToLatex :: String -> CF -> String cfToLatex name cf = unlines -- Overall structure of created LaTeX document: [ "\\batchmode" , beginDocument name , macros , introduction , prtTerminals name cf , prtBNF name cf , endDocument ] introduction :: String introduction = unlines [ "This document was automatically generated by the {\\em BNF-Converter}." , "It was generated together with the lexer, the parser, and the" , "abstract syntax module, which guarantees that the document" , "matches with the implementation of the language" , "(provided no hand-hacking has taken place)." ] prtTerminals :: String -> CF -> String prtTerminals name cf = unlines $ [ "\\section*{The lexical structure of " ++ name ++ "}" , "" ] ++ identSection cf ++ [ "\\subsection*{Literals}" , prtLiterals name cf ] ++ map prtOwnToken (tokenPragmas cf) ++ [ "\\subsection*{Reserved words and symbols}" , prtReserved name cf , prtSymb name cf , "\\subsection*{Comments}" , prtComments $ comments cf ] identSection :: CF -> [String] identSection cf | hasIdent cf = [ "\\subsection*{Identifiers}" ] ++ prtIdentifiers | otherwise = [] prtIdentifiers :: [String] prtIdentifiers = [ "Identifiers \\nonterminal{Ident} are unquoted strings beginning with a letter," , "followed by any combination of letters, digits, and the characters {\\tt \\_ '}," , "reserved words excluded." ] prtLiterals :: String -> CF -> String prtLiterals _ cf = unlines . concat . List.intersperse [""] . map stringLit . filter (/= catIdent) $ literals cf stringLit :: TokenCat -> [String] stringLit = \case "Char" -> [ "Character literals \\nonterminal{Char}\\ have the form" , "\\terminal{'}$c$\\terminal{'}, where $c$ is any single character." ] "String" -> [ "String literals \\nonterminal{String}\\ have the form" , "\\terminal{\"}$x$\\terminal{\"}, where $x$ is any sequence of any characters" , "except \\terminal{\"}\\ unless preceded by \\verb6\\6." ] "Integer" -> [ "Integer literals \\nonterminal{Int}\\ are nonempty sequences of digits." ] "Double" -> [ "Double-precision float literals \\nonterminal{Double}\\ have the structure" , "indicated by the regular expression" +++ "$\\nonterminal{digit}+ \\mbox{{\\it `.'}} \\nonterminal{digit}+ (\\mbox{{\\it `e'}} \\mbox{{\\it `-'}}? \\nonterminal{digit}+)?$ i.e.\\" , "two sequences of digits separated by a decimal point, optionally" , "followed by an unsigned or negative exponent." 
] _ -> [] prtOwnToken :: (String, Reg) -> String prtOwnToken (name,reg) = unlines [ name +++ "literals are recognized by the regular expression", "\\(" ++ latexRegExp reg ++ "\\)" ] prtComments :: ([(String,String)],[String]) -> String prtComments (xs,ys) = (if null ys then "There are no single-line comments in the grammar. \\\\" else "Single-line comments begin with " ++ sing ++". \\\\") ++ (if null xs then "There are no multiple-line comments in the grammar." else "Multiple-line comments are enclosed with " ++ mult ++".") where sing = List.intercalate ", " $ map (symbol.prt) ys mult = List.intercalate ", " $ map (\(x,y) -> symbol (prt x) ++ " and " ++ symbol (prt y)) xs prtSymb :: String -> CF -> String prtSymb name cf = case cfgSymbols cf of [] -> "\nThere are no symbols in " ++ name ++ ".\\\\\n" xs -> "The symbols used in " ++ name ++ " are the following: \\\\\n" ++ tabular 3 (three $ map (symbol.prt) xs) prtReserved :: String -> CF -> String prtReserved name cf = case reservedWords cf of [] -> stringRes name ++ "\nThere are no reserved words in " ++ name ++ ".\\\\\n" xs -> stringRes name ++ tabular 3 (three $ map (reserved.prt) xs) stringRes :: String -> String stringRes name = concat ["The set of reserved words is the set of terminals ", "appearing in the grammar. Those reserved words ", "that consist of non-letter characters are called symbols, and ", "they are treated in a different way from those that ", "are similar to identifiers. The lexer ", "follows rules familiar from languages ", "like Haskell, C, and Java, including longest match ", "and spacing conventions.", "\n\n", "The reserved words used in " ++ name ++ " are the following: \\\\\n"] -- | Group a list into blocks of 3 elements. three :: Monoid a => [a] -> [[a]] three [] = [] three [x] = [[x,mempty,mempty]] three [x,y] = [[x,y,mempty]] three (x:y:z:xs) = [x,y,z] : three xs prtBNF :: String -> CF -> String prtBNF name cf = unlines [ "\\section*{The syntactic structure of " ++ name ++ "}" , "" , "Non-terminals are enclosed between $\\langle$ and $\\rangle$." , "The symbols " ++ arrow ++ " (production), " ++ delimiter ++ " (union)" , "and " ++ empty ++ " (empty rule) belong to the BNF notation." , "All other symbols are terminals.\\\\" , prtRules (ruleGroups cf) ] prtRules :: [(Cat,[Rule])] -> String prtRules [] = [] prtRules ((c,[]):xs) = tabular 3 [[nonterminal c,arrow,[]]] ++ prtRules xs prtRules ((c, r : rs) : xs) = tabular 3 ([nonterminal c,arrow,prtSymbols $ rhsRule r] : [[[],delimiter,prtSymbols (rhsRule y)] | y <- rs]) ++ prtRules xs prtSymbols :: [Either Cat String] -> String prtSymbols [] = empty prtSymbols xs = foldr ((+++) . p) [] xs where p (Left r) = nonterminal r --- (prt r) p (Right r) = terminal (prt r) prt :: String -> String prt = concatMap escape where escape '\\' = "$\\backslash$" escape '~' = "\\~{}" escape '^' = "{\\textasciicircum}" escape c | c `elem` ("$&%#_{}" :: String) = ['\\', c] escape c | c `elem` ("+=|<>-" :: String) = "{$" ++ [c] ++ "$}" escape c = [c] macros :: String macros = unlines [ "\\newcommand{\\emptyP}{\\mbox{$\\epsilon$}}" , "\\newcommand{\\terminal}[1]{\\mbox{{\\texttt {#1}}}}" , "\\newcommand{\\nonterminal}[1]{\\mbox{$\\langle \\mbox{{\\sl #1 }} \\! 
\\rangle$}}" , "\\newcommand{\\arrow}{\\mbox{::=}}" , "\\newcommand{\\delimit}{\\mbox{$|$}}" , "\\newcommand{\\reserved}[1]{\\mbox{{\\texttt {#1}}}}" , "\\newcommand{\\literal}[1]{\\mbox{{\\texttt {#1}}}}" , "\\newcommand{\\symb}[1]{\\mbox{{\\texttt {#1}}}}" ] reserved :: String -> String reserved s = "{\\reserved{" ++ s ++ "}}" literal :: String -> String literal s = "{\\literal{" ++ s ++ "}}" empty :: String empty = "{\\emptyP}" symbol :: String -> String symbol s = "{\\symb{" ++ s ++ "}}" tabular :: Int -> [[String]] -> String tabular n xs = "\n\\begin{tabular}{" ++ concat (replicate n "l") ++ "}\n" ++ concatMap (\(a:as) -> foldr (+++) "\\\\\n" (a: map ('&':) as)) xs ++ "\\end{tabular}\\\\\n" terminal :: String -> String terminal s = "{\\terminal{" ++ s ++ "}}" nonterminal :: Cat -> String nonterminal s = "{\\nonterminal{" ++ mkId (identCat s) ++ "}}" where mkId = map mk mk c = case c of '_' -> '-' --- _ -> c arrow :: String arrow = " {\\arrow} " delimiter :: String delimiter = " {\\delimit} " beginDocument :: String -> String beginDocument name = unlines [ "" , "\\documentclass[a4paper,11pt]{article}" , "\\usepackage[T1]{fontenc}" , "\\usepackage[utf8x]{inputenc}" , "\\setlength{\\parindent}{0mm}" , "\\setlength{\\parskip}{1mm}" , "" , "\\title{The Language " ++ name ++ "}" , "\\author{BNF-converter}" , "" , "\\begin{document}" , "\\maketitle" , "" ] endDocument :: String endDocument = unlines [ "" , "\\end{document}" ] latexRegExp :: Reg -> String latexRegExp = rex 0 where rex :: Int -> Reg -> String rex i = \case RSeq r0 r -> ifPar i 2 $ rex 2 r0 +++ rex 2 r RAlt r0 r -> ifPar i 1 $ rex 1 r0 +++ "\\mid" +++ rex 1 r RMinus r0 r -> ifPar i 1 $ rex 2 r0 +++ "-" +++ rex 2 r RStar r -> rex 3 r ++ "*" RPlus r -> rex 3 r ++ "+" ROpt r -> rex 3 r ++ "?" REps -> "\\epsilon" RChar c -> "\\mbox{`" ++ prt [c] ++ "'}" RAlts s -> "[" ++ "\\mbox{``" ++ prt s ++ "''}" ++ "]" RSeqs s -> "\\{" ++ "\\mbox{``" ++ prt s ++ "''}" ++ "\\}" RDigit -> "{\\nonterminal{digit}}" RLetter -> "{\\nonterminal{letter}}" RUpper -> "{\\nonterminal{upper}}" RLower -> "{\\nonterminal{lower}}" RAny -> "{\\nonterminal{anychar}}" ifPar i j s = if i > j then "(" ++ s ++ ")" else s BNFC-2.9.5/src/BNFC/Backend/OCaml.hs0000644000000000000000000001353507346545000014614 0ustar0000000000000000{- BNF Converter: OCaml main file Copyright (C) 2005 Author: Kristofer Johannisson -} -- based on BNFC Haskell backend module BNFC.Backend.OCaml (makeOCaml) where import System.FilePath (pathSeparator, ()) import BNFC.Backend.Base (MkFiles, mkfile) import BNFC.Backend.Common.Makefile import BNFC.Backend.OCaml.CFtoOCamlAbs import BNFC.Backend.OCaml.CFtoOCamlLex import BNFC.Backend.OCaml.CFtoOCamlPrinter import BNFC.Backend.OCaml.CFtoOCamlShow import BNFC.Backend.OCaml.CFtoOCamlTemplate import BNFC.Backend.OCaml.CFtoOCamlTest (ocamlTestfile) import BNFC.Backend.OCaml.CFtoOCamlYacc import BNFC.Backend.OCaml.OCamlUtil import BNFC.Backend.XML (makeXML) import BNFC.CF import BNFC.Options import BNFC.PrettyPrint import BNFC.Utils import qualified BNFC.Backend.C as C -- naming conventions noLang :: SharedOptions -> String -> String noLang _ name = name withLang :: SharedOptions -> String -> String withLang opts name = name ++ sanitizedLang opts mkMod :: (SharedOptions -> String -> String) -> String -> SharedOptions -> String mkMod addLang name opts = pref ++ if inDir opts then sanitizedLang opts ++ "." 
++ name else addLang opts name where pref = maybe "" (++".") (inPackage opts) mkFile :: (SharedOptions -> String -> String) -> String -> String -> SharedOptions -> FilePath mkFile addLang name ext opts = pref ++ if inDir opts then sanitizedLang opts name ++ ext' else addLang opts name ++ if null ext then "" else ext' where pref = maybe "" (\ p -> pkgToDir p "") (inPackage opts) ext' = if null ext then "" else "." ++ ext -- | Turn language name into a valid ocaml module identifier. sanitizedLang :: SharedOptions -> String sanitizedLang = camelCase_ . lang absFile, absFileM, ocamllexFile, ocamllexFileM, ocamlyaccFile, ocamlyaccFileM, utilFile, templateFile, templateFileM, printerFile, printerFileM, showFile, showFileM, tFile :: SharedOptions -> String absFile = mkFile withLang "Abs" "ml" absFileM = mkMod withLang "Abs" ocamllexFile = mkFile withLang "Lex" "mll" ocamllexFileM = mkMod withLang "Lex" ocamlyaccFile = mkFile withLang "Par" "mly" ocamlyaccFileM = mkMod withLang "Par" templateFile = mkFile withLang "Skel" "ml" templateFileM = mkMod withLang "Skel" printerFile = mkFile withLang "Print" "ml" printerFileM = mkMod withLang "Print" showFile = mkFile withLang "Show" "ml" showFileM = mkMod withLang "Show" tFile = mkFile withLang "Test" "ml" utilFile = mkFile noLang "BNFC_Util" "ml" makeOCaml :: SharedOptions -> CF -> MkFiles () makeOCaml opts cf = do let absMod = absFileM opts lexMod = ocamllexFileM opts parMod = ocamlyaccFileM opts prMod = printerFileM opts showMod = showFileM opts do mkfile (absFile opts) comment $ cf2Abstract absMod cf mkfile (ocamllexFile opts) comment $ cf2ocamllex lexMod parMod cf mkfile (ocamlyaccFile opts) C.comment $ cf2ocamlyacc (ocamlParser opts) absMod cf mkfile (templateFile opts) comment $ cf2Template (templateFileM opts) absMod cf mkfile (printerFile opts) comment $ cf2Printer prMod absMod cf mkfile (showFile opts) comment $ cf2show showMod absMod cf mkfile (tFile opts) comment $ ocamlTestfile (ocamlParser opts) absMod lexMod parMod prMod showMod cf mkfile (utilFile opts) comment $ utilM mkMakefile (optMake opts) $ makefile opts case xml opts of 2 -> makeXML opts True cf 1 -> makeXML opts False cf _ -> return () comment :: String -> String comment x = unwords [ "(*", x, "*)" ] pkgToDir :: String -> FilePath pkgToDir = replace '.' 
pathSeparator codeDir :: SharedOptions -> FilePath codeDir opts = let pref = maybe "" pkgToDir (inPackage opts) dir = if inDir opts then sanitizedLang opts else "" sep = if null pref || null dir then "" else [pathSeparator] in pref ++ sep ++ dir makefile :: SharedOptions -> String -> Doc makefile opts basename = vcat [ mkVar "OCAMLC" "ocamlc" , mkVar "OCAMLYACC" $ ocamlParserName opts , mkVar "OCAMLLEX" "ocamllex" , mkVar "OCAMLCFLAGS" "" , mkRule "all" [] [ "$(OCAMLYACC) " ++ ocamlyaccFile opts , "$(OCAMLLEX) " ++ ocamllexFile opts , "$(OCAMLC) $(OCAMLCFLAGS) -o " ++ mkFile withLang "Test" "" opts +++ utilFile opts +++ absFile opts +++ templateFile opts +++ showFile opts +++ printerFile opts +++ mkFile withLang "Par" "mli" opts +++ mkFile withLang "Par" "ml" opts +++ mkFile withLang "Lex" "ml" opts +++ tFile opts ] , mkRule "clean" [] [ "-rm -f " ++ unwords (map (dir++) [ "*.cmi", "*.cmo", "*.o" ]) ] , mkRule "distclean" ["clean"] [ "-rm -f " ++ unwords [ mkFile withLang "Lex" "*" opts, mkFile withLang "Par" "*" opts, mkFile withLang "Layout" "*" opts, mkFile withLang "Skel" "*" opts, mkFile withLang "Print" "*" opts, mkFile withLang "Show" "*" opts, mkFile withLang "Test" "*" opts, mkFile withLang "Abs" "*" opts, mkFile withLang "Test" "" opts, utilFile opts, basename ]] ] where dir = let d = codeDir opts in if null d then "" else d ++ [pathSeparator] utilM :: String utilM = unlines ["open Lexing", "", "(* this should really be in the parser, but ocamlyacc won't put it in the .mli *)", "exception Parse_error of Lexing.position * Lexing.position" ] BNFC-2.9.5/src/BNFC/Backend/OCaml/0000755000000000000000000000000007346545000014251 5ustar0000000000000000BNFC-2.9.5/src/BNFC/Backend/OCaml/CFtoOCamlAbs.hs0000644000000000000000000000525507346545000017011 0ustar0000000000000000{-# LANGUAGE LambdaCase #-} {- BNF Converter: OCaml Abstract Syntax Generator Copyright (C) 2005 Author: Kristofer Johannisson -} -- based on BNFC Haskell backend module BNFC.Backend.OCaml.CFtoOCamlAbs (cf2Abstract) where import Text.PrettyPrint import BNFC.CF import BNFC.Utils ( (+++), unless, parensIf ) import Data.List ( intersperse ) import BNFC.Backend.OCaml.OCamlUtil -- to produce an OCaml module cf2Abstract :: String -> CF -> String cf2Abstract _ cf = unlines $ concat [ mutualRecDefs $ concat [ map (prSpecialData cf) (specialCats cf) , map prData (cf2data cf) ] , unless (null defs) $ concat [ [ "(* defined constructors *)" , "" ] , defs ] ] where defs = definedRules cf definedRules :: CF -> [String] definedRules cf = map mkDef $ definitions cf where mkDef (Define f args e _) = "let " ++ sanitizeOcaml (funName f) ++ " " ++ mkTuple (map (sanitizeOcaml . 
fst) args) ++ " = " ++ ocamlExp False e ocamlExp :: Bool -> Exp -> String ocamlExp p = \case Var s -> sanitizeOcaml s App "(:)" _ [e1, e2] -> parensIf p $ unwords [ ocamlExp True e1, "::", ocamlExp False e2 ] App s _ [] -> sanitizeOcaml s App s _ [e] -> parensIf p $ sanitizeOcaml s ++ ' ' : ocamlExp True e App s _ es -> parensIf p $ sanitizeOcaml s ++ ' ' : mkTuple (map (ocamlExp False) es) LitInt i -> show i LitDouble d -> show d LitChar c -> "\'" ++ c : "\'" LitString s -> "\"" ++ s ++ "\"" -- allow mutual recursion so that we do not have to sort the type definitions in -- dependency order mutualRecDefs :: [String] -> [String] mutualRecDefs ss = case ss of [] -> [] [x] -> ["type" +++ x] x:xs -> ("type" +++ x) : map ("and" +++) xs prData :: Data -> String prData (cat,rules) = fixType cat +++ "=\n " ++ concat (intersperse "\n | " (map prRule rules)) ++ "\n" prRule :: (String, [Cat]) -> String prRule (fun, []) = fun prRule (fun, cats) = fun +++ "of" +++ render (mkTupleType cats) -- | Creates an OCaml type tuple by intercalating * between type names -- >>> mkTupleType [Cat "A"] -- a -- -- >>> mkTupleType [Cat "A", Cat "Abc", Cat "S"] -- a * abc * s mkTupleType :: [Cat] -> Doc mkTupleType = hsep . intersperse (char '*') . map (text . fixType) prSpecialData :: CF -> TokenCat -> String prSpecialData cf cat = fixType (TokenCat cat) +++ "=" +++ cat +++ "of" +++ contentSpec cf cat -- unwords ["newtype",cat,"=",cat,contentSpec cf cat,"deriving (Eq,Ord,Show)"] contentSpec :: CF -> TokenCat -> String contentSpec cf cat = -- if isPositionCat cf cat then "((Int,Int),String)" else "String" if isPositionCat cf cat then "((int * int) * string)" else "string" BNFC-2.9.5/src/BNFC/Backend/OCaml/CFtoOCamlLex.hs0000644000000000000000000002500607346545000017030 0ustar0000000000000000{-# LANGUAGE OverloadedStrings #-} {- BNF Converter: ocamllex Generator Copyright (C) 2005 Author: Kristofer Johannisson -} -- based on BNFC Haskell backend module BNFC.Backend.OCaml.CFtoOCamlLex (cf2ocamllex) where import Prelude hiding ((<>)) import Data.Char (ord) import qualified Data.List as List import Text.PrettyPrint hiding (render) import qualified Text.PrettyPrint as PP import BNFC.Abs import BNFC.CF import BNFC.Backend.Common (asciiKeywords, unicodeAndSymbols) import BNFC.Backend.OCaml.CFtoOCamlYacc (terminal) import BNFC.Backend.OCaml.OCamlUtil (mkEsc, ocamlTokenName) import BNFC.Lexing (mkRegMultilineComment) import BNFC.Utils (cstring, unless) cf2ocamllex :: String -> String -> CF -> String cf2ocamllex _ parserMod cf = unlines $ List.intercalate [""] [ header parserMod cf , cMacros , rMacros cf , uMacros cf , [ PP.render $ rules cf ] ] header :: String -> CF -> [String] header parserMod cf = List.intercalate [""] . filter (not . null) $ concat [ [ [ "(* Lexer definition for ocamllex. 
*)" , "" , "(* preamble *)" , "{" , "open " ++ parserMod , "open Lexing" ] ] , hashtables cf , [ [ "let unescapeInitTail (s:string) : string =" , " let rec unesc s = match s with" , " '\\\\'::c::cs when List.mem c ['\\\"'; '\\\\'; '\\\''] -> c :: unesc cs" , " | '\\\\'::'n'::cs -> '\\n' :: unesc cs" , " | '\\\\'::'t'::cs -> '\\t' :: unesc cs" , " | '\\\\'::'r'::cs -> '\\r' :: unesc cs" -- " | '\\\\'::'f'::cs -> '\\f' :: unesc cs", -- \f not supported by ocaml , " | '\\\"'::[] -> []" , " | '\\\''::[] -> []" , " | c::cs -> c :: unesc cs" , " | _ -> []" , " (* explode/implode from caml FAQ *)" , " in let explode (s : string) : char list =" , " let rec exp i l =" , " if i < 0 then l else exp (i - 1) (s.[i] :: l) in" , " exp (String.length s - 1) []" , " in let implode (l : char list) : string =" , " let res = Buffer.create (List.length l) in" , " List.iter (Buffer.add_char res) l;" , " Buffer.contents res" , " in implode (unesc (List.tl (explode s)))" , "" , "let incr_lineno (lexbuf:Lexing.lexbuf) : unit =" , " let pos = lexbuf.lex_curr_p in" , " lexbuf.lex_curr_p <- { pos with" , " pos_lnum = pos.pos_lnum + 1;" , " pos_bol = pos.pos_cnum;" , " }" , "}" ] ] ] -- | Set up hashtables for reserved symbols and words. hashtables :: CF -> [[String]] hashtables cf = [ ht "symbol_table" $ unicodeAndSymbols cf , ht "resword_table" $ asciiKeywords cf ] where ht :: String -> [String] -> [String] ht table syms = unless (null syms) $ [ unwords [ "let", table, "= Hashtbl.create", show (length syms) ] , unwords [ "let _ = List.iter (fun (kwd, tok) -> Hashtbl.add", table, "kwd tok)" ] , concat [ " [", concat (List.intersperse ";" keyvals), "]" ] ] where keyvals = map (\ s -> concat [ "(", mkEsc s, ", ", terminal cf s, ")" ]) syms cMacros :: [String] cMacros = [ "(* BNFC character classes *)" , "let _letter = ['a'-'z' 'A'-'Z' '\\192' - '\\255'] # ['\\215' '\\247'] (* isolatin1 letter FIXME *)" , "let _upper = ['A'-'Z' '\\192'-'\\221'] # '\\215' (* capital isolatin1 letter FIXME *)" , "let _lower = ['a'-'z' '\\222'-'\\255'] # '\\247' (* small isolatin1 letter FIXME *)" , "let _digit = ['0'-'9'] (* _digit *)" , "let _idchar = _letter | _digit | ['_' '\\''] (* identifier character *)" , "let _universal = _ (* universal: any character *)" ] rMacros :: CF -> [String] rMacros cf | null symbs = [] | otherwise = [ "(* reserved words consisting of special symbols *)" , unwords $ "let rsyms =" : List.intersperse "|" (map mkEsc symbs) ] where symbs = unicodeAndSymbols cf -- user macros, derived from the user-defined tokens uMacros :: CF -> [String] uMacros cf = if null res then [] else "(* user-defined token types *)" : res where res = ["let " ++ name ++ " = " ++ rep | (name, rep, _, _) <- userTokens cf] -- | Returns the tuple of @(reg_name, reg_representation, token_name, is_position_token)@. userTokens :: CF -> [(String, String, String, Bool)] userTokens cf = [ (ocamlTokenName name, printRegOCaml reg, name, pos) | TokenReg n pos reg <- cfgPragmas cf , let name = wpThing n ] -- | Make OCamlLex rule -- >>> mkRule "token" [("REGEX1","ACTION1"),("REGULAREXPRESSION2","ACTION2"),("...","...")] -- (* lexing rules *) -- rule token = -- parse REGEX1 { ACTION1 } -- | REGULAREXPRESSION2 -- { ACTION2 } -- | ... { ... } -- -- If no regex are given, we dont create a lexer rule: -- >>> mkRule "empty" [] -- mkRule :: Doc -> [(Doc,Doc)] -> Doc mkRule _ [] = empty mkRule entrypoint (r:rs) = vcat [ "(* lexing rules *)" , "rule" <+> entrypoint <+> "=" , nest 2 $ hang "parse" 4 $ vcat $ nest 2 (mkOne r) : map (("|" <+>) . 
mkOne) rs ] where mkOne (regex, action) = regex $$ nest 8 (hsep ["{", action, "}"]) -- | Create regex for single line comments -- >>> mkRegexSingleLineComment "--" -- "--" (_ # '\n')* -- >>> mkRegexSingleLineComment "\"" -- "\"" (_ # '\n')* mkRegexSingleLineComment :: String -> Doc mkRegexSingleLineComment s = cstring s <+> "(_ # '\\n')*" -- | Create regex for multiline comments. -- >>> mkRegexMultilineComment "" -- "" ] -- | >>> tag "test" -- "" tag :: String -> String tag s = "<" ++ s ++ ">" element :: String -> [String] -> String element t ts = tag ("!ELEMENT " ++ t ++ " " ++ alts ts) attlist t a = tag ("!ATTLIST " ++ t ++ " " ++ a ++ " CDATA #REQUIRED") elemAtt t a ts = element t ts ++++ attlist t a elemt t = elemAtt t "name" elemc :: Cat -> [(Fun, String)] -> String elemc cat fs = unlines $ element (prettyShow cat) (map snd fs) : [element f [] | (f,_) <- fs] elemEmp :: String -> String elemEmp t = elemAtt t "value" [] alts :: [String] -> String alts ts = if null ts then "EMPTY" else parenth $ intercalate " | " ts -- choose between these two encodings: elemData b = if b then elemDataConstr else elemDataNotyp efunDef b = if b then efunDefConstr else efunDefNotyp endtagDef b = if b then endtagDefConstr else endtagDefNotyp -- coding 0: ---- not finished -- to show both types and constructors as tags; -- lengthy, but validation guarantees type correctness -- flag -xmlt elemDataConstrs cf (cat,fcs) = elemc cat [(f,rhsCat cf f cs) | (f,cs) <- fcs] efunDefConstrs :: String efunDefConstrs = "elemFun i t x = [P.replicate (i+i) ' ' ++ tag t ++ \" \" ++ etag x]" endtagDefConstrs :: String endtagDefConstrs = "endtag _ c = tag (\"/\" ++ c)" -- coding 1: -- to show constructors as empty tags; -- shorter than 0, but validation still guarantees type correctness -- flag -xmlt elemDataConstr cf (cat,fcs) = elemc cat [(f,rhsCat cf f cs) | (f,cs) <- fcs] efunDefConstr = "elemFun i t x = [P.replicate (i+i) ' ' ++ tag t ++ \" \" ++ etag x]" endtagDefConstr = "endtag _ c = tag (\"/\" ++ c)" -- coding 2: -- constructors as tags, no types. -- clumsy DTD, but nice trees. Validation guarantees type correctness -- flag -xml elemDataNotyp cf (_,fcs) = unlines [element f [rhsCatNot cf cs] | (f,cs) <- fcs] efunDefNotyp = "elemFun i _ x = [P.replicate (i+i) ' ' ++ tag x]" endtagDefNotyp = "endtag f _ = tag (\"/\" ++ f)" -- to show constructors as attributes; -- nice, but validation does not guarantee type correctness. -- Therefore rejected. -- elemDataAttr cf (cat,fcs) = elemt cat (nub [rhsCat cf cs | (_,cs) <- fcs]) -- efunDefAttr = "elemFun i t x = [replicate (i+i) ' ' ++ tag (t ++ \" name = \" ++ x)]" rhsCat :: CF -> Fun -> [Cat] -> String rhsCat cf fun cs = parenth (intercalate ", " (fun : map (render . symbCat cf) cs)) rhsCatNot cf cs = if null cs then "EMPTY" else intercalate ", " (map (render . symbCatNot cf) cs) symbCat :: CF -> Cat -> Doc symbCat cf c | isList c = pretty (normCatOfList c) <> if isEmptyListCat cf c then "*" else "+" | otherwise = pretty c symbCatNot :: CF -> Cat -> Doc symbCatNot cf c | isList c = funs (normCatOfList c) <> if isEmptyListCat cf c then "*" else "+" | otherwise = funs c where funs k = case lookup k (cf2data cf) of Just [] -> "EMPTY" Just fcs -> parens $ sep $ punctuate "|" $ map (text . 
fst) fcs _ -> parens $ pretty k parenth s = "(" ++ s ++ ")" -- derive an XML printer from a BNF grammar cf2XMLPrinter :: Bool -> SharedOptions -> String -> CF -> String cf2XMLPrinter typ opts absMod cf = unlines [ "{-# LANGUAGE LambdaCase #-}", pragmas opts, prologue typ opts absMod, integerRule cf, doubleRule cf, stringRule cf, if hasIdent cf then identRule cf else "", unlines [ownPrintRule cf own | (own,_) <- tokenPragmas cf], rules cf ] pragmas :: SharedOptions -> String pragmas opts = if target opts == TargetHaskellGadt then "{-# LANGUAGE FlexibleInstances, GADTs #-}" else "" prologue :: Bool -> SharedOptions -> String -> String prologue b opts _ = unlines [ "-- Pretty printing to XML", "", "module " ++ xmlFileM opts +++ "where", "", "import Prelude", " ( Char, Double, Integer, String", " , (.), ($), (+), (++)", " )", "import qualified Prelude as P", " ( Show(..), Int", " , concat, concatMap, replicate, unlines", " )", "", "import " ++ absFileM opts, "", "-- the top-level printing method", "printXML :: XPrint a => a -> String", "printXML = render . prt 0", "", "render :: [String] -> String", "render = P.unlines", "", "-- the printer class does the job", "class XPrint a where", " prt :: P.Int -> a -> [String]", " prtList :: P.Int -> [a] -> [String]", " prtList = P.concatMap . prt", "", "instance XPrint a => XPrint [a] where", " prt = prtList", "", "tag, etag :: String -> String", "tag t = \"<\" ++ t ++ \">\"", "etag t = \"<\" ++ t ++ \"/>\"", "", "elemTok, elemTokS :: P.Show a => P.Int -> String -> a -> [String]", "elemTok i t x = [P.replicate (i+i) ' ' ++ tag (t ++ \" value = \" ++ P.show x ++ \" /\")]", "elemTokS i t x = elemTok i t (P.show x)", "", "elemFun :: P.Int -> String -> String -> [String]", efunDef b, "", "endtag :: String -> String -> String", endtagDef b, "" ] integerRule cf = showsPrintRule cf "Integer" doubleRule cf = showsPrintRule cf "Double" stringRule cf = showsPrintRule cf "Char" ++++ " prtList i xs = elemTok i \"String\" xs" showsPrintRule _ t = unlines [ "instance XPrint " ++ t ++ " where", " prt i x = elemTokS i" +++ "\"" ++ t ++ "\"" +++ "x" ] identRule cf = ownPrintRule cf catIdent ownPrintRule :: CF -> TokenCat -> String ownPrintRule cf cat = unlines $ [ "instance XPrint " ++ cat ++ " where" , " prt i (" ++ cat ++ posn ++ ") = elemTok i" +++ "\"" ++ cat ++ "\"" +++ "x" ] where posn = if isPositionCat cf cat then " (_,x)" else " x" rules :: CF -> String rules cf = unlines $ map (\ (s, xs) -> case_fun s (map (second toArgs) xs)) $ cf2data cf where toArgs args = names (map (catToVar ["prt"]) args) (0 :: Int) names [] _ = [] names (x:xs) n | x `elem` xs = (x ++ show n) : names xs (n+1) | otherwise = x : names xs n case_fun :: Cat -> [(String, [String])] -> String case_fun cat xs = unlines $ concat [ [ "instance XPrint" +++ s +++ "where" , " prt i'" +++ "= \\case" ] , (`map` xs) $ \ (c, xx) -> " " ++ c +++ unwords xx +++ "-> P.concat $ " +++ "elemFun i' \"" ++ s ++ "\" \"" ++ c ++ "\"" +++ unwords [": prt (i'+1)" +++ x | x <- xx] +++ ":" +++ "[[P.replicate (i'+i') ' ' ++ endtag \"" ++ c ++ "\" \"" ++ s ++ "\"]]" ] where s = prettyShow cat BNFC-2.9.5/src/BNFC/CF.hs0000644000000000000000000006350107346545000012560 0ustar0000000000000000{-# LANGUAGE DeriveTraversable #-} -- implies DeriveFunctor, DeriveFoldable {-# LANGUAGE FlexibleInstances #-} -- implies TypeSynonymInstances {-# LANGUAGE LambdaCase #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE PatternGuards #-} {-# LANGUAGE PatternSynonyms #-} {-# LANGUAGE RecordWildCards #-} {- BNF Converter: Abstract syntax 
Copyright (C) 2004 Author: Markus Forsberg, Michael Pellauer, Aarne Ranta Copyright (C) 2017-2021 Andreas Abel -} module BNFC.CF where import Prelude hiding ((<>)) import Control.Arrow ( (&&&) ) import Control.Monad ( guard ) import Data.Char import Data.Ord ( Down(..) ) import qualified Data.Either as Either import Data.Function ( on ) import Data.List ( nub, sort, group ) import qualified Data.List as List import Data.List.NonEmpty (pattern (:|)) import qualified Data.List.NonEmpty as List1 import Data.Maybe import Data.Map (Map) import qualified Data.Map as Map import Data.Set (Set) import qualified Data.Set as Set import Data.String (IsString(..)) import BNFC.Abs (Reg()) import BNFC.Par (pCat) import BNFC.Lex (tokens) import qualified BNFC.Abs as Abs import BNFC.PrettyPrint import BNFC.Utils (spanEnd) type List1 = List1.NonEmpty -- | A context free grammar consists of a set of rules and some extended -- information (e.g. pragmas, literals, symbols, keywords). type CF = CFG RFun -- | A rule consists of a function name, a main category and a sequence of -- terminals and non-terminals. -- -- @ -- function_name . Main_Cat ::= sequence -- @ type Rule = Rul RFun -- | Polymorphic rule type. -- N.B.: Was originally made polymorphic for the sake of removed backend --profile. data Rul function = Rule { funRule :: function -- ^ The function (semantic action) of a rule. -- In order to be able to generate data types this must be a constructor -- (or an identity function). , valRCat :: RCat -- ^ The value category, i.e., the defined non-terminal. , rhsRule :: SentForm -- ^ The sentential form, i.e., -- the list of (non)terminals in the right-hand-side of a rule. , internal :: InternalRule -- ^ Is this an "internal" rule only for the AST and printing, -- not for parsing? } deriving (Eq, Functor) data InternalRule = Internal -- ^ @internal@ rule (only for AST & printer) | Parsable -- ^ ordinary rule (also for parser) deriving (Eq) instance Pretty function => Pretty (Rul function) where pretty (Rule f cat rhs internal) = (if internal == Internal then ("internal" <+>) else id) $ pretty f <> "." <+> pretty cat <+> "::=" <+> sep (map (either pretty (text . show)) rhs) -- | A sentential form is a sequence of non-terminals or terminals. type SentForm = [Either Cat String] -- | Type of context-free grammars (GFG). data CFG function = CFG { cfgPragmas :: [Pragma] , cfgUsedCats :: Set Cat -- ^ Categories used by the parser. , cfgLiterals :: [Literal] -- ^ @Char, String, Ident, Integer, Double@. -- @String@s are quoted strings, -- and @Ident@s are unquoted. , cfgSymbols :: [Symbol] -- ^ Symbols in the grammar, e.g. “*”, “->”. , cfgKeywords :: [KeyWord] -- ^ Reserved words, e.g. @if@, @while@. , cfgReversibleCats :: [Cat] -- ^ Categories that can be made left-recursive. , cfgRules :: [Rul function] , cfgSignature :: Signature -- ^ Types of rule labels, computed from 'cfgRules'. } deriving (Functor) -- instance (Show function) => Show (CFG function) where -- show CFG{..} = unlines $ map show cfgRules -- | Types of the rule labels, together with the position of the rule label. type Signature = Map String (WithPosition Type) -- | Type of a non-terminal. type Base = Base' String data Base' a = BaseT a | ListT (Base' a) deriving (Eq, Ord, Functor) -- | Type of a rule label. data Type = FunT [Base] Base deriving (Eq, Ord) -- | Placeholder for a type. dummyBase :: Base dummyBase = BaseT "" -- | Placeholder for a function type. 
dummyType :: Type dummyType = FunT [] dummyBase instance Show Base where show (BaseT x) = x show (ListT t) = "[" ++ show t ++ "]" instance Show Type where show (FunT ts t) = unwords $ map show ts ++ ["->", show t] -- | Expressions for function definitions. data Exp' f = App f Type [Exp' f] -- ^ (Possibly defined) label applied to expressions. -- The function 'Type' is inferred by the type checker. | Var String -- ^ Function parameter. | LitInt Integer | LitDouble Double | LitChar Char | LitString String deriving (Eq) type Exp = Exp' String instance (IsFun f, Pretty f) => Pretty (Exp' f) where prettyPrec p e = case listView e of Right es -> brackets $ hcat $ punctuate ", " $ map (prettyPrec 0) es Left (Var x) -> text x Left (App f _ []) -> prettyPrec p f Left (App f _ [e1,e2]) | isConsFun f -> parensIf (p > 0) $ hsep [ prettyPrec 1 e1, ":", prettyPrec 0 e2 ] Left (App f _ es) -> parensIf (p > 1) $ hsep $ prettyPrec 1 f : map (prettyPrec 2) es Left (LitInt n) -> (text . show) n Left (LitDouble x) -> (text . show) x Left (LitChar c) -> (text . show) c Left (LitString s) -> (text . show) s where listView (App f _ []) | isNilFun f = Right [] listView (App f _ [e1,e2]) | isConsFun f , Right es <- listView e2 = Right $ e1:es listView e0 = Left e0 -- | Pragmas. data Pragma = CommentS String -- ^ for single line comments | CommentM (String, String) -- ^ for multiple-line comments. | TokenReg RString Bool Reg -- ^ for tokens | EntryPoints [RCat] | Layout LayoutKeyWords | LayoutStop [KeyWord] | LayoutTop Symbol -- ^ Separator for top-level layout. | FunDef Define data Define = Define { defName :: RFun , defArgs :: Telescope -- ^ Argument types inferred by the type checker. , defBody :: Exp , defType :: Base -- ^ Type of the body, inferred by the type checker. } -- | Function arguments with type. type Telescope = [(String, Base)] -- | For use with 'partitionEithers'. isFunDef :: Pragma -> Either Pragma Define isFunDef = \case FunDef d -> Right d p -> Left p -- | All 'define' pragmas of the grammar. definitions :: CFG f -> [Define] definitions cf = [ def | FunDef def <- cfgPragmas cf ] ------------------------------------------------------------------------------ -- Layout ------------------------------------------------------------------------------ type LayoutKeyWords = [(KeyWord, Delimiters)] -- | List delimiters. data Delimiters = Delimiters { listSep :: Symbol -- ^ List separator. , listOpen :: Symbol -- ^ List opening delimiter. , listClose :: Symbol -- ^ List closing delimiter. } deriving Show -- | User-defined regular expression tokens tokenPragmas :: CFG f -> [(TokenCat,Reg)] tokenPragmas cf = [ (wpThing name, e) | TokenReg name _ e <- cfgPragmas cf ] -- | The names of all user-defined tokens. tokenNames :: CFG f -> [String] tokenNames cf = map fst (tokenPragmas cf) layoutPragmas :: CF -> (Maybe Symbol, LayoutKeyWords, [KeyWord]) layoutPragmas cf = ( listToMaybe [ sep | LayoutTop sep <- ps ] -- if there's top-level layout , concat [ kws | Layout kws <- ps ] -- layout-block inducing words , concat [ kws | LayoutStop kws <- ps ] -- layout-block aborting words ) where ps = cfgPragmas cf hasLayout_ :: (Maybe Symbol, LayoutKeyWords, [KeyWord]) -> Bool hasLayout_ (top, kws, _) = isJust top || not (null kws) -- (True,[],_) means: top-level layout only hasLayout :: CF -> Bool hasLayout = hasLayout_ . layoutPragmas -- | Literal: builtin-token types Char, String, Ident, Integer, Double. 
type Literal = String type Symbol = String type KeyWord = String ------------------------------------------------------------------------------ -- Identifiers with position information ------------------------------------------------------------------------------ -- | Source positions. data Position = NoPosition | Position { posFile :: FilePath -- ^ Name of the grammar file. , posLine :: Int -- ^ Line in the grammar file. , posColumn :: Int -- ^ Column in the grammar file. } deriving (Show, Eq, Ord) prettyPosition :: Position -> String prettyPosition = \case NoPosition -> "" Position file line col -> List.intercalate ":" [ file, show line, show col, "" ] data WithPosition a = WithPosition { wpPosition :: Position , wpThing :: a } deriving (Show, Functor, Foldable, Traversable) -- | Ignore position in equality and ordering. instance Eq a => Eq (WithPosition a) where (==) = (==) `on` wpThing instance Ord a => Ord (WithPosition a) where compare = compare `on` wpThing instance Pretty a => Pretty (WithPosition a) where pretty = pretty . wpThing noPosition :: a -> WithPosition a noPosition = WithPosition NoPosition -- | A "ranged string" (terminology from Agda code base). type RString = WithPosition String -- | Prefix string with pretty-printed position information. blendInPosition :: RString -> String blendInPosition (WithPosition pos msg) | null p = msg | otherwise = unwords [ p, msg ] where p = prettyPosition pos type RCat = WithPosition Cat valCat :: Rul fun -> Cat valCat = wpThing . valRCat npRule :: Fun -> Cat -> SentForm -> InternalRule -> Rule npRule f c r internal = Rule (noPosition f) (noPosition c) r internal npIdentifier :: String -> Abs.Identifier npIdentifier x = Abs.Identifier ((0, 0), x) -- identifierName :: Identifier -> String -- identifierName (Identifier (_, x)) = x -- identifierPosition :: String -> Identifier -> Position -- identifierPosition file (Identifier ((line, col), _)) = Position file line col ------------------------------------------------------------------------------ -- Categories ------------------------------------------------------------------------------ -- | Categories are the non-terminals of the grammar. data Cat = Cat String -- ^ Ordinary non-terminal. | TokenCat TokenCat -- ^ Token types (like @Ident@, @Integer@, ..., user-defined). | ListCat Cat -- ^ List non-terminals, e.g., @[Ident]@, @[Exp]@, @[Exp1]@. | CoercCat String Integer -- ^ E.g. @Exp1@, @Exp2@. deriving (Eq, Ord) type TokenCat = String type BaseCat = String -- An alias for Cat used in many backends: type NonTerminal = Cat -- | Render category symbols as strings catToStr :: Cat -> String catToStr = \case Cat s -> s TokenCat s -> s ListCat c -> "[" ++ catToStr c ++ "]" CoercCat s i -> s ++ show i -- | This instance is for the Hspec test suite. instance Show Cat where show = catToStr instance Pretty Cat where pretty = \case Cat s -> text s TokenCat s -> text s ListCat c -> brackets $ pretty c CoercCat s i -> text s <> pretty i -- | Reads a string into a category. This should only need to handle -- the case of simple categories (with or without coercion) since list -- categories are parsed in the grammar already. To be on the safe side here, -- we still call the parser function that parses categries. strToCat :: String -> Cat strToCat s = case pCat (tokens s) of Right c -> cat2cat c Left _ -> Cat s -- error $ "Error parsing cat " ++ s ++ " (" ++ e ++ ")" -- Might be one of the "Internal cat" which are not -- really parsable... 
where cat2cat = \case Abs.IdCat (Abs.Identifier (_pos, x)) | null ds -> if c `elem` specialCatsP then TokenCat c else Cat c | otherwise -> CoercCat c (read ds) where (ds, c) = spanEnd isDigit x Abs.ListCat c -> ListCat (cat2cat c) -- Build-in categories contants catString, catInteger, catDouble, catChar, catIdent :: TokenCat catString = "String" catInteger = "Integer" catDouble = "Double" catChar = "Char" catIdent = "Ident" -- | Token categories corresponding to base types. baseTokenCatNames :: [TokenCat] baseTokenCatNames = [ catChar, catDouble, catInteger, catString ] -- all literals -- the parser needs these specialCatsP :: [TokenCat] specialCatsP = catIdent : baseTokenCatNames -- | Does the category correspond to a data type? isDataCat :: Cat -> Bool isDataCat c = isDataOrListCat c && not (isList c) isDataOrListCat :: Cat -> Bool isDataOrListCat (CoercCat _ _) = False isDataOrListCat (Cat ('@':_)) = False isDataOrListCat (ListCat c) = isDataOrListCat c isDataOrListCat _ = True -- | Categories C1, C2,... (one digit at the end) are variants of C. This function -- returns true if two category are variants of the same abstract category. -- E.g. -- -- >>> sameCat (Cat "Abc") (CoercCat "Abc" 44) -- True sameCat :: Cat -> Cat -> Bool sameCat = (==) `on` normCat -- | Removes precedence information. C1 => C, [C2] => [C] normCat :: Cat -> Cat normCat (ListCat c) = ListCat (normCat c) normCat (CoercCat c _) = Cat c normCat c = c normCatOfList :: Cat -> Cat normCatOfList = normCat . catOfList -- | When given a list Cat, i.e. '[C]', it removes the square -- brackets, and adds the prefix List, i.e. 'ListC'. (for Happy and -- Latex) identCat :: Cat -> String identCat (ListCat c) = "List" ++ identCat c identCat c = catToStr c identType :: Base -> String identType (ListT t) = "List" ++ identType t identType (BaseT s) = s -- | Reconstruct (non-coercion) category from a type, given a list of -- what should be the token categories. catOfType :: [TokenCat] -> Base -> Cat catOfType tk = \case ListT t -> ListCat $ catOfType tk t BaseT s | s `elem` tk -> TokenCat s | otherwise -> Cat s isList :: Cat -> Bool isList (ListCat _) = True isList _ = False -- | Get the underlying category identifier. baseCat :: Cat -> Either BaseCat TokenCat baseCat = \case ListCat c -> baseCat c CoercCat x _ -> Left x Cat x -> Left x TokenCat x -> Right x isTokenCat :: Cat -> Bool isTokenCat (TokenCat _) = True isTokenCat _ = False maybeTokenCat :: Cat -> Maybe TokenCat maybeTokenCat = \case TokenCat c -> Just c _ -> Nothing -- | Unwraps the list constructor from the category name. -- E.g. @[C1] => C1@. catOfList :: Cat -> Cat catOfList (ListCat c) = c catOfList c = c ------------------------------------------------------------------------------ -- Functions ------------------------------------------------------------------------------ -- | Fun is the function name of a rule. type Fun = String type RFun = RString instance IsString RFun where fromString = noPosition class IsFun a where funName :: a -> String isNilFun :: a -> Bool -- ^ Is this the constructor for empty lists? isOneFun :: a -> Bool -- ^ Is this the constructor for singleton lists? isConsFun :: a -> Bool -- ^ Is this the list constructor? isConcatFun :: a -> Bool -- ^ Is this list concatenation? -- | Is this function just a coercion? (I.e. 
the identity) isCoercion :: a -> Bool isNilFun = funNameSatisfies (== "[]") isOneFun = funNameSatisfies (== "(:[])") isConsFun = funNameSatisfies (== "(:)") isConcatFun = funNameSatisfies (== "(++)") isCoercion = funNameSatisfies (== "_") instance IsFun String where funName = id instance IsFun a => IsFun (WithPosition a) where funName = funName . wpThing instance IsFun a => IsFun (Rul a) where funName = funName . funRule instance IsFun a => IsFun (k, a) where funName = funName . snd funNameSatisfies :: IsFun a => (String -> Bool) -> a -> Bool funNameSatisfies f = f . funName isDefinedRule :: IsFun a => a -> Bool isDefinedRule = funNameSatisfies $ \case (x:_) -> isLower x [] -> error "isDefinedRule: empty function name" -- not coercion or defined rule isProperLabel :: IsFun a => a -> Bool isProperLabel f = not (isCoercion f || isDefinedRule f) isNilCons :: IsFun a => a -> Bool isNilCons f = isNilFun f || isOneFun f || isConsFun f || isConcatFun f ------------------------------------------------------------------------------ -- | The abstract syntax of a grammar. type Data = (Cat, [(String, [Cat])]) -- | @firstEntry@ returns the first of the @entrypoints@, -- or (if none), the first parsable @Cat@egory appearing in the grammar. firstEntry :: CF -> Cat firstEntry cf = List1.head $ allEntryPoints cf -- aggressively ban nonunique names (AR 31/5/2012) -- | Constructors and categories. allNames :: CF -> [RString] allNames cf = [ f | f <- map funRule $ cfgRules cf , not $ isNilCons f , not $ isCoercion f ] ++ allCatsIdNorm cf -- Put the categories after the labels so that the error location -- for a non-unique name is at the label rather than the category. -- | Get all elements with more than one occurrence. filterNonUnique :: (Ord a) => [a] -> [a] filterNonUnique xs = [ x | (x:_:_) <- group $ sort xs ] -- | Extract the comment pragmas. commentPragmas :: [Pragma] -> [Pragma] commentPragmas = filter isComment where isComment (CommentS _) = True isComment (CommentM _) = True isComment _ = False lookupRule :: Eq f => f -> [Rul f] -> Maybe (Cat, SentForm) lookupRule f = lookup f . map unRule where unRule (Rule f' c rhs _internal) = (f', (wpThing c, rhs)) -- | Returns all parseable rules that construct the given Cat. -- Whitespace separators have been removed. rulesForCat :: CF -> Cat -> [Rule] rulesForCat cf cat = [ removeWhiteSpaceSeparators r | r <- cfgRules cf, isParsable r, valCat r == cat] removeWhiteSpaceSeparators :: Rul f -> Rul f removeWhiteSpaceSeparators = mapRhs $ mapMaybe $ either (Just . Left) $ \ sep -> if all isSpace sep then Nothing else Just (Right sep) -- | Modify the 'rhsRule' part of a 'Rule'. mapRhs :: (SentForm -> SentForm) -> Rul f -> Rul f mapRhs f r = r { rhsRule = f $ rhsRule r } -- | Like rulesForCat but for normalized value categories. -- I.e., `rulesForCat (Cat "Exp")` will return rules for category Exp but also -- Exp1, Exp2... in case of coercion rulesForNormalizedCat :: CF -> Cat -> [Rule] rulesForNormalizedCat cf cat = [r | r <- cfgRules cf, normCat (valCat r) == cat] -- | As rulesForCat, but this version doesn't exclude internal rules. rulesForCat' :: CF -> Cat -> [Rule] rulesForCat' cf cat = [r | r <- cfgRules cf, valCat r == cat] -- | Get all categories of a grammar matching the filter. -- (No Cat w/o production returned; no duplicates.) allCats :: (InternalRule -> Bool) -> CFG f -> [Cat] allCats pred = nub . map valCat . filter (pred . internal) . cfgRules -- | Get all categories of a grammar. -- (No Cat w/o production returned; no duplicates.) 
reallyAllCats :: CFG f -> [Cat] reallyAllCats = allCats $ const True allParserCats :: CFG f -> [Cat] allParserCats = allCats (== Parsable) -- | Gets all normalized identified Categories allCatsIdNorm :: CF -> [RString] allCatsIdNorm = nub . map (fmap (identCat . normCat) . valRCat) . cfgRules -- | Get all normalized Cat allCatsNorm :: CF -> [Cat] allCatsNorm = nub . map (normCat . valCat) . cfgRules -- | Get all normalized Cat allParserCatsNorm :: CFG f -> [Cat] allParserCatsNorm = nub . map normCat . allParserCats -- | Is the category is used on an rhs? -- Includes internal rules. isUsedCat :: CFG f -> Cat -> Bool isUsedCat cf = (`Set.member` cfgUsedCats cf) -- | Group all parsable categories with their rules. -- Deletes whitespace separators, as they will not become part of the parsing rules. ruleGroups :: CF -> [(Cat,[Rule])] ruleGroups cf = [(c, rulesForCat cf c) | c <- allParserCats cf] -- | Group all categories with their rules including internal rules. ruleGroupsInternals :: CF -> [(Cat,[Rule])] ruleGroupsInternals cf = [(c, rulesForCat' cf c) | c <- reallyAllCats cf] -- | Get all literals of a grammar. (e.g. String, Double) literals :: CFG f -> [TokenCat] literals cf = cfgLiterals cf ++ map fst (tokenPragmas cf) -- | Get the keywords of a grammar. reservedWords :: CFG f -> [String] reservedWords = sort . cfgKeywords -- | Canonical, numbered list of symbols and reserved words. (These do -- not end up in the AST.) cfTokens :: CFG f -> [(String,Int)] cfTokens cf = zip (sort (cfgSymbols cf ++ reservedWords cf)) [1..] -- NOTE: some backends (incl. Haskell) assume that this list is sorted. -- | Comments can be defined by the 'comment' pragma comments :: CF -> ([(String,String)],[String]) comments cf = ([p | CommentM p <- xs], [s | CommentS s <- xs]) where xs = commentPragmas (cfgPragmas cf) -- | Number of block comment forms defined in the grammar file. numberOfBlockCommentForms :: CF -> Int numberOfBlockCommentForms = length . fst . comments -- built-in categories (corresponds to lexer) -- | Whether the grammar uses the predefined Ident type. hasIdent :: CFG f -> Bool hasIdent cf = isUsedCat cf $ TokenCat catIdent -- these need new datatypes -- | Categories corresponding to tokens. These end up in the -- AST. (unlike tokens returned by 'cfTokens') specialCats :: CF -> [TokenCat] specialCats cf = (if hasIdent cf then (catIdent:) else id) (map fst (tokenPragmas cf)) -- * abstract syntax trees: data type definitions -- -- The abstract syntax, instantiated by the Data type, is the type signatures -- of all the constructors. -- | Return the abstract syntax of the grammar. -- All categories are normalized, so a rule like: -- EAdd . Exp2 ::= Exp2 "+" Exp3 ; -- Will give the following signature: EAdd : Exp -> Exp -> Exp getAbstractSyntax :: CF -> [Data] getAbstractSyntax cf = [ ( c, nub (constructors c) ) | c <- allCatsNorm cf ] where constructors cat = do rule <- cfgRules cf let f = funRule rule guard $ not (isDefinedRule f) guard $ not (isCoercion f) guard $ normCat (valCat rule) == cat let cs = [normCat c | Left c <- rhsRule rule ] return (wpThing f, cs) -- | All the functions below implement the idea of getting the -- abstract syntax of the grammar with some variation but they seem to do a -- poor job at handling corner cases involving coercions. -- Use 'getAbstractSyntax' instead if possible. 
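-- A small illustration of the category normalization these functions rely on
-- (using 'normCat' defined above; @Exp@ is just a made-up example category):
--
-- >>> normCat (CoercCat "Exp" 2)
-- Exp
-- >>> normCat (ListCat (CoercCat "Exp" 2))
-- [Exp]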
cf2data' :: (Cat -> Bool) -> CF -> [Data] cf2data' predicate cf = [(cat, nub (map mkData [r | r <- cfgRules cf, let f = funRule r, not (isDefinedRule f), not (isCoercion f), cat == normCat (valCat r)])) | cat <- nub $ map normCat $ filter predicate $ reallyAllCats cf ] where mkData (Rule f _ its _) = (wpThing f, [normCat c | Left c <- its ]) -- translates a grammar to a Data object. cf2data :: CF -> [Data] cf2data = cf2data' $ isDataCat . normCat -- translates to a Data with List categories included. cf2dataLists :: CF -> [Data] cf2dataLists = cf2data' $ isDataOrListCat . normCat specialData :: CF -> [Data] specialData cf = [(TokenCat name, [(name, [TokenCat catString])]) | name <- specialCats cf] -- | Get the type of a rule label. sigLookup :: IsFun a => a -> CF -> Maybe (WithPosition Type) sigLookup f = Map.lookup (funName f) . cfgSignature -- | Checks if the rule is parsable. isParsable :: Rul f -> Bool isParsable = (Parsable ==) . internal hasNilRule :: [Rule] -> Maybe Rule hasNilRule = List.find isNilFun -- | Gets the singleton rule out of the rules for a list. hasSingletonRule :: [Rule] -> Maybe Rule hasSingletonRule = List.find isOneFun -- | Sort rules by descending precedence. sortRulesByPrecedence :: [Rule] -> [(Integer,Rule)] sortRulesByPrecedence = List.sortOn (Down . fst) . map (precRule &&& id) -- | Is the given category a list category parsing also empty lists? isEmptyListCat :: CF -> Cat -> Bool isEmptyListCat cf = any isNilFun . rulesForCat' cf isNonterm :: Either Cat String -> Bool isNonterm = Either.isLeft -- used in Happy to parse lists of form 'C t [C]' in reverse order -- applies only if the [] rule has no terminals revSepListRule :: Rul f -> Rul f revSepListRule (Rule f c ts internal) = Rule f c (xs : x : sep) internal where (x,sep,xs) = (head ts, init (tail ts), last ts) -- invariant: test in findAllReversibleCats have been performed findAllReversibleCats :: CF -> [Cat] findAllReversibleCats cf = [c | (c,r) <- ruleGroups cf, isRev c r] where isRev c = \case [r1,r2] | isList c -> if isConsFun (funRule r2) then tryRev r2 r1 else isConsFun (funRule r1) && tryRev r1 r2 _ -> False tryRev (Rule f _ ts@(x:_:_) _) r = isEmptyNilRule r && isConsFun f && isNonterm x && isNonterm (last ts) tryRev _ _ = False isEmptyNilRule :: IsFun a => Rul a -> Bool isEmptyNilRule (Rule f _ ts _) = isNilFun f && null ts -- | Returns the precedence of a category symbol. -- E.g. -- >>> precCat (CoercCat "Abc" 4) -- 4 precCat :: Cat -> Integer precCat (CoercCat _ i) = i precCat (ListCat c) = precCat c precCat _ = 0 precRule :: Rul f -> Integer precRule = precCat . valCat -- | Defines or uses the grammar token types like @Ident@? -- Excludes position tokens. hasIdentLikeTokens :: CFG g -> Bool hasIdentLikeTokens cf = hasIdent cf || or [ not b | TokenReg _ b _ <- cfgPragmas cf ] -- | Defines or uses the grammar @token@ types or @Ident@? hasTextualTokens :: CFG g -> Bool hasTextualTokens cf = hasIdent cf || or [ True | TokenReg{} <- cfgPragmas cf ] -- | Is there a @position token@ declaration in the grammar? hasPositionTokens :: CFG g -> Bool hasPositionTokens cf = or [ b | TokenReg _ b _ <- cfgPragmas cf ] -- | Does the category have a position stored in AST? isPositionCat :: CFG f -> TokenCat -> Bool isPositionCat cf cat = or [ b | TokenReg name b _ <- cfgPragmas cf, wpThing name == cat] -- | Categories that are entry points to the parser. 
-- -- These are either the declared @entrypoints@ (in the original order), -- or, if no @entrypoints@ were declared explicitly, -- all parsable categories (in the order of declaration in the grammar file). allEntryPoints :: CFG f -> List1 Cat allEntryPoints cf = case concat [ cats | EntryPoints cats <- cfgPragmas cf ] of [] -> List1.fromList $ allParserCats cf -- assumed to be non-empty c:cs -> fmap wpThing (c :| cs) BNFC-2.9.5/src/BNFC/Check/0000755000000000000000000000000007346545000012744 5ustar0000000000000000BNFC-2.9.5/src/BNFC/Check/EmptyTypes.hs0000644000000000000000000000473307346545000015432 0ustar0000000000000000{-# LANGUAGE LambdaCase #-} {-# LANGUAGE ScopedTypeVariables #-} -- | Check whether generated AST will have empty types. -- -- Internal rules are included. -- -- We compute by a saturation algorithm which token types are used in which non-terminal. -- A non-terminal does not use any token types, we flag an empty type. module BNFC.Check.EmptyTypes (emptyData) where import Data.Maybe import Data.Map (Map) import qualified Data.List as List import qualified Data.Map as Map import BNFC.CF -- | Compute the categories that have empty data type declarations in the abstract syntax. -- Disregards list types. emptyData :: forall f. (IsFun f) => [Rul f] -> [RCat] emptyData rs = [ pc | Rule _ pc _ _ <- rs , let c = wpThing pc , not $ isList c , Left x <- [baseCat c] , maybe True List.null $ Map.lookup x ruleMap ] where ruleMap :: Map BaseCat [f] ruleMap = Map.unionsWith (++) $ (`mapMaybe` rs) $ \case Rule f pc _ _ | not (isCoercion f), Left x <- baseCat (wpThing pc) -> Just $ Map.singleton x [f] | otherwise -> Nothing -- -- STILLBORN CODE: -- type UsedTokenTypes = Map BaseCat (Set TokenCat) -- -- | Not sure what emptyCats computes: -- emptyCats :: [Rul f] -> [RCat] -- emptyCats rs = -- [ pc -- | Rule _ pc _ _ <- rs -- , let c = wpThing pc -- , not $ isList c -- , Left x <- [baseCat c] -- , maybe False Set.null $ Map.lookup x usedTokenMap -- ] -- where -- -- The computation of the UsedTokenTypes is likely correct (but untested). -- usedTokenMap = saturate Map.empty -- -- standard least fixed-point iteration from below -- saturate m = if m' == m then m' else saturate m' -- where m' = step m -- -- step is monotone! -- step :: UsedTokenTypes -> UsedTokenTypes -- step m = Map.unionsWith Set.union $ map stepRule rs -- where -- -- Compute the used tokens for a NT based on a single rule, -- -- using the information we have already for the NTs on the rhs. -- stepRule (Rule _ (WithPosition _ c0) rhs _) = -- case baseCat c0 of -- Left x -> Map.singleton x $ Set.unions $ map typesFor rhsCats -- -- The TokenCat case is actually impossible, but this is consistent: -- Right x -> Map.singleton x $ Set.singleton x -- where -- rhsCats = mapMaybe (either (Just . 
baseCat) (const Nothing)) rhs -- typesFor = \case -- -- Not token cat: -- Left c -> Map.findWithDefault Set.empty c m -- -- TokenCat: -- Right c -> Set.singleton c BNFC-2.9.5/src/BNFC/GetCF.hs0000644000000000000000000006415307346545000013224 0ustar0000000000000000{- BNF Converter: Abstract syntax Copyright (C) 2004 Author: Markus Forsberg, Aarne Ranta -} {-# LANGUAGE CPP #-} {-# LANGUAGE DefaultSignatures #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE GeneralizedNewtypeDeriving #-} {-# LANGUAGE LambdaCase #-} {-# LANGUAGE PatternSynonyms #-} {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE TupleSections #-} {-# LANGUAGE TypeFamilies #-} -- for type equality ~ {-# LANGUAGE NoMonoLocalBinds #-} -- counteract TypeFamilies -- | Check LBNF input file and turn it into the 'CF' internal representation. module BNFC.GetCF ( parseCF , checkRule, transItem ) where import Control.Arrow (left) import Control.Monad.Reader (ReaderT, runReaderT, MonadReader(..), asks) import Control.Monad.State (State, evalState, get, modify) import Control.Monad.Except (MonadError(..)) import Data.Char import Data.Either (partitionEithers) import Data.Functor (($>)) -- ((<&>)) -- only from ghc 8.4 import Data.List (nub, partition) import Data.List.NonEmpty (pattern (:|)) import qualified Data.List as List import qualified Data.List.NonEmpty as List1 import Data.Maybe import Data.Set (Set) import qualified Data.Foldable as Fold import qualified Data.Set as Set import qualified Data.Map as Map import System.Exit (exitFailure) import System.IO (hPutStrLn, stderr) -- Local imports: import qualified BNFC.Abs as Abs import BNFC.Abs (Reg(RAlts)) import BNFC.Par import BNFC.CF import BNFC.Check.EmptyTypes import BNFC.Options import BNFC.PrettyPrint import BNFC.Regex (nullable, simpReg) import BNFC.TypeChecker import BNFC.Utils type Err = Either String -- $setup -- >>> import BNFC.Print -- | Entrypoint. parseCF :: SharedOptions -> Target -> String -> IO CF parseCF opts target content = do cf <- runErr $ pGrammar (myLexer content) -- <&> expandRules -- <&> from ghc 8.4 >>= return . expandRules >>= getCF opts >>= return . markTokenCategories -- Construct the typing information in 'define' expressions. cf <- either die return $ runTypeChecker $ checkDefinitions cf -- Some backends do not allow the grammar name to coincide with -- one of the category or constructor names. let names = allNames cf when (target == TargetJava) $ case List.find ((lang opts ==) . wpThing) names of Nothing -> return () Just px -> dieUnlessForce $ unlines [ unwords $ [ "ERROR of backend", show target ++ ":" , "the language name" , lang opts , "conflicts with a name defined in the grammar:" ] , blendInPosition px ] -- Some (most) backends do not support layout. let (layoutTop, layoutKeywords, _) = layoutPragmas cf let lay = isJust layoutTop || not (null layoutKeywords) when (lay && target `notElem` [ TargetHaskell, TargetHaskellGadt, TargetLatex, TargetPygments, TargetCheck ]) $ dieUnlessForce $ unwords [ "ERROR: the grammar uses layout, which is not supported by backend" , show target ++ "." ] -- A grammar that uses layout needs to contain symbols { } ; let symbols = cfgSymbols cf layoutSymbols = concat [ [";"], unless (null layoutKeywords) ["{", "}"] ] missingLayoutSymbols = filter (`notElem` symbols) layoutSymbols when (lay && not (null missingLayoutSymbols)) $ dieUnlessForce $ unwords $ "ERROR: the grammar uses layout, but does not mention" : map show missingLayoutSymbols -- Token types that end in a numeral confuse BNFC, because of CoerceCat. 
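  -- For example (an illustrative sketch; @Foo@ is a made-up name): a grammar containing
  --
  -- >  token Foo2 (letter+) ;
  --
  -- is flagged by the check below, because the name @Foo2@ could also be read as
  -- precedence level 2 of a category @Foo@ (cf. 'CoercCat').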
let userTokenTypes = [ rx | TokenReg rx _ _ <- cfgPragmas cf ] case filter (isJust . hasNumericSuffix . wpThing) userTokenTypes of [] -> return () rxs -> dieUnlessForce $ unlines $ concat [ [ "ERROR: illegal token names:" ] , printNames rxs , [ "Token names may not end with a number---to avoid confusion with coercion categories." ] ] -- Fail if grammar defines a @token@ twice. case duplicatesOn wpThing userTokenTypes of [] -> return () gs -> dieUnlessForce $ unlines $ concat [ [ "ERROR: duplicate token definitions:" ] , map printDuplicateTokenDefs gs ] where printDuplicateTokenDefs (rx :| rxs) = concat $ [ concat [ " ", wpThing rx, " at " ] , unwords $ map (prettyPosition . wpPosition) (rx : rxs) ] -- Fail if token name conflicts with category name. let userTokenNames = Map.fromList $ map (\ rx -> (wpThing rx, rx)) userTokenTypes case mapMaybe (\ rx -> (rx,) <$> Map.lookup (wpThing rx) userTokenNames) (allCatsIdNorm cf) of [] -> return () ns -> dieUnlessForce $ unlines $ concat [ [ "ERROR: these token definitions conflict with non-terminals:" ] , map (\ (rx, rp) -> " " ++ blendInPosition rp ++ " conflicts with " ++ blendInPosition rx) ns ] -- Warn or fail if the grammar uses non unique names. let nonUniqueNames = filter (not . isDefinedRule) $ filterNonUnique names case nonUniqueNames of [] -> return () ns | target `elem` [ TargetC, TargetCpp , TargetCppNoStl , TargetJava ] -> dieUnlessForce $ unlines $ concat [ [ "ERROR: names not unique:" ] , printNames ns , [ "This is an error in the backend " ++ show target ++ "." ] ] | otherwise -> putStrLn $ unlines $ concat [ [ "Warning: names not unique:" ] , printNames ns , [ "This can be an error in some backends." ] ] -- Warn or fail if the grammar uses names not unique modulo upper/lowercase. when False $ case nub $ filter (`notElem` nonUniqueNames) $ filter (not . isDefinedRule) $ concatMap List1.toList $ duplicatesOn (map toLower . wpThing) names of [] -> return () ns | target `elem` [ TargetJava ] -> dieUnlessForce $ unlines $ concat [ [ "ERROR: names not unique ignoring case:" ] , printNames ns , [ "This is an error in the backend " ++ show target ++ "."] ] | otherwise -> putStr $ unlines $ concat [ [ "Warning: names not unique ignoring case:" ] , printNames ns , [ "This can be an error in some backends." ] ] -- Note: the following @() <-@ works around an @Ambiguous type variable@ () <- when (hasPositionTokens cf && target == TargetCppNoStl) $ putStrLn $ unwords [ "Warning: the backend" , show target , "ignores the qualifier `position` in token definitions." ] -- Fail if the grammar uses defined constructors which are not actually defined. let definedConstructors = Set.fromList $ map defName $ definitions cf let undefinedConstructor x = isDefinedRule x && x `Set.notMember` definedConstructors case filter undefinedConstructor $ map funRule $ cfgRules cf of [] -> return () xs -> dieUnlessForce $ unlines $ concat [ [ "Lower case rule labels need a definition." , "ERROR: undefined rule label(s):" ] , printNames xs ] -- Print errors for empty comment deliminters unlessNull (checkComments cf) $ \ errs -> do dieUnlessForce $ unlines errs -- Print warnings if user defined nullable tokens. Fold.mapM_ dieUnlessForce $ checkTokens cf -- Check for empty grammar. let nRules = length (cfgRules cf) -- Note: the match against () is necessary for type class instance resolution. when (nRules == 0) $ dieUnlessForce $ "ERROR: the grammar contains no rules." -- Check whether one of the parsers could consume at least one token. 
[#213] when (null (usedTokenCats cf) && null (cfTokens cf)) $ dieUnlessForce $ "ERROR: the languages defined by this grammar are empty since it mentions no terminals." unlessNull (emptyData $ cfgRules cf) $ \ pcs -> do dieUnlessForce $ unlines $ concat [ [ "ERROR: the following categories have empty abstract syntax:" ] , printNames $ map (fmap catToStr) pcs ] -- Passed the tests: Print the number of rules. putStrLn $ show nRules +++ "rules accepted\n" return cf where runErr = either die return dieUnlessForce :: String -> IO () dieUnlessForce msg = do hPutStrLn stderr msg if force opts then do hPutStrLn stderr "Ignoring error... (thanks to --force)" else do hPutStrLn stderr "Aborting. (Use option --force to continue despite errors.)" exitFailure -- | All token categories used in the grammar. -- Includes internal rules. usedTokenCats :: CFG f -> [TokenCat] usedTokenCats cf = [ c | Rule _ _ rhs _ <- cfgRules cf, Left (TokenCat c) <- rhs ] -- | Print vertical list of names with position sorted by position. printNames :: [RString] -> [String] printNames = map ((" " ++) . blendInPosition) . List.sortOn lexicoGraphic where lexicoGraphic (WithPosition pos x) = (pos,x) die :: String -> IO a die msg = do hPutStrLn stderr msg exitFailure -- | Translate the parsed grammar file into a context-free grammar 'CF'. -- Desugars and type-checks. getCF :: SharedOptions -> Abs.Grammar -> Err CF getCF opts (Abs.Grammar defs) = do (pragma, rules) <- partitionEithers . concat <$> mapM transDef defs `runTrans` opts let reservedWords = nub [ t | r <- rules, isParsable r, Right t <- rhsRule r, not $ all isSpace t ] -- Issue #204: exclude keywords from internal rules -- Issue #70: whitespace separators should be treated like "", at least in the parser usedCats = Set.fromList [ c | Rule _ _ rhs _ <- rules, Left c <- rhs ] -- literals = used builtin token cats (Integer, String, ...) literals = filter (\ s -> TokenCat s `Set.member` usedCats) $ specialCatsP (symbols,keywords) = partition notIdent reservedWords sig <- runTypeChecker $ buildSignature rules let cf = revs $ CFG { cfgPragmas = pragma , cfgUsedCats = usedCats , cfgLiterals = literals , cfgSymbols = symbols , cfgKeywords = keywords , cfgReversibleCats = [] , cfgRules = rules , cfgSignature = sig } case mapMaybe (checkRule cf) rules of [] -> return () msgs -> throwError $ unlines msgs return cf where notIdent s = null s || not (isAlpha (head s)) || any (not . isIdentRest) s isIdentRest c = isAlphaNum c || c == '_' || c == '\'' revs cf = cf{ cfgReversibleCats = findAllReversibleCats cf } -- | This function goes through each rule of a grammar and replace Cat "X" with -- TokenCat "X" when "X" is a token type. markTokenCategories :: CF -> CF markTokenCategories cf = fixTokenCats tokenCatNames cf where tokenCatNames = [ wpThing n | TokenReg n _ _ <- cfgPragmas cf ] ++ specialCatsP class FixTokenCats a where fixTokenCats :: [TokenCat] -> a -> a default fixTokenCats :: (Functor t, FixTokenCats b, t b ~ a) => [TokenCat] -> a -> a fixTokenCats = fmap . fixTokenCats instance FixTokenCats a => FixTokenCats [a] instance FixTokenCats a => FixTokenCats (WithPosition a) instance (FixTokenCats a, Ord a) => FixTokenCats (Set a) where fixTokenCats = Set.map . 
fixTokenCats -- | Change the constructor of categories with the given names from Cat to -- TokenCat -- >>> fixTokenCats ["A"] (Cat "A") == TokenCat "A" -- True -- >>> fixTokenCats ["A"] (ListCat (Cat "A")) == ListCat (TokenCat "A") -- True -- >>> fixTokenCats ["A"] (Cat "B") == Cat "B" -- True instance FixTokenCats Cat where fixTokenCats ns = \case Cat a | a `elem` ns -> TokenCat a ListCat c -> ListCat $ fixTokenCats ns c c -> c instance FixTokenCats (Either Cat String) where fixTokenCats = left . fixTokenCats instance FixTokenCats (Rul f) where fixTokenCats ns (Rule f c rhs internal) = Rule f (fixTokenCats ns c) (fixTokenCats ns rhs) internal instance FixTokenCats Pragma where fixTokenCats ns = \case EntryPoints eps -> EntryPoints $ fixTokenCats ns eps p -> p instance FixTokenCats (CFG f) where fixTokenCats ns cf@CFG{..} = cf { cfgPragmas = fixTokenCats ns cfgPragmas , cfgUsedCats = fixTokenCats ns cfgUsedCats , cfgRules = fixTokenCats ns cfgRules } -- | Translation monad. newtype Trans a = Trans { unTrans :: ReaderT SharedOptions Err a } deriving (Functor, Applicative, Monad, MonadReader SharedOptions, MonadError String) runTrans :: Trans a -> SharedOptions -> Err a runTrans m opts = unTrans m `runReaderT` opts transDef :: Abs.Def -> Trans [Either Pragma Rule] transDef = \case Abs.Rule label cat items -> do f <- transLabel label c <- transCat cat return $ [ Right $ Rule f c (concatMap transItem items) Parsable ] Abs.Internal label cat items -> do f <- transLabel label c <- transCat cat return $ [ Right $ Rule f c (concatMap transItem items) Internal ] Abs.Comment str -> return [ Left $ CommentS str ] Abs.Comments str1 str2 -> return [ Left $ CommentM (str1, str2) ] Abs.Token ident reg -> do x <- transIdent ident; return [Left $ TokenReg x False $ simpReg reg] Abs.PosToken ident reg -> do x <- transIdent ident; return [Left $ TokenReg x True $ simpReg reg] Abs.Entryp cats -> singleton . Left . EntryPoints <$> mapM transCat cats Abs.Separator size ident str -> map Right <$> separatorRules size ident str Abs.Terminator size ident str -> map Right <$> terminatorRules size ident str Abs.Delimiters cat _ _ _ _ -> do WithPosition pos _ <- transCat cat throwError $ blendInPosition $ WithPosition pos $ "The delimiters pragma " ++ removedIn290 Abs.Coercions ident int -> map Right <$> coercionRules ident int Abs.Rules ident strs -> map Right <$> ebnfRules ident strs Abs.Layout ss -> return [ Left $ Layout $ map (,Delimiters ";" "{" "}") ss ] Abs.LayoutStop ss -> return [ Left $ LayoutStop ss] Abs.LayoutTop -> return [ Left $ LayoutTop ";" ] Abs.Function ident xs e -> do f <- transIdent ident let xs' = map transArg xs return [ Left $ FunDef $ Define f xs' (transExp (map fst xs') e) dummyBase ] -- | Translate @separator [nonempty] C "s"@. -- The position attached to the generated rules is taken from @C@. -- -- (Ideally, we would take them from the @separator@ keyword. -- But BNFC does not deliver position information there.) -- -- If the user-provided separator consists of white space only, -- we turn it into a terminator rule to prevent reduce/reduce conflicts. 
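--
-- For illustration, @separator Exp ","@ expands to roughly the following rules
-- (a sketch; @Exp@ is a made-up category):
--
-- > [].    [Exp] ::= ;
-- > (:[]). [Exp] ::= Exp ;
-- > (:).   [Exp] ::= Exp "," [Exp] ;
--
-- whereas @separator nonempty Exp ","@ omits the @[]@ rule.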
separatorRules :: Abs.MinimumSize -> Abs.Cat -> String -> Trans [Rule] separatorRules size c0 s | all isSpace s = terminatorRules size c0 s | otherwise = do WithPosition pos c <- transCat c0 let cs = ListCat c let rule :: String -> SentForm -> Rule rule x rhs = Rule (WithPosition pos x) (WithPosition pos cs) rhs Parsable return $ concat [ [ rule "[]" [] | size == Abs.MEmpty ] , [ rule "(:[])" [Left c] ] , [ rule "(:)" [Left c, Right s, Left cs] ] ] -- | Translate @terminator [nonempty] C "s"@. -- The position attached to the generated rules is taken from @C@. -- -- (Ideally, we would take them from the @terminator@ keyword. -- But BNFC does not deliver position information there.) terminatorRules :: Abs.MinimumSize -> Abs.Cat -> String -> Trans [Rule] terminatorRules size c0 s = do WithPosition pos c <- transCat c0 let wp = WithPosition pos let cs = ListCat c let rule x rhs = Rule (wp x) (wp cs) rhs Parsable return [ case size of Abs.MNonempty -> rule "(:[])" (Left c : term []) Abs.MEmpty -> rule "[]" [] , rule "(:)" (Left c : term [Left cs]) ] where term = if null s then id else (Right s :) -- | Expansion of the @coercion@ pragma. coercionRules :: Abs.Identifier -> Integer -> Trans [Rule] coercionRules c0 n = do WithPosition pos c <- transIdent c0 let wp = WithPosition pos let urule x rhs = Rule (wp "_") (wp x) rhs Parsable return $ concat [ [ urule (Cat c) [Left (CoercCat c 1)] ] , [ urule (CoercCat c (i-1)) [Left (CoercCat c i)] | i <- [2..n] ] , [ urule (CoercCat c n) [Right "(", Left (Cat c), Right ")"] ] ] -- | Expansion of the @rules@ pragma. ebnfRules :: Abs.Identifier -> [Abs.RHS] -> Trans [Rule] ebnfRules (Abs.Identifier ((line, col), c)) rhss = do file <- asks lbnfFile let wp = WithPosition $ Position file line col let rule x rhs = Rule (wp x) (wp $ strToCat c) rhs Parsable return [ rule (mkFun k its) (concatMap transItem its) | (k, Abs.RHS its) <- zip [1 :: Int ..] rhss ] where mkFun k = \case [Abs.Terminal s] -> c' ++ "_" ++ mkName k s [Abs.NTerminal n] -> c' ++ identCat (transCat' n) _ -> c' ++ "_" ++ show k c' = c --- normCat c mkName k s = if all (\c -> isAlphaNum c || elem c ("_'" :: String)) s then s else show k -- | Translate a rule item (terminal or non terminal) -- It also sanitizes the terminals a bit by skipping empty terminals -- or splitting multiwords terminals. -- This means that the following rule -- -- > Foo. S ::= "foo bar" "" -- -- is equivalent to -- -- > Foo. 
S ::= "foo" "bar" transItem :: Abs.Item -> [Either Cat String] transItem (Abs.Terminal str) = [ Right w | w <- words str ] transItem (Abs.NTerminal cat) = [ Left (transCat' cat) ] transCat' :: Abs.Cat -> Cat transCat' = \case Abs.ListCat cat -> ListCat $ transCat' cat Abs.IdCat (Abs.Identifier (_pos, c)) -> strToCat c transCat :: Abs.Cat -> Trans (WithPosition Cat) transCat = \case Abs.ListCat cat -> fmap ListCat <$> transCat cat Abs.IdCat (Abs.Identifier ((line, col), c)) -> do file <- asks lbnfFile return $ WithPosition (Position file line col) $ strToCat c transLabel :: Abs.Label -> Trans RFun transLabel = \case Abs.Id id -> transIdent id Abs.Wild -> return $ noPosition $ "_" Abs.ListE -> return $ noPosition $ "[]" Abs.ListCons -> return $ noPosition $ "(:)" Abs.ListOne -> return $ noPosition $ "(:[])" transIdent :: Abs.Identifier -> Trans RString transIdent (Abs.Identifier ((line, col), str)) = do file <- asks lbnfFile return $ WithPosition (Position file line col) str transArg :: Abs.Arg -> (String, Base) transArg (Abs.Arg (Abs.Identifier (_pos, x))) = (x, dummyBase) transExp :: [String] -- ^ Arguments of definition (in scope in expression). -> Abs.Exp -- ^ Expression. -> Exp -- ^ Translated expression. transExp xs = loop where loop = \case Abs.App x es -> App (transIdent' x) dummyType (map loop es) Abs.Var x -> let x' = transIdent' x in if x' `elem` xs then Var x' else App x' dummyType [] Abs.Cons e1 e2 -> cons e1 (loop e2) Abs.List es -> foldr cons nil es Abs.LitInt x -> LitInt x Abs.LitDouble x -> LitDouble x Abs.LitChar x -> LitChar x Abs.LitString x -> LitString x cons e1 e2 = App "(:)" dummyType [loop e1, e2] nil = App "[]" dummyType [] transIdent' (Abs.Identifier (_pos, x)) = x -------------------------------------------------------------------------------- -- | Check if any comment delimiter is null. checkComments :: CFG f -> [String] -- ^ List of errors. checkComments cf = concat [ [ "Empty line comment delimiter." | CommentS "" <- prags ] , [ "Empty block comment start delimiter." | CommentM ("", _) <- prags ] , [ "Empty block comment end delimiter." | CommentM (_, "") <- prags ] ] where prags = cfgPragmas cf -- | Check if any of the user-defined terminal categories is nullable or empty. checkTokens :: CFG f -> Maybe String checkTokens cf = case catMaybes [ checkTokensEmpty cf, checkTokensNullable cf ] of [] -> Nothing ss -> Just $ concat ss -- | Check if any of the user-defined terminal categories is nullable. checkTokensNullable :: CFG f -> Maybe String checkTokensNullable cf | null pxs = Nothing | otherwise = Just $ unlines $ concat [ [ "ERROR: The following tokens accept the empty string:" ] , printNames pxs ] where pxs = [ px | TokenReg px _ regex <- cfgPragmas cf, nullable regex ] -- | Check if any of the user-defined terminal categories is nullable. checkTokensEmpty :: CFG f -> Maybe String checkTokensEmpty cf | null pxs = Nothing | otherwise = Just $ unlines $ concat [ [ "ERROR: The following tokens accept nothing:" ] , printNames pxs ] where -- The regular expression is already simplified, so we match against 0 directly. 
pxs = [ px | TokenReg px _ (RAlts "") <- cfgPragmas cf ] -- we should actually check that -- (1) coercions are always between variants -- (2) no other digits are used checkRule :: CF -> Rule -> Maybe String checkRule cf r@(Rule f (WithPosition _ cat) rhs _) | Cat ('@':_) <- cat = Nothing -- Generated by a pragma; it's a trusted category | badCoercion = stdFail txtCoercion "Bad coercion in rule" | badNil = stdFail txtNil "Bad empty list rule" | badOne = stdFail txtOne "Bad one-element list rule" | badCons = stdFail txtCons "Bad list construction rule" | badList = stdFail txtList "Bad list formation rule" | badSpecial = failure $ "Bad special category rule" +++ s | badTypeName = failure $ "Bad type name" +++ unwords (map prettyShow badTypes) +++ "in" +++ s | badFunName = failure $ "Bad constructor name" +++ fun +++ "in" +++ s | badMissing = failure $ "no production for" +++ unwords missing ++ ", appearing in rule\n " ++ s | otherwise = Nothing where failure = Just . blendInPosition . (f $>) stdFail txt err = failure $ unlines [ err ++ ":", " " ++ s, txt ] fun = wpThing f s = prettyShow r c = normCat cat -- lhs cat without the coercion number cs = [normCat c | Left c <- rhs] -- rhs cats without the coercion numbers badCoercion = isCoercion f && cs /= [c] -- the single rhs cat needs to match the lhs cat txtCoercion = "In a coercion (label _), category on the left of ::= needs to be the single category on the right." badNil = isNilFun f && not (isList c && null cs) txtNil = "In a nil rule (label []), the category on the left of ::= needs to be a list category [C] and no categories are allowed on the right." badOne = isOneFun f && not (isList c && cs == [catOfList c]) txtOne = "In a singleton rule (label (:[])), the category on the left of ::= needs to be a list category [C], and C must be the sole categories on the right." badCons = isConsFun f && not (isList c && cs == [catOfList c, c]) txtCons = "In a cons rule (label (:)), the category on the left of ::= needs to be a list category [C], and C and [C] (in this order) must be the sole categories on the right." badList = isList c && not (isCoercion f || isNilCons f) txtList = "List categories [C] can only be formed by rules labeled _, [], (:), or (:[])." badSpecial = elem c [ Cat x | x <- specialCatsP] && not (isCoercion f) badMissing = not (null missing) missing = filter (`notElem` defineds) [catToStr c | Left c <- rhs] where defineds = tokenNames cf ++ specialCatsP ++ map (catToStr . valCat) (cfgRules cf) badTypeName = not (null badTypes) badTypes = filter isBadType $ cat : [c | Left c <- rhs] where isBadType (ListCat c) = isBadType c isBadType (CoercCat c _) = isBadCatName c isBadType (Cat s) = isBadCatName s isBadType (TokenCat s) = isBadCatName s isBadCatName s = not $ isUpper (head s) || (head s == '@') badFunName = not (all (\c -> isAlphaNum c || c == '_') (wpThing f) {-isUpper (head f)-} || isCoercion f || isNilCons f) -- | Pre-processor that converts the `rules` macros to regular rules -- by creating unique function names for them. -- >>> :{ -- let rules1 = Abs.Rules (Abs.Identifier ((0, 0), "Foo")) -- [ Abs.RHS [Abs.Terminal "abc"] -- , Abs.RHS [Abs.NTerminal (Abs.IdCat (Abs.Identifier ((0, 0), "A")))] -- , Abs.RHS [Abs.Terminal "foo", Abs.Terminal "bar"] -- , Abs.RHS [Abs.Terminal "++"] -- ] -- in -- let tree = expandRules (Abs.Grammar [rules1]) -- in putStrLn (printTree tree) -- :} -- Foo_abc . Foo ::= "abc"; -- FooA . Foo ::= A; -- Foo1 . Foo ::= "foo" "bar"; -- Foo2 . 
Foo ::= "++" -- -- Note that if there are two `rules` macro with the same category, the -- generated names should be uniques: -- >>> :{ -- let rules1 = Abs.Rules (Abs.Identifier ((0, 0), "Foo")) -- [ Abs.RHS [Abs.Terminal "foo", Abs.Terminal "bar"] ] -- in -- let rules2 = Abs.Rules (Abs.Identifier ((0, 0), "Foo")) -- [ Abs.RHS [Abs.Terminal "foo", Abs.Terminal "foo"] ] -- in -- let tree = expandRules (Abs.Grammar [rules1, rules2]) -- in putStrLn (printTree tree) -- :} -- Foo1 . Foo ::= "foo" "bar"; -- Foo2 . Foo ::= "foo" "foo" -- -- This is using a State monad to remember the last used index for a category. expandRules :: Abs.Grammar -> Abs.Grammar expandRules (Abs.Grammar defs) = Abs.Grammar . concat $ mapM expand defs `evalState` [] where expand :: Abs.Def -> State [(String, Int)] [Abs.Def] expand = \case Abs.Rules ident rhss -> mapM (mkRule ident) rhss other -> return [ other ] mkRule :: Abs.Identifier -> Abs.RHS -> State [(String, Int)] Abs.Def mkRule ident (Abs.RHS rhs) = do fun <- Abs.Id <$> mkName ident rhs return $ Abs.Rule fun (Abs.IdCat ident) rhs mkName :: Abs.Identifier -> [Abs.Item] -> State [(String, Int)] Abs.Identifier mkName (Abs.Identifier (pos, cat)) = \case -- A string that is a valid identifier. [ Abs.Terminal s ] | all (\ c -> isAlphaNum c || c == '_') s -> return $ Abs.Identifier (pos, cat ++ "_" ++ s) -- Same but without double quotes. [ Abs.NTerminal (Abs.IdCat (Abs.Identifier (pos', s))) ] -> return $ Abs.Identifier (pos', cat ++ s) -- Something else that does not immediately give a valid rule name. -- Just number! _ -> do i <- maybe 1 (+1) . lookup cat <$> get modify ((cat, i):) return $ Abs.Identifier (pos, cat ++ show i) BNFC-2.9.5/src/BNFC/Lex.hs0000644000000000000000000023507407346545000013026 0ustar0000000000000000{-# OPTIONS_GHC -fno-warn-unused-binds -fno-warn-missing-signatures #-} {-# LANGUAGE CPP,MagicHash #-} {-# LINE 4 "BNFC/Lex.x" #-} {-# OPTIONS -fno-warn-incomplete-patterns #-} {-# OPTIONS_GHC -w #-} {-# LANGUAGE PatternSynonyms #-} module BNFC.Lex where import Prelude import qualified Data.Bits import Data.Char (ord) import Data.Function (on) import Data.Word (Word8) #if __GLASGOW_HASKELL__ >= 603 #include "ghcconfig.h" #elif defined(__GLASGOW_HASKELL__) #include "config.h" #endif #if __GLASGOW_HASKELL__ >= 503 import Data.Array #else import Array #endif #if __GLASGOW_HASKELL__ >= 503 import Data.Array.Base (unsafeAt) import GHC.Exts #else import GlaExts #endif alex_tab_size :: Int alex_tab_size = 8 alex_base :: AlexAddr alex_base = AlexA# 
"\xf8\xff\xff\xff\x49\x00\x00\x00\xc9\xff\xff\xff\xe0\xff\xff\xff\xc9\x00\x00\x00\x9c\x01\x00\x00\x2d\x00\x00\x00\x1c\x02\x00\x00\x9c\x02\x00\x00\x1c\x03\x00\x00\x9c\x03\x00\x00\x1c\x04\x00\x00\x9c\x04\x00\x00\x0d\x05\x00\x00\x00\x00\x00\x00\x8d\x05\x00\x00\x00\x00\x00\x00\xfe\x05\x00\x00\x00\x00\x00\x00\x6f\x06\x00\x00\x00\x00\x00\x00\x0b\x01\x00\x00\x00\x00\x00\x00\xb0\x06\x00\x00\x00\x00\x00\x00\xf1\x06\x00\x00\xf1\x07\x00\x00\xb1\x07\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x22\x08\x00\x00\x22\x09\x00\x00\xe2\x08\x00\x00\x00\x00\x00\x00\xbb\x09\x00\x00\xbb\x0a\x00\x00\xc1\x09\x00\x00\x7b\x0a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xbc\x0a\x00\x00\xdb\xff\xff\xff\x47\x00\x00\x00\x72\x00\x00\x00\xb2\x0b\x00\x00\x3d\x0b\x00\x00\x32\x0c\x00\x00\xf2\x0c\x00\x00\xb2\x0c\x00\x00\x00\x00\x00\x00\xa8\x0d\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\xdc\xff\xff\xff\xdd\xff\xff\xff\x00\x00\x00\x00\xdf\xff\xff\xff\x81\x0e\x00\x00\xe4\x0e\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x7e\x00\x00\x00\xb9\x09\x00\x00\x8c\x00\x00\x00"# alex_table :: AlexAddr alex_table = AlexA# "\x00\x00\x34\x00\x34\x00\x34\x00\x34\x00\x34\x00\x37\x00\x3c\x00\x2b\x00\x32\x00\x05\x00\x3f\x00\x3f\x00\x3f\x00\x3f\x00\x3f\x00\x3f\x00\x3f\x00\x3f\x00\x3f\x00\x3f\x00\x00\x00\x00\x00\x00\x00\x34\x00\x02\x00\x2c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x22\x00\x37\x00\x37\x00\x37\x00\x37\x00\x37\x00\x35\x00\x37\x00\x00\x00\x3d\x00\x3d\x00\x3d\x00\x3d\x00\x3d\x00\x3d\x00\x3d\x00\x3d\x00\x3d\x00\x3d\x00\x38\x00\x37\x00\x00\x00\x37\x00\x00\x00\x37\x00\x00\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x37\x00\x03\x00\x37\x00\x00\x00\x37\x00\x00\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x36\x00\x37\x00\x37\x00\x01\x00\x3e\x00\x3e\x00\x3e\x00\x3e\x00\x3e\x00\x3e\x00\x3e\x00\x3e\x00\x3e\x00\x3e\x00\x34\x00\x34\x00\x34\x00\x34\x00\x34\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x34\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x03\x00\x3f\x00\x3f\x00\x3f\x00\x3f\x00\x3f\x00\x3f\x00\x3f\x00\x3f\x00\x3f\x00\x3f\x00\x2a\x00\x00\x00\x3d\x00\x3d\x00\x3d\x00\x3d\x00\x3d\x00\x3d\x00\x3d\x00\x3d\x00\x3d\x00\x3d\x00\x00\x00\x00\x00\x00\x00\x2e\x00\x3f\x00\x3f\x00\x3f\x00\x3f\x00\x3f\x00\x3f\x00\x3f\x00\x3f\x00\x3f\x00\x3f\x00\x33\x00\x00\x00\x00\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x1a\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1b\x00\x0a\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x1
4\x00\x14\x00\x15\x00\x0f\x00\x0e\x00\x0e\x00\x0e\x00\x0d\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x2f\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x01\x00\x1b\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x1a\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1b\x00\x0a\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x15\x00\x0f\x00\x0e\x00\x0e\x00\x0e\x00\x0d\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x04\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x
00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x23\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x1f\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x1a\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x1c\x00\x08\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\
x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x09\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x15\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x0a\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14
\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x14\x00\x17\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x19\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x20\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x25\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xf
f\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x28\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x2c\x00\x
00\x00\x00\x00\x00\x00\x00\x00\x2c\x00\x3e\x00\x3e\x00\x3e\x00\x3e\x00\x3e\x00\x3e\x00\x3e\x00\x3e\x00\x3e\x00\x3e\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x2c\x00\x29\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x2c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x2c\x00\x00\x00\x00\x00\x00\x00\x2c\x00\x00\x00\x2c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x1f\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x21\x00\x20\x00\x09\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x16\x00\x17\x00\x0c\x00\x10\x00\x10\x00\x10\x00\x11\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x30\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\
xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3b\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x00\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x24\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x23\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x26\x00\x25\x00\x08\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x18\x00\x19\x00\x0b\x00\x12\x00\x12\x00\x12\x00\x13\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x00\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x00\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff
\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x2f\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x31\x00\x30\x00\x04\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x27\x00\x28\x00\x07\x00\x1d\x00\x1d\x00\x1d\x00\x1e\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x3a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x00\x00\x00\x00\x00\x00\x00\x00\x39\x00\x00\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x39\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3
a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3a\x00\x2e\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x3a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x2d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"# alex_check :: AlexAddr alex_check = AlexA# "\xff\xff\x09\x00\x0a\x00\x0b\x00\x0c\x00\x0d\x00\x3d\x00\x27\x00\x2d\x00\x2d\x00\x2d\x00\x30\x00\x31\x00\x32\x00\x33\x00\x34\x00\x35\x00\x36\x00\x37\x00\x38\x00\x39\x00\xff\xff\xff\xff\xff\xff\x20\x00\x3a\x00\x22\x00\xff\xff\xff\xff\xff\xff\xff\xff\x27\x00\x28\x00\x29\x00\x2a\x00\x2b\x00\x2c\x00\x2d\x00\x2e\x00\xff\xff\x30\x00\x31\x00\x32\x00\x33\x00\x34\x00\x35\x00\x36\x00\x37\x00\x38\x00\x39\x00\x3a\x00\x3b\x00\xff\xff\x3d\x00\xff\xff\x3f\x00\xff\xff\x41\x00\x42\x00\x43\x00\x44\x00\x45\x00\x46\x00\x47\x00\x48\x00\x49\x00\x4a\x00\x4b\x00\x4c\x00\x4d\x00\x4e\x00\x4f\x00\x50\x00\x51\x00\x52\x00\x53\x00\x54\x00\x55\x00\x56\x00\x57\x00\x58\x00\x59\x00\x5a\x00\x5b\x00\x27\x00\x5d\x00\xff\xff\x5f\x00\xff\xff\x61\x00\x62\x00\x63\x00\x64\x00\x65\x00\x66\x00\x67\x00\x68\x00\x69\x00\x6a\x00\x6b\x00\x6c\x00\x6d\x00\x6e\x00\x6f\x00\x70\x00\x71\x00\x72\x00\x73\x00\x74\x00\x75\x00\x76\x00\x77\x00\x78\x00\x79\x00\x7a\x00\x7b\x00\x7c\x00\x7d\x00\x2d\x00\x30\x00\x31\x00\x32\x00\x33\x00\x34\x00\x35\x00\x36\x00\x37\x00\x38\x00\x39\x00\x09\x00\x0a\x00\x0b\x00\x0c\x00\x0d\x00\xff\xff\xff\xff\xff\xff\x5c\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x66\x00\xff\xff\xff\xff\xff\xff\xff\xff\x20\x00\xff\xff\xff\xff\x6e\x00\xff\xff\xff\xff\xff\xff\x72\x00\xff\xff\x74\x00\x30\x00\x31\x00\x32\x00\x33\x00\x34\x00\x35\x00\x36\x00\x37\x00\x38\x00\x39\x00\x2e\x00\xff\xff\x30\x00\x31\x00\x32\x00\x33\x00\x34\x00\x35\x00\x36\x00\x37\x00\x38\x00\x39\x00\xff\xff\xff\xff\xff\xff\xc3\x00\x30\x00\x31\x00\x32\x00\x33\x00\x34\x00\x35\x00\x36\x00\x37\x00\x38\x00\x39\x00\x7d\x00\xff\xff\xff\xff\x80\x00\x81\x00\x82\x00\x83\x00\x84\x00\x85\x00\x86\x00\x87\x00\x88\x00\x89\x00\x8a\x00\x8b\x00\x8c\x00\x8d\x00\x8e\x00\x8f\x00\x90\x00\x91\x00\x92\x00\x93\x00\x94\x00\x95\x00\x96\x00\x97\x00\x98\x00\x99\x00\x9a\x00\x9b\x00\x9c\x00\x9d\x00\x9e\x00\x9f\x00\xa0\x00\xa1\x00\xa2\x0
0\xa3\x00\xa4\x00\xa5\x00\xa6\x00\xa7\x00\xa8\x00\xa9\x00\xaa\x00\xab\x00\xac\x00\xad\x00\xae\x00\xaf\x00\xb0\x00\xb1\x00\xb2\x00\xb3\x00\xb4\x00\xb5\x00\xb6\x00\xb7\x00\xb8\x00\xb9\x00\xba\x00\xbb\x00\xbc\x00\xbd\x00\xbe\x00\xbf\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\x80\x00\x81\x00\x82\x00\x83\x00\x84\x00\x85\x00\x86\x00\x87\x00\x88\x00\x89\x00\x8a\x00\x8b\x00\x8c\x00\x8d\x00\x8e\x00\x8f\x00\x90\x00\x91\x00\x92\x00\x93\x00\x94\x00\x95\x00\x96\x00\x97\x00\x98\x00\x99\x00\x9a\x00\x9b\x00\x9c\x00\x9d\x00\x9e\x00\x9f\x00\xa0\x00\xa1\x00\xa2\x00\xa3\x00\xa4\x00\xa5\x00\xa6\x00\xa7\x00\xa8\x00\xa9\x00\xaa\x00\xab\x00\xac\x00\xad\x00\xae\x00\xaf\x00\xb0\x00\xb1\x00\xb2\x00\xb3\x00\xb4\x00\xb5\x00\xb6\x00\xb7\x00\xb8\x00\xb9\x00\xba\x00\xbb\x00\xbc\x00\xbd\x00\xbe\x00\xbf\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\x2d\x00\xbf\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x80\x00\x81\x00\x82\x00\x83\x00\x84\x00\x85\x00\x86\x00\x87\x00\x88\x00\x89\x00\x8a\x00\x8b\x00\x8c\x00\x8d\x00\x8e\x00\x8f\x00\x90\x00\x91\x00\x92\x00\x93\x00\x94\x00\x95\x00\x96\x00\x97\x00\x98\x00\x99\x00\x9a\x00\x9b\x00\x9c\x00\x9d\x00\x9e\x00\x9f\x00\xa0\x00\xa1\x00\xa2\x00\xa3\x00\xa4\x00\xa5\x00\xa6\x00\xa7\x00\xa8\x00\xa9\x00\xaa\x00\xab\x00\xac\x00\xad\x00\xae\x00\xaf\x00\xb0\x00\xb1\x00\xb2\x00\xb3\x00\xb4\x00\xb5\x00\xb6\x00\xb7\x00\xb8\x00\xb9\x00\xba\x00\xbb\x00\xbc\x00\xbd\x00\xbe\x00\xbf\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\x80\x00\x81\x00\x82\x00\x83\x00\x84\x00\x85\x00\x86\x00\x87\x00\x88\x00\x89\x00\x8a\x00\x8b\x00\x
8c\x00\x8d\x00\x8e\x00\x8f\x00\x90\x00\x91\x00\x92\x00\x93\x00\x94\x00\x95\x00\x96\x00\x97\x00\x98\x00\x99\x00\x9a\x00\x9b\x00\x9c\x00\x9d\x00\x9e\x00\x9f\x00\xa0\x00\xa1\x00\xa2\x00\xa3\x00\xa4\x00\xa5\x00\xa6\x00\xa7\x00\xa8\x00\xa9\x00\xaa\x00\xab\x00\xac\x00\xad\x00\xae\x00\xaf\x00\xb0\x00\xb1\x00\xb2\x00\xb3\x00\xb4\x00\xb5\x00\xb6\x00\xb7\x00\xb8\x00\xb9\x00\xba\x00\xbb\x00\xbc\x00\xbd\x00\xbe\x00\xbf\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\x80\x00\x81\x00\x82\x00\x83\x00\x84\x00\x85\x00\x86\x00\x87\x00\x88\x00\x89\x00\x8a\x00\x8b\x00\x8c\x00\x8d\x00\x8e\x00\x8f\x00\x90\x00\x91\x00\x92\x00\x93\x00\x94\x00\x95\x00\x96\x00\x97\x00\x98\x00\x99\x00\x9a\x00\x9b\x00\x9c\x00\x9d\x00\x9e\x00\x9f\x00\xa0\x00\xa1\x00\xa2\x00\xa3\x00\xa4\x00\xa5\x00\xa6\x00\xa7\x00\xa8\x00\xa9\x00\xaa\x00\xab\x00\xac\x00\xad\x00\xae\x00\xaf\x00\xb0\x00\xb1\x00\xb2\x00\xb3\x00\xb4\x00\xb5\x00\xb6\x00\xb7\x00\xb8\x00\xb9\x00\xba\x00\xbb\x00\xbc\x00\xbd\x00\xbe\x00\xbf\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\x80\x00\x81\x00\x82\x00\x83\x00\x84\x00\x85\x00\x86\x00\x87\x00\x88\x00\x89\x00\x8a\x00\x8b\x00\x8c\x00\x8d\x00\x8e\x00\x8f\x00\x90\x00\x91\x00\x92\x00\x93\x00\x94\x00\x95\x00\x96\x00\x97\x00\x98\x00\x99\x00\x9a\x00\x9b\x00\x9c\x00\x9d\x00\x9e\x00\x9f\x00\xa0\x00\xa1\x00\xa2\x00\xa3\x00\xa4\x00\xa5\x00\xa6\x00\xa7\x00\xa8\x00\xa9\x00\xaa\x00\xab\x00\xac\x00\xad\x00\xae\x00\xaf\x00\xb0\x00\xb1\x00\xb2\x00\xb3\x00\xb4\x00\xb5\x00\xb6\x00\xb7\x00\xb8\x00\xb9\x00\xba\x00\xbb\x00\xbc\x00\xbd\x00\xbe\x00\xbf\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\x80\x00\x81\x00\x82\x00\x83\x00\x84\x00\x85\x00\x86\x00\x87\x00\x88\x00\x89\x00\x8a\x00\x8b\x00\x8c\x00\x8d\x00\x8e\x00\x8f\x00\x90\x00\x91\x00\x92\x00\x93\x00\x94\x00\x95\x00\x96\x00\x97\x00\x98\x00\x99\x00\x9a\x00\x9b\x00\x9c\x00\x9d\x00\x9e\x00\x9f\x00\xa0\x00\xa1\x00\xa2\x00\xa3\x00\xa4\x00\xa5\x00\xa6\x00\xa7\x00\xa8\x00\xa9\x00\xaa\x00\xab\x00\xac\x00\xad\x00\xae\x00\xaf\x00\xb0\x00\xb1\x00\xb2\x00\xb3\x00\xb4\x00\xb5\x00\xb6\x00\xb7\x00\xb8\x00\xb9\x00\xba\x00\xbb\x00\xbc\x00\xbd\x00\xbe\x00\xbf\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\
x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\x80\x00\x81\x00\x82\x00\x83\x00\x84\x00\x85\x00\x86\x00\x87\x00\x88\x00\x89\x00\x8a\x00\x8b\x00\x8c\x00\x8d\x00\x8e\x00\x8f\x00\x90\x00\x91\x00\x92\x00\x93\x00\x94\x00\x95\x00\x96\x00\x97\x00\x98\x00\x99\x00\x9a\x00\x9b\x00\x9c\x00\x9d\x00\x9e\x00\x9f\x00\xa0\x00\xa1\x00\xa2\x00\xa3\x00\xa4\x00\xa5\x00\xa6\x00\xa7\x00\xa8\x00\xa9\x00\xaa\x00\xab\x00\xac\x00\xad\x00\xae\x00\xaf\x00\xb0\x00\xb1\x00\xb2\x00\xb3\x00\xb4\x00\xb5\x00\xb6\x00\xb7\x00\xb8\x00\xb9\x00\xba\x00\xbb\x00\xbc\x00\xbd\x00\xbe\x00\xbf\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\x80\x00\x81\x00\x82\x00\x83\x00\x84\x00\x85\x00\x86\x00\x87\x00\x88\x00\x89\x00\x8a\x00\x8b\x00\x8c\x00\x8d\x00\x8e\x00\x8f\x00\x90\x00\x91\x00\x92\x00\x93\x00\x94\x00\x95\x00\x96\x00\x97\x00\x98\x00\x99\x00\x9a\x00\x9b\x00\x9c\x00\x9d\x00\x9e\x00\x9f\x00\xa0\x00\xa1\x00\xa2\x00\xa3\x00\xa4\x00\xa5\x00\xa6\x00\xa7\x00\xa8\x00\xa9\x00\xaa\x00\xab\x00\xac\x00\xad\x00\xae\x00\xaf\x00\xb0\x00\xb1\x00\xb2\x00\xb3\x00\xb4\x00\xb5\x00\xb6\x00\xb7\x00\xb8\x00\xb9\x00\xba\x00\xbb\x00\xbc\x00\xbd\x00\xbe\x00\xbf\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\x8f\x00\x90\x00\x91\x00\x92\x00\x93\x00\x94\x00\x95\x00\x96\x00\x97\x00\x98\x00\x99\x00\x9a\x00\x9b\x00\x9c\x00\x9d\x00\x9e\x00\x9f\x00\xa0\x00\xa1\x00\xa2\x00\xa3\x00\xa4\x00\xa5\x00\xa6\x00\xa7\x00\xa8\x00\xa9\x00\xaa\x00\xab\x00\xac\x00\xad\x00\xae\x00\xaf\x00\xb0\x00\xb1\x00\xb2\x00\xb3\x00\xb4\x00\xb5\x00\xb6\x00\xb7\x00\xb8\x00\xb9\x00\xba\x00\xbb\x00\xbc\x00\xbd\x00\xbe\x00\xbf\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\x80\x00\x81\x00\x82\x00\x83\x00\x84\x00\x85\x00\x86\x00\x87\x00\x88\x00\x89\x00\x8a\x00\x8b\x00\x8c\x00\x8d\x00\x8e\x00\x8f\x00\x90\x00\x91\x00\x92\x00\x93\x00
\x94\x00\x95\x00\x96\x00\x97\x00\x98\x00\x99\x00\x9a\x00\x9b\x00\x9c\x00\x9d\x00\x9e\x00\x9f\x00\xa0\x00\xa1\x00\xa2\x00\xa3\x00\xa4\x00\xa5\x00\xa6\x00\xa7\x00\xa8\x00\xa9\x00\xaa\x00\xab\x00\xac\x00\xad\x00\xae\x00\xaf\x00\xb0\x00\xb1\x00\xb2\x00\xb3\x00\xb4\x00\xb5\x00\xb6\x00\xb7\x00\xb8\x00\xb9\x00\xba\x00\xbb\x00\xbc\x00\xbd\x00\xbe\x00\xbf\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\x8f\x00\x90\x00\x91\x00\x92\x00\x93\x00\x94\x00\x95\x00\x96\x00\x97\x00\x98\x00\x99\x00\x9a\x00\x9b\x00\x9c\x00\x9d\x00\x9e\x00\x9f\x00\xa0\x00\xa1\x00\xa2\x00\xa3\x00\xa4\x00\xa5\x00\xa6\x00\xa7\x00\xa8\x00\xa9\x00\xaa\x00\xab\x00\xac\x00\xad\x00\xae\x00\xaf\x00\xb0\x00\xb1\x00\xb2\x00\xb3\x00\xb4\x00\xb5\x00\xb6\x00\xb7\x00\xb8\x00\xb9\x00\xba\x00\xbb\x00\xbc\x00\xbd\x00\xbe\x00\xbf\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\x8f\x00\x90\x00\x91\x00\x92\x00\x93\x00\x94\x00\x95\x00\x96\x00\x97\x00\x98\x00\x99\x00\x9a\x00\x9b\x00\x9c\x00\x9d\x00\x9e\x00\x9f\x00\xa0\x00\xa1\x00\xa2\x00\xa3\x00\xa4\x00\xa5\x00\xa6\x00\xa7\x00\xa8\x00\xa9\x00\xaa\x00\xab\x00\xac\x00\xad\x00\xae\x00\xaf\x00\xb0\x00\xb1\x00\xb2\x00\xb3\x00\xb4\x00\xb5\x00\xb6\x00\xb7\x00\xb8\x00\xb9\x00\xba\x00\xbb\x00\xbc\x00\xbd\x00\xbe\x00\xbf\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\xbf\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\xbf\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xe
c\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\x00\x00\x01\x00\x02\x00\x03\x00\x04\x00\x05\x00\x06\x00\x07\x00\x08\x00\x09\x00\x0a\x00\x0b\x00\x0c\x00\x0d\x00\x0e\x00\x0f\x00\x10\x00\x11\x00\x12\x00\x13\x00\x14\x00\x15\x00\x16\x00\x17\x00\x18\x00\x19\x00\x1a\x00\x1b\x00\x1c\x00\x1d\x00\x1e\x00\x1f\x00\x20\x00\x21\x00\x22\x00\x23\x00\x24\x00\x25\x00\x26\x00\x27\x00\x28\x00\x29\x00\x2a\x00\x2b\x00\x2c\x00\x2d\x00\x2e\x00\x2f\x00\x30\x00\x31\x00\x32\x00\x33\x00\x34\x00\x35\x00\x36\x00\x37\x00\x38\x00\x39\x00\x3a\x00\x3b\x00\x3c\x00\x3d\x00\x3e\x00\x3f\x00\x40\x00\x41\x00\x42\x00\x43\x00\x44\x00\x45\x00\x46\x00\x47\x00\x48\x00\x49\x00\x4a\x00\x4b\x00\x4c\x00\x4d\x00\x4e\x00\x4f\x00\x50\x00\x51\x00\x52\x00\x53\x00\x54\x00\x55\x00\x56\x00\x57\x00\x58\x00\x59\x00\x5a\x00\x5b\x00\x5c\x00\x5d\x00\x5e\x00\x5f\x00\x60\x00\x61\x00\x62\x00\x63\x00\x64\x00\x65\x00\x66\x00\x67\x00\x68\x00\x69\x00\x6a\x00\x6b\x00\x6c\x00\x6d\x00\x6e\x00\x6f\x00\x70\x00\x71\x00\x72\x00\x73\x00\x74\x00\x75\x00\x76\x00\x77\x00\x78\x00\x79\x00\x7a\x00\x7b\x00\x7c\x00\x7d\x00\x7e\x00\x7f\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\x8f\x00\x90\x00\x91\x00\x92\x00\x93\x00\x94\x00\x95\x00\x96\x00\x97\x00\x98\x00\x99\x00\x9a\x00\x9b\x00\x9c\x00\x9d\x00\x9e\x00\x9f\x00\xa0\x00\xa1\x00\xa2\x00\xa3\x00\xa4\x00\xa5\x00\xa6\x00\xa7\x00\xa8\x00\xa9\x00\xaa\x00\xab\x00\xac\x00\xad\x00\xae\x00\xaf\x00\xb0\x00\xb1\x00\xb2\x00\xb3\x00\xb4\x00\xb5\x00\xb6\x00\xb7\x00\xb8\x00\xb9\x00\xba\x00\xbb\x00\xbc\x00\xbd\x00\xbe\x00\xbf\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\x00\x00\x01\x00\x02\x00\x03\x00\x04\x00\x05\x00\x06\x00\x07\x00\x08\x00\x09\x00\x0a\x00\x0b\x00\x0c\x00\x0d\x00\x0e\x00\x0f\x00\x10\x00\x11\x00\x12\x00\x13\x00\x14\x00\x15\x00\x16\x00\x17\x00\x18\x00\x19\x00\x1a\x00\x1b\x00\x1c\x00\x1d\x00\x1e\x00\x1f\x00\x20\x00\x21\x00\x22\x00\x23\x00\x24\x00\x25\x00\x26\x00\x27\x00\x28\x00\x29\x00\x2a\x00\x2b\x00\x2c\x00\x2d\x00\x2e\x00\x2f\x00\x30\x00\x31\x00\x32\x00\x33\x00\x34\x00\x35\x00\x36\x00\x37\x00\x38\x00\x39\x00\x3a\x00\x3b\x00\x3c\x00\x3d\x00\x3e\x00\x3f\x00\x40\x00\x41\x00\x42\x00\x43\x00\x44\x00\x45\x00\x46\x00\x47\x00\x48\x00\x49\x00\x4a\x00\x4b\x00\x4c\x00\x4d\x00\x4e\x00\x4f\x00\x50\x00\x51\x00\x52\x00\x53\x00\x54\x00\x55\x00\x56\x00\x57\x00\x58\x00\x59\x00\x5a\x00\x5b\x00\x5c\x00\x5d\x00\x5e\x00\x5f\x00\x60\x00\x61\x00\x62\x00\x63\x00\x64\x00\x65\x00\x66\x00\x67\x00\x68\x00\x69\x00\x6a\x00\x6b\x00\x6c\x00\x6d\x00\x6e\x00\x6f\x00\x70\x00\x71\x00\x72\x00\x73\x00\x74\x00\x75\x00\x76\x00\x77\x
00\x78\x00\x79\x00\x7a\x00\x7b\x00\x7c\x00\x7d\x00\x7e\x00\x7f\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\x27\x00\x22\x00\xff\xff\xff\xff\xff\xff\xff\xff\x27\x00\x30\x00\x31\x00\x32\x00\x33\x00\x34\x00\x35\x00\x36\x00\x37\x00\x38\x00\x39\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x5c\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x5c\x00\x65\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x66\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x6e\x00\xff\xff\xff\xff\xff\xff\x72\x00\xff\xff\x74\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x80\x00\x81\x00\x82\x00\x83\x00\x84\x00\x85\x00\x86\x00\x87\x00\x88\x00\x89\x00\x8a\x00\x8b\x00\x8c\x00\x8d\x00\x8e\x00\x8f\x00\x90\x00\x91\x00\x92\x00\x93\x00\x94\x00\x95\x00\x96\x00\x97\x00\x98\x00\x99\x00\x9a\x00\x9b\x00\x9c\x00\x9d\x00\x9e\x00\x9f\x00\xa0\x00\xa1\x00\xa2\x00\xa3\x00\xa4\x00\xa5\x00\xa6\x00\xa7\x00\xa8\x00\xa9\x00\xaa\x00\xab\x00\xac\x00\xad\x00\xae\x00\xaf\x00\xb0\x00\xb1\x00\xb2\x00\xb3\x00\xb4\x00\xb5\x00\xb6\x00\xb7\x00\xb8\x00\xb9\x00\xba\x00\xbb\x00\xbc\x00\xbd\x00\xbe\x00\xbf\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\x00\x00\x01\x00\x02\x00\x03\x00\x04\x00\x05\x00\x06\x00\x07\x00\x08\x00\x09\x00\x0a\x00\x0b\x00\x0c\x00\x0d\x00\x0e\x00\x0f\x00\x10\x00\x11\x00\x12\x00\x13\x00\x14\x00\x15\x00\x16\x00\x17\x00\x18\x00\x19\x00\x1a\x00\x1b\x00\x1c\x00\x1d\x00\x1e\x00\x1f\x00\x20\x00\x21\x00\x22\x00\x23\x00\x24\x00\x25\x00\x26\x00\x27\x00\x28\x00\x29\x00\x2a\x00\x2b\x00\x2c\x00\x2d\x00\x2e\x00\x2f\x00\x30\x00\x31\x00\x32\x00\x33\x00\x34\x00\x35\x00\x36\x00\x37\x00\x38\x00\x39\x00\x3a\x00\x3b\x00\x3c\x00\x3d\x00\x3e\x00\x3f\x00\x40\x00\x41\x00\x42\x00\x43\x00\x44\x00\x45\x00\x46\x00\x47\x00\x48\x00\x49\x00\x4a\x00\x4b\x00\x4c\x00\x4d\x00\x4e\x00\x4f\x00\x50\x00\x51\x00\x52\x00\x53\x00\x54\x00\x55\x00\x56\x00\x57\x00\x58\x00\x59\x00\x5a\x00\x5b\x00\x5c\x00\x5d\x00\x5e\x00\x5f\x00\x60\x00\x61\x00\x62\x00\x63\x00\x64\x00\x65\x00\x66\x00\x67\x00\x68\x00\x69\x00\x6a\x00\x6b\x00\x6c\x00\x6d\x00\x6e\x00\x6f\x00\x70\x00\x71\x00\x72\x00\x73\x00\x74\x00\x75\x00\x76\x00\x77\x00\x78\x00\x79\x00\x7a\x00\x7b\x00\x7c\x00\x7d\x00\x7e\x00\x7f\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\
xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\xbf\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\x0a\x00\x80\x00\x81\x00\x82\x00\x83\x00\x84\x00\x85\x00\x86\x00\x87\x00\x88\x00\x89\x00\x8a\x00\x8b\x00\x8c\x00\x8d\x00\x8e\x00\x8f\x00\x90\x00\x91\x00\x92\x00\x93\x00\x94\x00\x95\x00\x96\x00\x22\x00\x98\x00\x99\x00\x9a\x00\x9b\x00\x9c\x00\x9d\x00\x9e\x00\x9f\x00\xa0\x00\xa1\x00\xa2\x00\xa3\x00\xa4\x00\xa5\x00\xa6\x00\xa7\x00\xa8\x00\xa9\x00\xaa\x00\xab\x00\xac\x00\xad\x00\xae\x00\xaf\x00\xb0\x00\xb1\x00\xb2\x00\xb3\x00\xb4\x00\xb5\x00\xb6\x00\xff\xff\xb8\x00\xb9\x00\xba\x00\xbb\x00\xbc\x00\xbd\x00\xbe\x00\xbf\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x5c\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x80\x00\x81\x00\x82\x00\x83\x00\x84\x00\x85\x00\x86\x00\x87\x00\x88\x00\x89\x00\x8a\x00\x8b\x00\x8c\x00\x8d\x00\x8e\x00\x8f\x00\x90\x00\x91\x00\x92\x00\x93\x00\x94\x00\x95\x00\x96\x00\x97\x00\x98\x00\x99\x00\x9a\x00\x9b\x00\x9c\x00\x9d\x00\x9e\x00\x9f\x00\xa0\x00\xa1\x00\xa2\x00\xa3\x00\xa4\x00\xa5\x00\xa6\x00\xa7\x00\xa8\x00\xa9\x00\xaa\x00\xab\x00\xac\x00\xad\x00\xae\x00\xaf\x00\xb0\x00\xb1\x00\xb2\x00\xb3\x00\xb4\x00\xb5\x00\xb6\x00\xb7\x00\xb8\x00\xb9\x00\xba\x00\xbb\x00\xbc\x00\xbd\x00\xbe\x00\xbf\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\x80\x00\x81\x00\x82\x00\x83\x00\x84\x00\x85\x00\x86\x00\x87\x00\x88\x00\x89\x00\x8a\x00\x8b\x00\x8c\x00\x8d\x00\x8e\x00\x8f\x00\x90\x00\x91\x00\x92\x00\x93\x00\x94\x00\x95\x00\x96\x00\xff\xff\x98\x00\x99\x00\x9a\x00\x9b\x00\x9c\x00\x9d\x00\x9e\x00\x9f\x00\xa0\x00\xa1\x00\xa2\x00\xa3\x00\xa4\x00\xa5\x00\xa6\x00\xa7\x00\xa8\x00\xa9\x00\xaa\x00\xab\x00\xac\x00\xad\x00\xae\x00\xaf\x00\xb0\x00\xb1\x00\xb2\x00\xb3\x00\xb4\x00\xb5\x00\xb6\x00\xff\xff\xb8\x00\xb9\x00\xba\x00\xbb\x00\xbc\x00\xbd\x00\xbe\x00\xbf\x00\x00\x00\x01\x00\x02\x00\x03\x00\x04\x00\x05\x00\x06\x00\x07\x00\x08\x00\x09\x00\x0a\x00\x0b\x00\x0c\x00\x0d\x00\x0e\x00\x0f\x00\x10\x00\x11\x00\x12\x00\x13\x00\x14\x00\x15\x00\x16\x00\x17\x00\x18\x00\x19\x00\x1a\x00\x1b\x00\x1c\x00\x1d\x00\x1e\x00\x1f\x00\x20
\x00\x21\x00\x22\x00\x23\x00\x24\x00\x25\x00\x26\x00\x27\x00\x28\x00\x29\x00\x2a\x00\x2b\x00\x2c\x00\x2d\x00\x2e\x00\x2f\x00\x30\x00\x31\x00\x32\x00\x33\x00\x34\x00\x35\x00\x36\x00\x37\x00\x38\x00\x39\x00\x3a\x00\x3b\x00\x3c\x00\x3d\x00\x3e\x00\x3f\x00\x40\x00\x41\x00\x42\x00\x43\x00\x44\x00\x45\x00\x46\x00\x47\x00\x48\x00\x49\x00\x4a\x00\x4b\x00\x4c\x00\x4d\x00\x4e\x00\x4f\x00\x50\x00\x51\x00\x52\x00\x53\x00\x54\x00\x55\x00\x56\x00\x57\x00\x58\x00\x59\x00\x5a\x00\x5b\x00\x5c\x00\x5d\x00\x5e\x00\x5f\x00\x60\x00\x61\x00\x62\x00\x63\x00\x64\x00\x65\x00\x66\x00\x67\x00\x68\x00\x69\x00\x6a\x00\x6b\x00\x6c\x00\x6d\x00\x6e\x00\x6f\x00\x70\x00\x71\x00\x72\x00\x73\x00\x74\x00\x75\x00\x76\x00\x77\x00\x78\x00\x79\x00\x7a\x00\x7b\x00\x7c\x00\x7d\x00\x7e\x00\x7f\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\x0a\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x80\x00\x81\x00\x82\x00\x83\x00\x84\x00\x85\x00\x86\x00\x87\x00\x88\x00\x89\x00\x8a\x00\x8b\x00\x8c\x00\x8d\x00\x8e\x00\x8f\x00\x90\x00\x91\x00\x92\x00\x93\x00\x94\x00\x95\x00\x96\x00\x97\x00\x98\x00\x99\x00\x9a\x00\x9b\x00\x9c\x00\x9d\x00\x9e\x00\x9f\x00\xa0\x00\xa1\x00\xa2\x00\xa3\x00\xa4\x00\xa5\x00\xa6\x00\xa7\x00\xa8\x00\xa9\x00\xaa\x00\xab\x00\xac\x00\xad\x00\xae\x00\xaf\x00\xb0\x00\xb1\x00\xb2\x00\xb3\x00\xb4\x00\xb5\x00\xb6\x00\xb7\x00\xb8\x00\xb9\x00\xba\x00\xbb\x00\xbc\x00\xbd\x00\xbe\x00\xbf\x00\xc0\x00\xc1\x00\xc2\x00\xc3\x00\xc4\x00\xc5\x00\xc6\x00\xc7\x00\xc8\x00\xc9\x00\xca\x00\xcb\x00\xcc\x00\xcd\x00\xce\x00\xcf\x00\xd0\x00\xd1\x00\xd2\x00\xd3\x00\xd4\x00\xd5\x00\xd6\x00\xd7\x00\xd8\x00\xd9\x00\xda\x00\xdb\x00\xdc\x00\xdd\x00\xde\x00\xdf\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\xe6\x00\xe7\x00\xe8\x00\xe9\x00\xea\x00\xeb\x00\xec\x00\xed\x00\xee\x00\xef\x00\xf0\x00\xf1\x00\xf2\x00\xf3\x00\xf4\x00\xf5\x00\xf6\x00\xf7\x00\xf8\x00\xf9\x00\xfa\x00\xfb\x00\xfc\x00\xfd\x00\xfe\x00\xff\x00\x27\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x30\x00\x31\x00\x32\x00\x33\x00\x34\x00\x35\x00\x36\x00\x37\x00\x38\x00\x39\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x41\x00\x42\x00\x43\x00\x44\x00\x45\x00\x46\x00\x47\x00\x48\x00\x49\x00\x4a\x00\x4b\x00\x4c\x00\x4d\x0
0\x4e\x00\x4f\x00\x50\x00\x51\x00\x52\x00\x53\x00\x54\x00\x55\x00\x56\x00\x57\x00\x58\x00\x59\x00\x5a\x00\xff\xff\xff\xff\xff\xff\xff\xff\x5f\x00\xff\xff\x61\x00\x62\x00\x63\x00\x64\x00\x65\x00\x66\x00\x67\x00\x68\x00\x69\x00\x6a\x00\x6b\x00\x6c\x00\x6d\x00\x6e\x00\x6f\x00\x70\x00\x71\x00\x72\x00\x73\x00\x74\x00\x75\x00\x76\x00\x77\x00\x78\x00\x79\x00\x7a\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x27\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x30\x00\x31\x00\x32\x00\x33\x00\x34\x00\x35\x00\x36\x00\x37\x00\x38\x00\x39\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x41\x00\x42\x00\x43\x00\x44\x00\x45\x00\x46\x00\x47\x00\x48\x00\x49\x00\x4a\x00\x4b\x00\x4c\x00\x4d\x00\x4e\x00\x4f\x00\x50\x00\x51\x00\x52\x00\x53\x00\x54\x00\x55\x00\x56\x00\x57\x00\x58\x00\x59\x00\x5a\x00\xff\xff\xff\xff\xff\xff\xff\xff\x5f\x00\xc3\x00\x61\x00\x62\x00\x63\x00\x64\x00\x65\x00\x66\x00\x67\x00\x68\x00\x69\x00\x6a\x00\x6b\x00\x6c\x00\x6d\x00\x6e\x00\x6f\x00\x70\x00\x71\x00\x72\x00\x73\x00\x74\x00\x75\x00\x76\x00\x77\x00\x78\x00\x79\x00\x7a\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xc3\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff"# alex_deflt :: AlexAddr alex_deflt = AlexA# "\xff\xff\x05\x00\xff\xff\xff\xff\xff\xff\x05\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x14\x00\x14\x00\xff\xff\x16\x00\x16\x00\x18\x00\x18\x00\x1c\x00\x1c\x00\x21\x00\x21\x00\x26\x00\x26\x00\x05\x00\x05\x00\x05\x00\x27\x00\x27\x00\x03\x00\x03\x00\x03\x00\x03\x00\x2c\x00\xff\xff\x2c\x00\x2c\x00\x31\x00\x31\x00\xff\xff\xff\xff\xff\xff\x2c\x00\xff\xff\xff\xff\x32\x00\x32\x00\x32\x00\x32\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff"# alex_accept = listArray (0 :: Int, 63) [ AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccNone , AlexAccSkip , AlexAccSkip , 
AlexAccSkip , AlexAcc 10 , AlexAcc 9 , AlexAcc 8 , AlexAcc 7 , AlexAcc 6 , AlexAcc 5 , AlexAcc 4 , AlexAcc 3 , AlexAcc 2 , AlexAcc 1 , AlexAcc 0 ] alex_actions = array (0 :: Int, 11) [ (10,alex_action_3) , (9,alex_action_3) , (8,alex_action_3) , (7,alex_action_3) , (6,alex_action_4) , (5,alex_action_5) , (4,alex_action_6) , (3,alex_action_7) , (2,alex_action_8) , (1,alex_action_9) , (0,alex_action_9) ] {-# LINE 72 "BNFC/Lex.x" #-} -- | Create a token with position. tok :: (String -> Tok) -> (Posn -> String -> Token) tok f p = PT p . f -- | Token without position. data Tok = TK {-# UNPACK #-} !TokSymbol -- ^ Reserved word or symbol. | TL !String -- ^ String literal. | TI !String -- ^ Integer literal. | TV !String -- ^ Identifier. | TD !String -- ^ Float literal. | TC !String -- ^ Character literal. | T_Identifier !String deriving (Eq, Show, Ord) -- | Smart constructor for 'Tok' for the sake of backwards compatibility. pattern TS :: String -> Int -> Tok pattern TS t i = TK (TokSymbol t i) -- | Keyword or symbol tokens have a unique ID. data TokSymbol = TokSymbol { tsText :: String -- ^ Keyword or symbol text. , tsID :: !Int -- ^ Unique ID. } deriving (Show) -- | Keyword/symbol equality is determined by the unique ID. instance Eq TokSymbol where (==) = (==) `on` tsID -- | Keyword/symbol ordering is determined by the unique ID. instance Ord TokSymbol where compare = compare `on` tsID -- | Token with position. data Token = PT Posn Tok | Err Posn deriving (Eq, Show, Ord) -- | Pretty print a position. printPosn :: Posn -> String printPosn (Pn _ l c) = "line " ++ show l ++ ", column " ++ show c -- | Pretty print the position of the first token in the list. tokenPos :: [Token] -> String tokenPos (t:_) = printPosn (tokenPosn t) tokenPos [] = "end of file" -- | Get the position of a token. tokenPosn :: Token -> Posn tokenPosn (PT p _) = p tokenPosn (Err p) = p -- | Get line and column of a token. tokenLineCol :: Token -> (Int, Int) tokenLineCol = posLineCol . tokenPosn -- | Get line and column of a position. posLineCol :: Posn -> (Int, Int) posLineCol (Pn _ l c) = (l,c) -- | Convert a token into "position token" form. mkPosToken :: Token -> ((Int, Int), String) mkPosToken t = (tokenLineCol t, tokenText t) -- | Convert a token to its text. tokenText :: Token -> String tokenText t = case t of PT _ (TS s _) -> s PT _ (TL s) -> show s PT _ (TI s) -> s PT _ (TV s) -> s PT _ (TD s) -> s PT _ (TC s) -> s Err _ -> "#error" PT _ (T_Identifier s) -> s -- | Convert a token to a string. prToken :: Token -> String prToken t = tokenText t -- | Finite map from text to token organized as binary search tree. data BTree = N -- ^ Nil (leaf). | B String Tok BTree BTree -- ^ Binary node. deriving (Show) -- | Convert potential keyword into token or use fallback conversion. eitherResIdent :: (String -> Tok) -> String -> Tok eitherResIdent tv s = treeFind resWords where treeFind N = tv s treeFind (B a t left right) = case compare s a of LT -> treeFind left GT -> treeFind right EQ -> t -- | The keywords and symbols of the language organized as binary search tree. resWords :: BTree resWords = b "delimiters" 20 (b ";" 10 (b "," 5 (b "*" 3 (b ")" 2 (b "(" 1 N N) N) (b "+" 4 N N)) (b ":" 8 (b "." 7 (b "-" 6 N N) N) (b "::=" 9 N N))) (b "_" 15 (b "[" 13 (b "?" 
12 (b "=" 11 N N) N) (b "]" 14 N N)) (b "comment" 18 (b "coercions" 17 (b "char" 16 N N) N) (b "define" 19 N N)))) (b "rules" 30 (b "layout" 25 (b "eps" 23 (b "entrypoints" 22 (b "digit" 21 N N) N) (b "internal" 24 N N)) (b "nonempty" 28 (b "lower" 27 (b "letter" 26 N N) N) (b "position" 29 N N))) (b "toplevel" 35 (b "terminator" 33 (b "stop" 32 (b "separator" 31 N N) N) (b "token" 34 N N)) (b "|" 38 (b "{" 37 (b "upper" 36 N N) N) (b "}" 39 N N)))) where b s n = B bs (TS bs n) where bs = s -- | Unquote string literal. unescapeInitTail :: String -> String unescapeInitTail = id . unesc . tail . id where unesc s = case s of '\\':c:cs | elem c ['\"', '\\', '\''] -> c : unesc cs '\\':'n':cs -> '\n' : unesc cs '\\':'t':cs -> '\t' : unesc cs '\\':'r':cs -> '\r' : unesc cs '\\':'f':cs -> '\f' : unesc cs '"':[] -> [] c:cs -> c : unesc cs _ -> [] ------------------------------------------------------------------- -- Alex wrapper code. -- A modified "posn" wrapper. ------------------------------------------------------------------- data Posn = Pn !Int !Int !Int deriving (Eq, Show, Ord) alexStartPos :: Posn alexStartPos = Pn 0 1 1 alexMove :: Posn -> Char -> Posn alexMove (Pn a l c) '\t' = Pn (a+1) l (((c+7) `div` 8)*8+1) alexMove (Pn a l c) '\n' = Pn (a+1) (l+1) 1 alexMove (Pn a l c) _ = Pn (a+1) l (c+1) type Byte = Word8 type AlexInput = (Posn, -- current position, Char, -- previous char [Byte], -- pending bytes on the current char String) -- current input string tokens :: String -> [Token] tokens str = go (alexStartPos, '\n', [], str) where go :: AlexInput -> [Token] go inp@(pos, _, _, str) = case alexScan inp 0 of AlexEOF -> [] AlexError (pos, _, _, _) -> [Err pos] AlexSkip inp' len -> go inp' AlexToken inp' len act -> act pos (take len str) : (go inp') alexGetByte :: AlexInput -> Maybe (Byte,AlexInput) alexGetByte (p, c, (b:bs), s) = Just (b, (p, c, bs, s)) alexGetByte (p, _, [], s) = case s of [] -> Nothing (c:s) -> let p' = alexMove p c (b:bs) = utf8Encode c in p' `seq` Just (b, (p', c, bs, s)) alexInputPrevChar :: AlexInput -> Char alexInputPrevChar (p, c, bs, s) = c -- | Encode a Haskell String to a list of Word8 values, in UTF8 format. utf8Encode :: Char -> [Word8] utf8Encode = map fromIntegral . go . ord where go oc | oc <= 0x7f = [oc] | oc <= 0x7ff = [ 0xc0 + (oc `Data.Bits.shiftR` 6) , 0x80 + oc Data.Bits..&. 0x3f ] | oc <= 0xffff = [ 0xe0 + (oc `Data.Bits.shiftR` 12) , 0x80 + ((oc `Data.Bits.shiftR` 6) Data.Bits..&. 0x3f) , 0x80 + oc Data.Bits..&. 0x3f ] | otherwise = [ 0xf0 + (oc `Data.Bits.shiftR` 18) , 0x80 + ((oc `Data.Bits.shiftR` 12) Data.Bits..&. 0x3f) , 0x80 + ((oc `Data.Bits.shiftR` 6) Data.Bits..&. 0x3f) , 0x80 + oc Data.Bits..&. 0x3f ] alex_action_3 = tok (eitherResIdent TV) alex_action_4 = tok (eitherResIdent T_Identifier) alex_action_5 = tok (eitherResIdent TV) alex_action_6 = tok (TL . unescapeInitTail) alex_action_7 = tok TC alex_action_8 = tok TI alex_action_9 = tok TD {-# LINE 1 "templates/GenericTemplate.hs" #-} -- ----------------------------------------------------------------------------- -- ALEX TEMPLATE -- -- This code is in the PUBLIC DOMAIN; you may copy it freely and use -- it for any purpose whatsoever. -- ----------------------------------------------------------------------------- -- INTERNALS and main scanner engine -- Do not remove this comment. Required to fix CPP parsing when using GCC and a clang-compiled alex. 
#if __GLASGOW_HASKELL__ > 706 #define GTE(n,m) (tagToEnum# (n >=# m)) #define EQ(n,m) (tagToEnum# (n ==# m)) #else #define GTE(n,m) (n >=# m) #define EQ(n,m) (n ==# m) #endif data AlexAddr = AlexA# Addr# -- Do not remove this comment. Required to fix CPP parsing when using GCC and a clang-compiled alex. #if __GLASGOW_HASKELL__ < 503 uncheckedShiftL# = shiftL# #endif {-# INLINE alexIndexInt16OffAddr #-} alexIndexInt16OffAddr :: AlexAddr -> Int# -> Int# alexIndexInt16OffAddr (AlexA# arr) off = #ifdef WORDS_BIGENDIAN narrow16Int# i where i = word2Int# ((high `uncheckedShiftL#` 8#) `or#` low) high = int2Word# (ord# (indexCharOffAddr# arr (off' +# 1#))) low = int2Word# (ord# (indexCharOffAddr# arr off')) off' = off *# 2# #else #if __GLASGOW_HASKELL__ >= 901 int16ToInt# #endif (indexInt16OffAddr# arr off) #endif {-# INLINE alexIndexInt32OffAddr #-} alexIndexInt32OffAddr :: AlexAddr -> Int# -> Int# alexIndexInt32OffAddr (AlexA# arr) off = #ifdef WORDS_BIGENDIAN narrow32Int# i where i = word2Int# ((b3 `uncheckedShiftL#` 24#) `or#` (b2 `uncheckedShiftL#` 16#) `or#` (b1 `uncheckedShiftL#` 8#) `or#` b0) b3 = int2Word# (ord# (indexCharOffAddr# arr (off' +# 3#))) b2 = int2Word# (ord# (indexCharOffAddr# arr (off' +# 2#))) b1 = int2Word# (ord# (indexCharOffAddr# arr (off' +# 1#))) b0 = int2Word# (ord# (indexCharOffAddr# arr off')) off' = off *# 4# #else #if __GLASGOW_HASKELL__ >= 901 int32ToInt# #endif (indexInt32OffAddr# arr off) #endif #if __GLASGOW_HASKELL__ < 503 quickIndex arr i = arr ! i #else -- GHC >= 503, unsafeAt is available from Data.Array.Base. quickIndex = unsafeAt #endif -- ----------------------------------------------------------------------------- -- Main lexing routines data AlexReturn a = AlexEOF | AlexError !AlexInput | AlexSkip !AlexInput !Int | AlexToken !AlexInput !Int a -- alexScan :: AlexInput -> StartCode -> AlexReturn a alexScan input__ (I# (sc)) = alexScanUser undefined input__ (I# (sc)) alexScanUser user__ input__ (I# (sc)) = case alex_scan_tkn user__ input__ 0# input__ sc AlexNone of (AlexNone, input__') -> case alexGetByte input__ of Nothing -> AlexEOF Just _ -> AlexError input__' (AlexLastSkip input__'' len, _) -> AlexSkip input__'' len (AlexLastAcc k input__''' len, _) -> AlexToken input__''' len (alex_actions ! k) -- Push the input through the DFA, remembering the most recent accepting -- state it encountered. alex_scan_tkn user__ orig_input len input__ s last_acc = input__ `seq` -- strict in the input let new_acc = (check_accs (alex_accept `quickIndex` (I# (s)))) in new_acc `seq` case alexGetByte input__ of Nothing -> (new_acc, input__) Just (c, new_input) -> case fromIntegral c of { (I# (ord_c)) -> let base = alexIndexInt32OffAddr alex_base s offset = (base +# ord_c) check = alexIndexInt16OffAddr alex_check offset new_s = if GTE(offset,0#) && EQ(check,ord_c) then alexIndexInt16OffAddr alex_table offset else alexIndexInt16OffAddr alex_deflt s in case new_s of -1# -> (new_acc, input__) -- on an error, we want to keep the input *before* the -- character that failed, not after. 
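-- Otherwise advance the DFA.  The token length is counted in characters, not
-- bytes: UTF-8 continuation bytes lie in 0x80-0xBF, so only bytes < 0x80
-- (ASCII) or >= 0xC0 (lead bytes) bump the length counter.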
_ -> alex_scan_tkn user__ orig_input (if c < 0x80 || c >= 0xC0 then (len +# 1#) else len) -- note that the length is increased ONLY if this is the 1st byte in a char encoding) new_input new_s new_acc } where check_accs (AlexAccNone) = last_acc check_accs (AlexAcc a ) = AlexLastAcc a input__ (I# (len)) check_accs (AlexAccSkip) = AlexLastSkip input__ (I# (len)) data AlexLastAcc = AlexNone | AlexLastAcc !Int !AlexInput !Int | AlexLastSkip !AlexInput !Int data AlexAcc user = AlexAccNone | AlexAcc Int | AlexAccSkip BNFC-2.9.5/src/BNFC/Lex.x0000644000000000000000000001757407346545000012666 0ustar0000000000000000-- -*- haskell -*- File generated by the BNF Converter (bnfc 2.9.3). -- Lexer definition for use with Alex 3 { {-# OPTIONS -fno-warn-incomplete-patterns #-} {-# OPTIONS_GHC -w #-} {-# LANGUAGE PatternSynonyms #-} module BNFC.Lex where import Prelude import qualified Data.Bits import Data.Char (ord) import Data.Function (on) import Data.Word (Word8) } -- Predefined character classes $c = [A-Z\192-\221] # [\215] -- capital isolatin1 letter (215 = \times) FIXME $s = [a-z\222-\255] # [\247] -- small isolatin1 letter (247 = \div ) FIXME $l = [$c $s] -- letter $d = [0-9] -- digit $i = [$l $d _ '] -- identifier character $u = [. \n] -- universal: any character -- Symbols and non-identifier-like reserved words @rsyms = \; | \. | \: \: \= | \[ | \] | \, | \_ | \( | \: | \) | \= | \| | \- | \* | \+ | \? | \{ | \} :- -- Line comment "--" "--" [.]* ; -- Block comment "{-" "-}" \{ \- [$u # \-]* \- ([$u # [\- \}]] [$u # \-]* \- | \-)* \} ; -- Whitespace (skipped) $white+ ; -- Symbols @rsyms { tok (eitherResIdent TV) } -- token Identifier $l (\_ | ($d | $l)) * { tok (eitherResIdent T_Identifier) } -- Keywords and Ident $l $i* { tok (eitherResIdent TV) } -- String \" ([$u # [\" \\ \n]] | (\\ (\" | \\ | \' | n | t | r | f)))* \" { tok (TL . unescapeInitTail) } -- Char \' ($u # [\' \\] | \\ [\\ \' n t r f]) \' { tok TC } -- Integer $d+ { tok TI } -- Double $d+ \. $d+ (e (\-)? $d+)? { tok TD } { -- | Create a token with position. tok :: (String -> Tok) -> (Posn -> String -> Token) tok f p = PT p . f -- | Token without position. data Tok = TK {-# UNPACK #-} !TokSymbol -- ^ Reserved word or symbol. | TL !String -- ^ String literal. | TI !String -- ^ Integer literal. | TV !String -- ^ Identifier. | TD !String -- ^ Float literal. | TC !String -- ^ Character literal. | T_Identifier !String deriving (Eq, Show, Ord) -- | Smart constructor for 'Tok' for the sake of backwards compatibility. pattern TS :: String -> Int -> Tok pattern TS t i = TK (TokSymbol t i) -- | Keyword or symbol tokens have a unique ID. data TokSymbol = TokSymbol { tsText :: String -- ^ Keyword or symbol text. , tsID :: !Int -- ^ Unique ID. } deriving (Show) -- | Keyword/symbol equality is determined by the unique ID. instance Eq TokSymbol where (==) = (==) `on` tsID -- | Keyword/symbol ordering is determined by the unique ID. instance Ord TokSymbol where compare = compare `on` tsID -- | Token with position. data Token = PT Posn Tok | Err Posn deriving (Eq, Show, Ord) -- | Pretty print a position. printPosn :: Posn -> String printPosn (Pn _ l c) = "line " ++ show l ++ ", column " ++ show c -- | Pretty print the position of the first token in the list. tokenPos :: [Token] -> String tokenPos (t:_) = printPosn (tokenPosn t) tokenPos [] = "end of file" -- | Get the position of a token. tokenPosn :: Token -> Posn tokenPosn (PT p _) = p tokenPosn (Err p) = p -- | Get line and column of a token. 
tokenLineCol :: Token -> (Int, Int) tokenLineCol = posLineCol . tokenPosn -- | Get line and column of a position. posLineCol :: Posn -> (Int, Int) posLineCol (Pn _ l c) = (l,c) -- | Convert a token into "position token" form. mkPosToken :: Token -> ((Int, Int), String) mkPosToken t = (tokenLineCol t, tokenText t) -- | Convert a token to its text. tokenText :: Token -> String tokenText t = case t of PT _ (TS s _) -> s PT _ (TL s) -> show s PT _ (TI s) -> s PT _ (TV s) -> s PT _ (TD s) -> s PT _ (TC s) -> s Err _ -> "#error" PT _ (T_Identifier s) -> s -- | Convert a token to a string. prToken :: Token -> String prToken t = tokenText t -- | Finite map from text to token organized as binary search tree. data BTree = N -- ^ Nil (leaf). | B String Tok BTree BTree -- ^ Binary node. deriving (Show) -- | Convert potential keyword into token or use fallback conversion. eitherResIdent :: (String -> Tok) -> String -> Tok eitherResIdent tv s = treeFind resWords where treeFind N = tv s treeFind (B a t left right) = case compare s a of LT -> treeFind left GT -> treeFind right EQ -> t -- | The keywords and symbols of the language organized as binary search tree. resWords :: BTree resWords = b "delimiters" 20 (b ";" 10 (b "," 5 (b "*" 3 (b ")" 2 (b "(" 1 N N) N) (b "+" 4 N N)) (b ":" 8 (b "." 7 (b "-" 6 N N) N) (b "::=" 9 N N))) (b "_" 15 (b "[" 13 (b "?" 12 (b "=" 11 N N) N) (b "]" 14 N N)) (b "comment" 18 (b "coercions" 17 (b "char" 16 N N) N) (b "define" 19 N N)))) (b "rules" 30 (b "layout" 25 (b "eps" 23 (b "entrypoints" 22 (b "digit" 21 N N) N) (b "internal" 24 N N)) (b "nonempty" 28 (b "lower" 27 (b "letter" 26 N N) N) (b "position" 29 N N))) (b "toplevel" 35 (b "terminator" 33 (b "stop" 32 (b "separator" 31 N N) N) (b "token" 34 N N)) (b "|" 38 (b "{" 37 (b "upper" 36 N N) N) (b "}" 39 N N)))) where b s n = B bs (TS bs n) where bs = s -- | Unquote string literal. unescapeInitTail :: String -> String unescapeInitTail = id . unesc . tail . id where unesc s = case s of '\\':c:cs | elem c ['\"', '\\', '\''] -> c : unesc cs '\\':'n':cs -> '\n' : unesc cs '\\':'t':cs -> '\t' : unesc cs '\\':'r':cs -> '\r' : unesc cs '\\':'f':cs -> '\f' : unesc cs '"':[] -> [] c:cs -> c : unesc cs _ -> [] ------------------------------------------------------------------- -- Alex wrapper code. -- A modified "posn" wrapper. ------------------------------------------------------------------- data Posn = Pn !Int !Int !Int deriving (Eq, Show, Ord) alexStartPos :: Posn alexStartPos = Pn 0 1 1 alexMove :: Posn -> Char -> Posn alexMove (Pn a l c) '\t' = Pn (a+1) l (((c+7) `div` 8)*8+1) alexMove (Pn a l c) '\n' = Pn (a+1) (l+1) 1 alexMove (Pn a l c) _ = Pn (a+1) l (c+1) type Byte = Word8 type AlexInput = (Posn, -- current position, Char, -- previous char [Byte], -- pending bytes on the current char String) -- current input string tokens :: String -> [Token] tokens str = go (alexStartPos, '\n', [], str) where go :: AlexInput -> [Token] go inp@(pos, _, _, str) = case alexScan inp 0 of AlexEOF -> [] AlexError (pos, _, _, _) -> [Err pos] AlexSkip inp' len -> go inp' AlexToken inp' len act -> act pos (take len str) : (go inp') alexGetByte :: AlexInput -> Maybe (Byte,AlexInput) alexGetByte (p, c, (b:bs), s) = Just (b, (p, c, bs, s)) alexGetByte (p, _, [], s) = case s of [] -> Nothing (c:s) -> let p' = alexMove p c (b:bs) = utf8Encode c in p' `seq` Just (b, (p', c, bs, s)) alexInputPrevChar :: AlexInput -> Char alexInputPrevChar (p, c, bs, s) = c -- | Encode a Haskell String to a list of Word8 values, in UTF8 format. 
utf8Encode :: Char -> [Word8] utf8Encode = map fromIntegral . go . ord where go oc | oc <= 0x7f = [oc] | oc <= 0x7ff = [ 0xc0 + (oc `Data.Bits.shiftR` 6) , 0x80 + oc Data.Bits..&. 0x3f ] | oc <= 0xffff = [ 0xe0 + (oc `Data.Bits.shiftR` 12) , 0x80 + ((oc `Data.Bits.shiftR` 6) Data.Bits..&. 0x3f) , 0x80 + oc Data.Bits..&. 0x3f ] | otherwise = [ 0xf0 + (oc `Data.Bits.shiftR` 18) , 0x80 + ((oc `Data.Bits.shiftR` 12) Data.Bits..&. 0x3f) , 0x80 + ((oc `Data.Bits.shiftR` 6) Data.Bits..&. 0x3f) , 0x80 + oc Data.Bits..&. 0x3f ] } BNFC-2.9.5/src/BNFC/Lexing.hs0000644000000000000000000002576007346545000013523 0ustar0000000000000000{-# LANGUAGE PatternGuards #-} module BNFC.Lexing ( mkLexer, LexType(..), mkRegMultilineComment , debugPrint -- to avoid warning about unused definition ) where -- import Control.Arrow ( (&&&) ) import Data.List ( inits, tails ) import BNFC.Abs ( Reg(..) ) import BNFC.Print ( printTree ) -- for debug printing import BNFC.CF import BNFC.Regex ( simpReg ) import BNFC.Utils ( unless ) -- Used in test suite debugPrint :: Reg -> IO () debugPrint = putStrLn . concat . words . printTree -- Abstract lexer data LexType = LexComment | LexToken String | LexSymbols mkLexer :: CF -> [(Reg, LexType)] mkLexer cf = concat -- comments [ [ (mkRegSingleLineComment s, LexComment) | s <- snd (comments cf) ] , [ (mkRegMultilineComment b e, LexComment) | (b,e) <- fst (comments cf) ] -- user tokens , [ (reg, LexToken name) | (name, reg) <- tokenPragmas cf] -- predefined tokens , [ ( regIdent, LexToken "Ident" ) ] -- Symbols , unless (null $ cfgSymbols cf) [ (foldl1 RAlt (map RSeqs (cfgSymbols cf)), LexSymbols ) ] -- Built-ins , [ ( regInteger, LexToken "Integer") , ( regDouble , LexToken "Double" ) , ( regString , LexToken "String" ) , ( regChar , LexToken "Char" ) ] ] (<&>) :: Reg -> Reg -> Reg (<&>) = RSeq (<|>) :: Reg -> Reg -> Reg (<|>) = RAlt -- Bult-in tokens -- the tests make sure that they correspond to what is in the LBNF reference -- | Ident regex -- >>> debugPrint regIdent -- letter(letter|digit|'_'|'\'')* regIdent :: Reg regIdent = RLetter <&> RStar (RLetter <|> RDigit <|> RChar '_' <|> RChar '\'') -- | Integer regex -- >>> debugPrint regInteger -- digit+ regInteger :: Reg regInteger = RPlus RDigit -- | String regex -- >>> debugPrint regString -- '"'(char-["\"\\"]|'\\'["\"\\nt"])*'"' regString :: Reg regString = RChar '"' <&> RStar ((RAny `RMinus` RAlts "\"\\") <|> (RChar '\\' <&> RAlts "\"\\nt")) <&> RChar '"' -- | Char regex -- >>> debugPrint regChar -- '\''(char-["'\\"]|'\\'["'\\nt"])'\'' regChar :: Reg regChar = RChar '\'' <&> (RMinus RAny (RAlts "'\\") <|> (RChar '\\' <&> RAlts "'\\nt")) <&> RChar '\'' -- | Double regex -- >>> debugPrint regDouble -- digit+'.'digit+('e''-'?digit+)? regDouble :: Reg regDouble = RPlus RDigit <&> RChar '.' <&> RPlus RDigit <&> ROpt (RChar 'e' <&> ROpt (RChar '-') <&> RPlus RDigit) -- | Create regex for single line comments -- >>> debugPrint $ mkRegSingleLineComment "--" -- {"--"}char*'\n' mkRegSingleLineComment :: String -> Reg mkRegSingleLineComment s = RSeqs s <&> RStar RAny <&> RChar '\n' -- -- | Create regex for multiline comments. -- -- -- -- >>> debugPrint $ mkRegMultilineComment "<" ">" -- -- '<'(char-'>')*'>' -- -- -- -- >>> debugPrint $ mkRegMultilineComment "" -- -- {"" -- {"". -- In the general but unlikely case, a comment terminator may have -- non-trivial internal repetitions, like in "ananas". 
While lexing -- "anananas", we need, after having seen "anana", fall back to state -- "ana", to correctly handle the rest "nas" of the input and recognize the -- comment terminator. -- See the Knuth-Morris-Pratt algorithm of complexity O(n+m) to recognize a -- keyword of length m in a text of length n. -- (Dragon book second edition section 3.4.5; -- Knuth/Morris/Pratt (J. Computing 1977), -- "Fast pattern matching on strings"). -- The basic idea is to construct the regular expression to recognize -- a text not containing @end@ but ending in @end@ from this DFA: -- -- * DFA-states: the prefixes of @end@, formally @inits end@, -- written a(1..i) for @i <= length end@. -- -- * Primary transitions ("spine") take us from state a(1..i) (called @ys@) -- to a(1..i+1) (called @x:ys@), consuming character a(i+1) (called @x@). -- -- * Fallback transitions take us from state a(1..i) (@ys@) to some previous -- state a(1..j) with j <= i, consuming character @z@=a(j) (unless j=0). -- The main condition for fallbacks is a(i-j+2..i)=a(1..j-1) ("suffix = prefix"), -- because then we can append a(j) to our truncated history a(i-j+2..i) -- and end up in a(1..j). -- The secondary condition is that we are obliged to not fall back further -- than we must: If consuming @z@ can get us to a(1..k) with k > j, -- we cannot fall back to a(1..j). -- -- The final @Reg@ transitions along the spine also involve "idling" on a state, -- meaning transition sequences bringing us back to the same state. -- The list @steps@ will contain the "spine" transitions (a(1..i)->a(1..i+1)) -- _including_ the idling. The first entry in the list is the last transition -- computed so far. @allSteps@ is then the complete @steps@ list, which can be -- joined by @RSeq@ (function @joinSteps@). -- -- Remark: -- Note that the generated regex can be too big for lexers to handle. -- For the example @end == "ananas"@, ocamllex uses up ~30.000 of its -- 32.767 maximal automaton transitions, which prevents comments -- ending in "ananas" to be part of a ocamllex lexer definition in practice. -- The Haskell lexer generator Alex is slow as well on this example, -- although the produced lexer is unproblematic in the end. -- -- Lexer generators _should_ be able to handle the regex we are producing here -- because the DFA has only O(n) states and O(n²) transitions where @n = length end@ -- is the length of the comment terminator @end@. -- -- It is just an awkward way to generate this DFA via the detour over a regex -- which in turn is dictated by the interface of lexer generators. -- The size of the regex tree seems to be O(n³)!? -- It would be much smaller as DAG (tree with sharing). -- Lexer generators often support regex definitions; we could make each entry -- in @steps@ a defined regex. However, it is not clear whether this sharing -- is utilized in the regex → NFA construction in the lexer generators. joinSteps :: Reg -> [Reg] -> Reg joinSteps = foldr (flip RSeq) -- Transitions of the spine of the automaton, with last transition first in the list. allSteps :: [Reg] allSteps = fst $ foldl next ([],[]) end -- @next (steps, ys) x@ calculates the next step, -- taking us from state @ys@ to state @x:ys@. next :: ([Reg],[Char]) -> Char -> ([Reg],[Char]) next ( steps -- [r(i-1,i), ..., r(0,1)], empty if i == 0 , ys -- [a(i),...,a(1)] , empty if i == 0 ) x -- a(i+1) = (step : steps, x:ys) where -- step = r(i,i+1) is the regular expression to go to next state. -- We can idle on state a(1..i) and then take the transition to a(1..i+1). 
step :: Reg step = RStar idle `RSeq` RChar x -- @idle@ presents all the possibilities to stay on the current state -- or fall back to a previous state and then again advance to the present state. -- We consider first the possibility to fall back to the start state a(1..0), -- and then the possibility to fall back to a(1..1), then, to a(1..2), etc., -- until staying on a(1..i). -- We are obliged to stay as far advanced as possible, we can only fall -- father back if we cannot stay more upfront. -- Transitioning to state a(1..j) is possible if -- * the next character is not x (a(i+1)), -- * the next character is a(j), -- * the last j-1 characters we processed, a(i-j+2..j) are a(1..j-1), -- * we cannot transition to a(1..j+1), a(1..j+2), ..., a(1..i). idle :: Reg idle = foldl RAlt toStart $ map snd possibilities where -- List of possibilities to go back to a previous state upon -- the given character and how to return to the present state. -- We calculate the possibilities in order of: -- * staying on the current state -- * falling back one state -- * falling back two states -- * ... -- * falling back to the start. -- The reason is that falling back further than necessary is not allowed. possibilities :: [(Char,Reg)] possibilities = foldl addPoss [] (zip3 ys conds $ inits steps) -- Fall back to the beginning and come back to the present state. toStart :: Reg toStart = joinSteps (RAny `RMinus` RAlts (x : map fst possibilities)) steps -- Adding a possiblity on top of the existing ones. addPoss :: [(Char,Reg)] -> (Char,Bool,[Reg]) -> [(Char,Reg)] addPoss poss -- List of possibilities (a(k),r) of falling back to a(k) and recovering to a(i) via r. (z, cond, steps) -- Investigating possibility to fall back to a(1..j) where cond says this is in principle -- possible if we read @z@, not @x@, and none of the previous possibilities. -- @steps@ brings us back to the current state (after falling back). | cond, z `notElem` exclude = (z, joinSteps (RChar z) steps) : poss | otherwise = poss where -- To fall back with @z@, we need to exclude the possibility of -- advancing (via character @x@) and falling back less. exclude :: [Char] exclude = x : map fst poss -- Conditions of whether a fallback is in principle possible, -- starting with the state we have been in previously, ending in the first state. -- If we are in state a(1..i), the possibility of falling back to a(1..j) -- is constrained on a(1..j-1) = a(i-j+2..i). conds :: [Bool] conds = zipWith (==) (tail $ reverse $ inits ys) (tail $ tails ys) BNFC-2.9.5/src/BNFC/License.hs0000644000000000000000000000400607346545000013645 0ustar0000000000000000-- This file was autogenerated from LICENSE, do not edit!! {-# LANGUAGE QuasiQuotes #-} module BNFC.License where import Data.String.QQ license :: String license = [s| Copyright 2002-2020 Andreas Abel, Jonas Almström Duregård, Krasimir Angelov, Jean-Philippe Bernardy, Björn Bringert, Johan Broberg, Kyle Butt, Paul Callaghan, Grégoire Détrez, Markus Forsberg, Ola Frid, Peter Gammie, Thomas Hallgren, Pascal Hof, Simon Huber, Patrik Jansson, Kristofer Johannisson, Antti-Juhani Kaijanaho, Andreas Lööw, Justin Meiners, Kent Mein, Ulf Norell, Gabriele Paganelli, Michael Pellauer, Fabian Ruch, and Aarne Ranta. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. 
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. |] BNFC-2.9.5/src/BNFC/Options.hs0000644000000000000000000006223107346545000013722 0ustar0000000000000000{-# LANGUAGE CPP #-} {-# LANGUAGE FlexibleInstances#-} {-# LANGUAGE LambdaCase #-} {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE TupleSections #-} {-# OPTIONS_GHC -fno-warn-orphans #-} module BNFC.Options ( Mode(..), Target(..), Backend , parseMode, usage, help, versionString , SharedOptions(..) , defaultOptions, isDefault, printOptions , AlexVersion(..), HappyMode(..), OCamlParser(..), JavaLexerParser(..) , RecordPositions(..), TokenText(..) , Ansi(..) , InPackage , removedIn290 , translateOldOptions ) where import qualified Control.Monad as Ctrl import Control.Monad.Writer (WriterT, runWriterT, tell) import Control.Monad.Except (MonadError(..)) import Data.Bifunctor import Data.Either (partitionEithers) import qualified Data.Map as Map -- import qualified Data.List as List import Data.Maybe (fromMaybe, maybeToList) #if !MIN_VERSION_base(4,11,0) import Data.Semigroup (Semigroup(..)) -- for ghc 7.10 - 8.2 #endif import Data.Version (showVersion ) import System.Console.GetOpt import System.FilePath (takeBaseName) import Text.Printf (printf) import Paths_BNFC (version) import BNFC.CF (CF) import BNFC.Utils (unless) -- ~~~ Option data structures ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- | To decouple the option parsing from the execution of the program, -- we introduce a data structure that holds the result of the -- parsing of the arguments. data Mode -- An error has been made by the user -- e.g. invalid argument/combination of arguments = UsageError String -- Basic modes: print some info and exits | Help | License | Version -- Normal mode, specifying the back end to use, -- the option record to be passed to the backend -- and the path of the input grammar file | Target SharedOptions FilePath deriving (Eq,Show,Ord) -- | Target languages data Target = TargetC | TargetCpp | TargetCppNoStl | TargetHaskell | TargetHaskellGadt | TargetLatex | TargetJava | TargetOCaml | TargetPygments | TargetCheck deriving (Eq, Bounded, Enum, Ord) -- | List of Haskell target. 
haskellTargets :: [Target] haskellTargets = [ TargetHaskell, TargetHaskellGadt ] instance Show Target where show TargetC = "C" show TargetCpp = "C++" show TargetCppNoStl = "C++ (without STL)" show TargetHaskell = "Haskell" show TargetHaskellGadt = "Haskell (with GADT)" show TargetLatex = "Latex" show TargetJava = "Java" show TargetOCaml = "OCaml" show TargetPygments = "Pygments" show TargetCheck = "Check LBNF file" -- | Which version of Alex is targeted? data AlexVersion = Alex3 deriving (Show,Eq,Ord,Bounded,Enum) -- | Happy modes data HappyMode = Standard | GLR deriving (Eq,Show,Bounded,Enum,Ord) -- | Which parser generator for ocaml? data OCamlParser = OCamlYacc | Menhir deriving (Eq,Show,Ord) -- | Which Java backend? data JavaLexerParser = JLexCup | JFlexCup | Antlr4 deriving (Eq,Show,Ord) -- | Line numbers or not? data RecordPositions = RecordPositions | NoRecordPositions deriving (Eq,Show,Ord) -- | Restrict to ANSI standard (C/C++)? data Ansi = Ansi | BeyondAnsi deriving (Eq, Ord, Show) -- | Package name (C++ and Java backends). type InPackage = Maybe String -- | How to represent token content in the Haskell backend? data TokenText = StringToken -- ^ Represent strings as @String@. | ByteStringToken -- ^ Represent strings as @ByteString@. | TextToken -- ^ Represent strings as @Data.Text@. deriving (Eq, Ord, Show) -- | This is the option record that is passed to the different backends. data SharedOptions = Options --- Option shared by at least 2 backends { lbnfFile :: FilePath -- ^ The input file BNFC processes. , lang :: String -- ^ The language we generate: the basename of 'lbnfFile'. , outDir :: FilePath -- ^ Target directory for generated files. , force :: Bool -- ^ Ignore errors as much as possible? , target :: Target -- ^ E.g. @--haskell@. , optMake :: Maybe String -- ^ The name of the Makefile to generate or Nothing for no Makefile. , inPackage :: InPackage -- ^ The hierarchical package to put the modules in, or Nothing. , linenumbers :: RecordPositions -- ^ Add and set line_number field for syntax classes , ansi :: Ansi -- ^ Restrict to the ANSI language standard (C/C++)? --- Haskell specific: , inDir :: Bool -- ^ Option @-d@. , functor :: Bool -- ^ Option @--functor@. Make AST functorial? , generic :: Bool -- ^ Option @--generic@. Derive Data, Generic, Typeable? , alexMode :: AlexVersion -- ^ Options @--alex@. , tokenText :: TokenText -- ^ Options @--bytestrings@, @--string-token@, and @--text-token@. , glr :: HappyMode -- ^ Happy option @--glr@. , xml :: Int -- ^ Options @--xml@, generate DTD and XML printers. , agda :: Bool -- ^ Option @--agda@. Create bindings for Agda? --- OCaml specific , ocamlParser :: OCamlParser -- ^ Option @--menhir@ to switch to @Menhir@. --- Java specific , javaLexerParser :: JavaLexerParser --- C# specific , visualStudio :: Bool -- ^ Generate Visual Studio solution/project files. , wcf :: Bool -- ^ Windows Communication Foundation. } deriving (Eq, Ord, Show) -- We take this opportunity to define the type of the backend functions. type Backend = SharedOptions -- ^ Options -> CF -- ^ Grammar -> IO () defaultOptions :: SharedOptions defaultOptions = Options { lbnfFile = error "lbnfFile not set" , lang = error "lang not set" , outDir = "." 
, force = False , target = TargetHaskell , optMake = Nothing , inPackage = Nothing , linenumbers = NoRecordPositions , ansi = BeyondAnsi -- Haskell specific , inDir = False , functor = False , generic = False , alexMode = Alex3 , tokenText = StringToken , glr = Standard , xml = 0 , agda = False -- OCaml specific , ocamlParser = OCamlYacc -- Java specific , javaLexerParser = JLexCup -- C# specific , visualStudio = False , wcf = False } -- | Check whether an option is unchanged from the default. isDefault :: (Eq a) => (SharedOptions -> a) -- ^ Option field name. -> SharedOptions -- ^ Options. -> Bool isDefault flag opts = flag opts == flag defaultOptions -- | Return something in case option differs from default. unlessDefault :: (Monoid m, Eq a) => (SharedOptions -> a) -- ^ Option field name. -> SharedOptions -- ^ Options. -> (a -> m) -- ^ Action in case option differs from standard. -> m unlessDefault flag opts f = unless (o == flag defaultOptions) $ f o where o = flag opts -- -- | Return something in case option is unchanged from default. -- whenDefault :: (Monoid m, Eq a) -- => (SharedOptions -> a) -- ^ Option field name. -- -> SharedOptions -- ^ Options. -- -> m -- ^ Action in case option is unchanged from standard. -- -> m -- whenDefault flag opts m = when (o == flag defaultOptions) m -- where o = flag opts -- | Print options as input to BNFC. -- -- @unwords [ "bnfc", printOptions opts ]@ should call bnfc with the same options -- as the current instance. -- printOptions :: SharedOptions -> String printOptions opts = unwords . concat $ [ [ printTargetOption tgt ] -- General and shared options: , unlessDefault outDir opts $ \ o -> [ "--outputdir=" ++ o ] , [ "--makefile=" ++ m | m <- maybeToList $ optMake opts ] , [ "-p " ++ p | p <- maybeToList $ inPackage opts ] , unlessDefault linenumbers opts $ const [ "-l" ] , unlessDefault ansi opts $ const [ "--ansi" ] -- Haskell options: , [ "-d" | inDir opts ] , [ "--functor" | functor opts ] , [ "--generic" | generic opts ] , unlessDefault alexMode opts $ \ o -> [ printAlexOption o ] , [ "--bytestrings" | tokenText opts == ByteStringToken ] , [ "--text-token" | tokenText opts == TextToken, not (agda opts) ] -- default for --agda , [ "--string-token" | tokenText opts == StringToken, agda opts ] -- default unless --agda , [ "--glr" | glr opts == GLR ] , [ "--xml" | xml opts == 1 ] , [ "--xmlt" | xml opts == 2 ] , [ "--agda" | agda opts ] -- C# options: , [ "--vs" | visualStudio opts ] , [ "--wfc" | wcf opts ] -- Java options: , unlessDefault javaLexerParser opts $ \ o -> [ printJavaLexerParserOption o ] -- Java options: , unlessDefault ocamlParser opts $ \ o -> [ printOCamlParserOption o ] -- Grammar file: , [ lbnfFile opts ] ] where tgt = target opts -- haskell = tgt `elem` haskellTargets -- | Print target as an option to BNFC. printTargetOption :: Target -> String printTargetOption = ("--" ++) . \case TargetC -> "c" TargetCpp -> "cpp" TargetCppNoStl -> "cpp-nostl" TargetHaskell -> "haskell" TargetHaskellGadt -> "haskell-gadt" TargetLatex -> "latex" TargetJava -> "java" TargetOCaml -> "ocaml" TargetPygments -> "pygments" TargetCheck -> "check" printAlexOption :: AlexVersion -> String printAlexOption = ("--" ++) . \case Alex3 -> "alex3" printJavaLexerParserOption :: JavaLexerParser -> String printJavaLexerParserOption = ("--" ++) . \case JLexCup -> "jlex" JFlexCup -> "jflex" Antlr4 -> "antlr4" printOCamlParserOption :: OCamlParser -> String printOCamlParserOption = ("--" ++) . 
\case OCamlYacc -> "yacc" Menhir -> "menhir" -- ~~~ Option definition ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- This defines bnfc's "global" options, like --help globalOptions :: [ OptDescr Mode ] globalOptions = [ Option [] ["help"] (NoArg Help) "show help", Option [] ["license"] (NoArg License) "show license", Option [] ["version","numeric-version"] (NoArg Version) "show version number"] -- | Options for the target languages -- targetOptions :: [ OptDescr Target ] targetOptions :: [ OptDescr (SharedOptions -> SharedOptions)] targetOptions = [ Option "" ["java"] (NoArg (\o -> o {target = TargetJava})) "Output Java code [default: for use with JLex and CUP]" , Option "" ["java-antlr"] (NoArg (\ o -> o{ target = TargetJava, javaLexerParser = Antlr4 })) "Output Java code for use with ANTLR (short for --java --antlr)" , Option "" ["haskell"] (NoArg (\o -> o {target = TargetHaskell})) "Output Haskell code for use with Alex and Happy (default)" , Option "" ["haskell-gadt"] (NoArg (\o -> o {target = TargetHaskellGadt})) "Output Haskell code which uses GADTs" , Option "" ["latex"] (NoArg (\o -> o {target = TargetLatex})) "Output LaTeX code to generate a PDF description of the language" , Option "" ["c"] (NoArg (\o -> o {target = TargetC})) "Output C code for use with FLex and Bison" , Option "" ["cpp"] (NoArg (\o -> o {target = TargetCpp})) "Output C++ code for use with FLex and Bison" , Option "" ["cpp-nostl"] (NoArg (\o -> o {target = TargetCppNoStl})) "Output C++ code (without STL) for use with FLex and Bison" , Option "" ["ocaml"] (NoArg (\o -> o {target = TargetOCaml})) "Output OCaml code for use with ocamllex and ocamlyacc" , Option "" ["ocaml-menhir"] (NoArg (\ o -> o{ target = TargetOCaml, ocamlParser = Menhir })) "Output OCaml code for use with ocamllex and menhir (short for --ocaml --menhir)" , Option "" ["pygments"] (NoArg (\o -> o {target = TargetPygments})) "Output a Python lexer for Pygments" , Option "" ["check"] (NoArg (\ o -> o{target = TargetCheck })) "No output. Just check input LBNF file" ] -- | A list of the options and for each of them, the target language -- they apply to. 
specificOptions :: [(OptDescr (SharedOptions -> SharedOptions), [Target])] specificOptions = [ ( Option ['l'] ["line-numbers"] (NoArg (\o -> o {linenumbers = RecordPositions})) $ unlines [ "Add and set line_number field for all syntax classes" , "(Note: Java requires cup version 0.11b-2014-06-11 or greater.)" ] , [TargetC, TargetCpp, TargetJava] ) , ( Option [] ["ansi"] (NoArg (\o -> o{ ansi = Ansi })) $ unlines [ "Restrict to ANSI language standard" ] , [TargetCpp] ) -- In the future maybe also: TargetC , ( Option ['p'] ["name-space"] (ReqArg (\n o -> o {inPackage = Just n}) "NAMESPACE") "Prepend NAMESPACE to the package/module name" , [TargetCpp, TargetJava] ++ haskellTargets) -- Java backend: , ( Option [] ["jlex" ] (NoArg (\o -> o {javaLexerParser = JLexCup})) "Lex with JLex, parse with CUP (default)" , [TargetJava] ) , ( Option [] ["jflex" ] (NoArg (\o -> o {javaLexerParser = JFlexCup})) "Lex with JFlex, parse with CUP" , [TargetJava] ) , ( Option [] ["antlr4"] (NoArg (\o -> o {javaLexerParser = Antlr4})) "Lex and parse with antlr4" , [TargetJava] ) -- OCaml backend: , ( Option [] ["yacc" ] (NoArg (\ o -> o { ocamlParser = OCamlYacc })) "Generate parser with ocamlyacc (default)" , [TargetOCaml] ) , ( Option [] ["menhir"] (NoArg (\ o -> o { ocamlParser = Menhir })) "Generate parser with menhir" , [TargetOCaml] ) -- Haskell backends: , ( Option ['d'] [] (NoArg (\o -> o {inDir = True})) "Put Haskell code in modules LANG.* instead of LANG* (recommended)" , haskellTargets ) -- -- Option --alex3 is obsolete since Alex 3 is the only choice now. -- -- Keep this in case there will be a new lexer backend for Haskell. -- , ( Option [] ["alex3"] (NoArg (\o -> o {alexMode = Alex3})) -- "Use Alex 3 as Haskell lexer tool (default)" -- , haskellTargets ) , ( Option [] ["bytestrings"] (NoArg (\o -> o { tokenText = ByteStringToken })) "Use ByteString in Alex lexer [deprecated, use --text-token]" , haskellTargets ) , ( Option [] ["text-token"] (NoArg (\o -> o { tokenText = TextToken })) "Use Text in Alex lexer" -- "Use Text in Alex lexer (default for --agda)" , haskellTargets ) , ( Option [] ["string-token"] (NoArg (\o -> o { tokenText = StringToken })) "Use String in Alex lexer (default)" , haskellTargets ) , ( Option [] ["glr"] (NoArg (\o -> o {glr = GLR})) "Output Happy GLR parser [deprecated]" , haskellTargets ) , ( Option [] ["functor"] (NoArg (\o -> o {functor = True})) "Make the AST a functor and use it to store the position of the nodes" , haskellTargets ) , ( Option [] ["generic"] (NoArg (\o -> o {generic = True})) "Derive Data, Generic, and Typeable instances for AST types" , haskellTargets ) , ( Option [] ["xml"] (NoArg (\o -> o {xml = 1})) "Also generate a DTD and an XML printer" , haskellTargets ) , ( Option [] ["xmlt"] (NoArg (\o -> o {xml = 2})) "DTD and an XML printer, another encoding" , haskellTargets ) -- Agda does not support the GADT syntax , ( Option [] ["agda"] (NoArg (\o -> o { agda = True, tokenText = TextToken })) "Also generate Agda bindings for the abstract syntax" , [TargetHaskell] ) ] -- | The list of specific options for a target. specificOptions' :: Target -> [OptDescr (SharedOptions -> SharedOptions)] specificOptions' t = map fst $ filter (elem t . snd) specificOptions commonOptions :: [OptDescr (SharedOptions -> SharedOptions)] commonOptions = [ Option "m" ["makefile"] (OptArg (setMakefile . 
fromMaybe "Makefile") "MAKEFILE") "generate Makefile" , Option "o" ["outputdir"] (ReqArg (\n o -> o {outDir = n}) "DIR") "Redirects all generated files into DIR" , Option "" ["force"] (NoArg (\ o -> o { force = True })) "Ignore errors in the grammar (may produce ill-formed output or crash)" ] where setMakefile mf o = o { optMake = Just mf } allOptions :: [OptDescr (SharedOptions -> SharedOptions)] allOptions = targetOptions ++ commonOptions ++ map fst specificOptions -- | All target options and all specific options for a given target. allOptions' :: Target -> [OptDescr (SharedOptions -> SharedOptions)] allOptions' t = targetOptions ++ commonOptions ++ specificOptions' t -- ~~~ Help strings ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ versionString :: String versionString = showVersion version title :: [String] title = [ "The BNF Converter, " ++ versionString ++ " (c) 2002-today BNFC development team." , "Free software under the BSD 3-clause license." , "List of recent contributors at https://github.com/BNFC/bnfc/graphs/contributors." , "Report bugs at https://github.com/BNFC/bnfc/issues." , "" ] -- oldContributors :: [String] -- oldContributors = -- [ "(c) Jonas Almström Duregård, Krasimir Angelov, Jean-Philippe Bernardy, Björn Bringert, Johan Broberg, Paul Callaghan, " -- , " Grégoire Détrez, Markus Forsberg, Ola Frid, Peter Gammie, Thomas Hallgren, Patrik Jansson, " -- , " Kristofer Johannisson, Antti-Juhani Kaijanaho, Ulf Norell, " -- , " Michael Pellauer and Aarne Ranta 2002 - 2013." -- ] usage :: String usage = unlines [ "usage: bnfc [--TARGET] [OPTIONS] LANG.cf" , " or: bnfc --[numeric-]version" , " or: bnfc [--license]" , " or: bnfc [--help]" ] help :: String help = unlines $ title ++ [ usage , usageInfo "Global options" globalOptions , usageInfo "Common options" commonOptions , usageInfo "TARGET languages" targetOptions ] ++ map targetUsage helpTargets where helpTargets = [ TargetHaskell, TargetJava, TargetC, TargetCpp ] targetUsage t = usageInfo (printf "Special options for the %s backend" (show t)) (specificOptions' t) -- ~~~ Parsing machinery ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- | Main parsing function parseMode :: [String] -> (Mode, UsageWarnings) parseMode args = case runWriterT $ parseMode' =<< translateOldOptions args of Left err -> (UsageError err, []) Right res -> res type ParseOpt = WriterT UsageWarnings (Either String) type UsageWarnings = [String] instance {-# OVERLAPPING #-} Semigroup (ParseOpt ()) where (<>) = (>>) instance {-# OVERLAPPING #-} Monoid (ParseOpt ()) where mempty = pure (); mappend = (<>) parseMode' :: [String] -> ParseOpt Mode parseMode' [] = return Help parseMode' args = -- First, check for global options like --help or --version case getOpt' Permute globalOptions args of (mode:_,_,_,_) -> return mode -- Then, check for unrecognized options. _ -> do let (_, _, unknown, _) = getOpt' Permute allOptions args processUnknownOptions unknown -- Then, determine target language. case getOpt' Permute targetOptions args of -- ([] ,_,_,_) -> usageError "No target selected" -- --haskell is default target (_:_:_,_,_,_) -> usageError "At most one target is allowed" -- Finally, parse options with known target. 
(optionUpdates,_,_,_) -> do let tgt = target (options optionUpdates) case getOpt' Permute (allOptions' tgt) args of (_, _, _, e:_) -> usageError e (_, _, [u], _) -> usageError $ unwords $ [ "Backend", show tgt, "does not support option", u ] (_, _, us@(_:_), _) -> usageError $ unwords $ [ "Backend", show tgt, "does not support options" ] ++ us (_, [], _, _) -> usageError "Missing grammar file" (optionsUpdates, [grammarFile], [], []) -> do let opts = (options optionsUpdates) { lbnfFile = grammarFile , lang = takeBaseName grammarFile } warnDeprecatedBackend tgt warnDeprecatedOptions opts return $ Target opts grammarFile (_, _, _, _) -> usageError "Too many arguments" where options optionsUpdates = foldl (.) id optionsUpdates defaultOptions usageError = return . UsageError -- * Deprecation class Maintained a where maintained :: a -> Bool printFeature :: a -> String instance Maintained Target where printFeature = printTargetOption maintained = \case TargetC -> True TargetCpp -> True TargetCppNoStl -> True TargetHaskell -> True TargetHaskellGadt -> True TargetLatex -> True TargetJava -> True TargetOCaml -> True TargetPygments -> True TargetCheck -> True instance Maintained AlexVersion where printFeature = printAlexOption maintained = \case Alex3 -> True instance Maintained HappyMode where printFeature = \case Standard -> undefined GLR -> "--glr" maintained = \case Standard -> True GLR -> False warnDeprecatedBackend :: Maintained a => a -> ParseOpt () warnDeprecatedBackend backend = Ctrl.unless (maintained backend) $ warnDeprecated $ unwords [ "backend", printFeature backend ] warnDeprecated :: String -> ParseOpt () warnDeprecated feature = tell [ unwords [ "Warning:", feature, "is deprecated and no longer maintained." ] -- , "Should it be broken, try an older version of BNFC." ] warnDeprecatedOptions :: SharedOptions -> ParseOpt () warnDeprecatedOptions Options{..} = do warnDeprecatedBackend alexMode warnDeprecatedBackend glr -- * Backward compatibility -- | Produce a warning for former options that are now obsolete. -- Throw an error for properly unknown options. -- -- Note: this only works properly for former options that had no arguments. processUnknownOptions :: [String] -> ParseOpt () processUnknownOptions os = do -- Classify unknown options. let cl = map (\ o -> bimap (bimap (o,) (o,)) (o,) $ classifyUnknownOption o) os let (errs, obsolete) = partitionEithers cl -- Print warnings about obsolete options. case map (\ (o, ObsoleteOption) -> o) obsolete of [] -> pure () os@[_] -> tell [ unwords $ "Warning: ignoring obsolete option:" : os ] os@(_:_) -> tell [ unwords $ "Warning: ignoring obsolete options:" : os ] -- Throw errors. unless (null errs) $ do let (unknown, removed) = partitionEithers errs throwError $ unlines $ concat [ [ "Option error(s):" ] , case map (\ (o, UnknownOption) -> o) unknown of [] -> [] us@[_] -> [ unwords $ "Unrecognized option:" : us ] us@(_:_) -> [ unwords $ "Unrecognized options:" : us ] , map (\ (o, RemovedOption msg) -> concat [ o, ": ", msg ]) removed ] -- | Option has never been known. data UnknownOption = UnknownOption -- | Option is obsolete, print warning and continue. data ObsoleteOption = ObsoleteOption -- | Error: Option has been removed, throw error with given message. 
newtype RemovedOption = RemovedOption String classifyUnknownOption :: String -> Either (Either UnknownOption RemovedOption) ObsoleteOption classifyUnknownOption = \case "--alex1" -> supportRemovedIn290 $ "Alex version 1" "--alex2" -> supportRemovedIn290 $ "Alex version 2" "--alex3" -> obsolete s@"--sharestrings" -> optionRemovedIn290 s s@"--cnf" -> optionRemovedIn290 s "--csharp" -> supportRemovedIn290 "C#" "--profile" -> supportRemovedIn290 "permutation profiles" _ -> unknown where unknown = Left $ Left UnknownOption obsolete = Right ObsoleteOption removed = Left . Right . RemovedOption supportRemovedIn290 feature = removed $ unwords [ "Support for", feature, removedIn290 ] optionRemovedIn290 o = removed $ unwords [ "Option", o, removedIn290 ] removedIn290 :: String removedIn290 = "has been removed in version 2.9.0." -- | A translation function to maintain backward compatibility -- with the old option syntax. translateOldOptions :: [String] -> ParseOpt [String] translateOldOptions = mapM $ \ o -> do case Map.lookup o translation of Nothing -> return o Just o' -> do tell [ unwords [ "Warning: unrecognized option", o, "treated as if", o', "was provided." ] ] return o' where translation = Map.fromList $ [ ("-agda" , "--agda") , ("-java" , "--java") , ("-java1.5" , "--java") , ("-c" , "--c") , ("-cpp" , "--cpp") , ("-cpp_stl" , "--cpp") , ("-cpp_no_stl" , "--cpp-nostl") , ("-csharp" , "--csharp") , ("-ocaml" , "--ocaml") , ("-haskell" , "--haskell") , ("-prof" , "--profile") , ("-gadt" , "--haskell-gadt") , ("-alex1" , "--alex1") , ("-alex2" , "--alex2") , ("-alex3" , "--alex3") , ("-sharestrings" , "--sharestrings") , ("-bytestrings" , "--bytestrings") , ("-glr" , "--glr") , ("-xml" , "--xml") , ("-xmlt" , "--xmlt") , ("-vs" , "--vs") , ("-wcf" , "--wcf") , ("-generic" , "--generic") , ("--ghc" , "--generic") , ("--deriveGeneric" , "--generic") , ("--deriveDataTypeable" , "--generic") ] BNFC-2.9.5/src/BNFC/Par.hs0000644000000000000000000023432407346545000013015 0ustar0000000000000000{-# OPTIONS_GHC -w #-} {-# LANGUAGE CPP #-} {-# LANGUAGE MagicHash #-} {-# LANGUAGE BangPatterns #-} {-# LANGUAGE TypeSynonymInstances #-} {-# LANGUAGE FlexibleInstances #-} #if __GLASGOW_HASKELL__ >= 710 {-# LANGUAGE PartialTypeSignatures #-} #endif {-# OPTIONS_GHC -fno-warn-incomplete-patterns -fno-warn-overlapping-patterns #-} {-# LANGUAGE PatternSynonyms #-} module BNFC.Par ( happyError , myLexer , pGrammar , pListDef , pDef , pItem , pListItem , pCat , pListCat , pLabel , pArg , pListArg , pSeparation , pListString , pExp , pExp1 , pExp2 , pListExp , pListExp2 , pRHS , pListRHS , pMinimumSize , pReg , pReg1 , pReg2 , pReg3 ) where import Prelude import qualified BNFC.Abs import BNFC.Lex import qualified Data.Array as Happy_Data_Array import qualified Data.Bits as Bits import qualified GHC.Exts as Happy_GHC_Exts import Control.Applicative(Applicative(..)) import Control.Monad (ap) -- parser produced by Happy Version 1.21.0 newtype HappyAbsSyn = HappyAbsSyn HappyAny #if __GLASGOW_HASKELL__ >= 607 type HappyAny = Happy_GHC_Exts.Any #else type HappyAny = forall a . 
a #endif newtype HappyWrap27 = HappyWrap27 (Char) happyIn27 :: (Char) -> (HappyAbsSyn ) happyIn27 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap27 x) {-# INLINE happyIn27 #-} happyOut27 :: (HappyAbsSyn ) -> HappyWrap27 happyOut27 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut27 #-} newtype HappyWrap28 = HappyWrap28 (Double) happyIn28 :: (Double) -> (HappyAbsSyn ) happyIn28 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap28 x) {-# INLINE happyIn28 #-} happyOut28 :: (HappyAbsSyn ) -> HappyWrap28 happyOut28 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut28 #-} newtype HappyWrap29 = HappyWrap29 (Integer) happyIn29 :: (Integer) -> (HappyAbsSyn ) happyIn29 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap29 x) {-# INLINE happyIn29 #-} happyOut29 :: (HappyAbsSyn ) -> HappyWrap29 happyOut29 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut29 #-} newtype HappyWrap30 = HappyWrap30 (String) happyIn30 :: (String) -> (HappyAbsSyn ) happyIn30 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap30 x) {-# INLINE happyIn30 #-} happyOut30 :: (HappyAbsSyn ) -> HappyWrap30 happyOut30 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut30 #-} newtype HappyWrap31 = HappyWrap31 (BNFC.Abs.Identifier) happyIn31 :: (BNFC.Abs.Identifier) -> (HappyAbsSyn ) happyIn31 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap31 x) {-# INLINE happyIn31 #-} happyOut31 :: (HappyAbsSyn ) -> HappyWrap31 happyOut31 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut31 #-} newtype HappyWrap32 = HappyWrap32 (BNFC.Abs.Grammar) happyIn32 :: (BNFC.Abs.Grammar) -> (HappyAbsSyn ) happyIn32 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap32 x) {-# INLINE happyIn32 #-} happyOut32 :: (HappyAbsSyn ) -> HappyWrap32 happyOut32 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut32 #-} newtype HappyWrap33 = HappyWrap33 ([BNFC.Abs.Def]) happyIn33 :: ([BNFC.Abs.Def]) -> (HappyAbsSyn ) happyIn33 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap33 x) {-# INLINE happyIn33 #-} happyOut33 :: (HappyAbsSyn ) -> HappyWrap33 happyOut33 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut33 #-} newtype HappyWrap34 = HappyWrap34 (BNFC.Abs.Def) happyIn34 :: (BNFC.Abs.Def) -> (HappyAbsSyn ) happyIn34 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap34 x) {-# INLINE happyIn34 #-} happyOut34 :: (HappyAbsSyn ) -> HappyWrap34 happyOut34 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut34 #-} newtype HappyWrap35 = HappyWrap35 (BNFC.Abs.Item) happyIn35 :: (BNFC.Abs.Item) -> (HappyAbsSyn ) happyIn35 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap35 x) {-# INLINE happyIn35 #-} happyOut35 :: (HappyAbsSyn ) -> HappyWrap35 happyOut35 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut35 #-} newtype HappyWrap36 = HappyWrap36 ([BNFC.Abs.Item]) happyIn36 :: ([BNFC.Abs.Item]) -> (HappyAbsSyn ) happyIn36 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap36 x) {-# INLINE happyIn36 #-} happyOut36 :: (HappyAbsSyn ) -> HappyWrap36 happyOut36 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut36 #-} newtype HappyWrap37 = HappyWrap37 (BNFC.Abs.Cat) happyIn37 :: (BNFC.Abs.Cat) -> (HappyAbsSyn ) happyIn37 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap37 x) {-# INLINE happyIn37 #-} happyOut37 :: (HappyAbsSyn ) -> HappyWrap37 happyOut37 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut37 #-} newtype HappyWrap38 = HappyWrap38 ([BNFC.Abs.Cat]) happyIn38 :: ([BNFC.Abs.Cat]) -> (HappyAbsSyn ) happyIn38 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap38 x) {-# INLINE happyIn38 #-} happyOut38 :: (HappyAbsSyn ) -> HappyWrap38 happyOut38 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut38 #-} newtype 
HappyWrap39 = HappyWrap39 (BNFC.Abs.Label) happyIn39 :: (BNFC.Abs.Label) -> (HappyAbsSyn ) happyIn39 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap39 x) {-# INLINE happyIn39 #-} happyOut39 :: (HappyAbsSyn ) -> HappyWrap39 happyOut39 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut39 #-} newtype HappyWrap40 = HappyWrap40 (BNFC.Abs.Arg) happyIn40 :: (BNFC.Abs.Arg) -> (HappyAbsSyn ) happyIn40 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap40 x) {-# INLINE happyIn40 #-} happyOut40 :: (HappyAbsSyn ) -> HappyWrap40 happyOut40 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut40 #-} newtype HappyWrap41 = HappyWrap41 ([BNFC.Abs.Arg]) happyIn41 :: ([BNFC.Abs.Arg]) -> (HappyAbsSyn ) happyIn41 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap41 x) {-# INLINE happyIn41 #-} happyOut41 :: (HappyAbsSyn ) -> HappyWrap41 happyOut41 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut41 #-} newtype HappyWrap42 = HappyWrap42 (BNFC.Abs.Separation) happyIn42 :: (BNFC.Abs.Separation) -> (HappyAbsSyn ) happyIn42 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap42 x) {-# INLINE happyIn42 #-} happyOut42 :: (HappyAbsSyn ) -> HappyWrap42 happyOut42 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut42 #-} newtype HappyWrap43 = HappyWrap43 ([String]) happyIn43 :: ([String]) -> (HappyAbsSyn ) happyIn43 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap43 x) {-# INLINE happyIn43 #-} happyOut43 :: (HappyAbsSyn ) -> HappyWrap43 happyOut43 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut43 #-} newtype HappyWrap44 = HappyWrap44 (BNFC.Abs.Exp) happyIn44 :: (BNFC.Abs.Exp) -> (HappyAbsSyn ) happyIn44 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap44 x) {-# INLINE happyIn44 #-} happyOut44 :: (HappyAbsSyn ) -> HappyWrap44 happyOut44 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut44 #-} newtype HappyWrap45 = HappyWrap45 (BNFC.Abs.Exp) happyIn45 :: (BNFC.Abs.Exp) -> (HappyAbsSyn ) happyIn45 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap45 x) {-# INLINE happyIn45 #-} happyOut45 :: (HappyAbsSyn ) -> HappyWrap45 happyOut45 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut45 #-} newtype HappyWrap46 = HappyWrap46 (BNFC.Abs.Exp) happyIn46 :: (BNFC.Abs.Exp) -> (HappyAbsSyn ) happyIn46 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap46 x) {-# INLINE happyIn46 #-} happyOut46 :: (HappyAbsSyn ) -> HappyWrap46 happyOut46 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut46 #-} newtype HappyWrap47 = HappyWrap47 ([BNFC.Abs.Exp]) happyIn47 :: ([BNFC.Abs.Exp]) -> (HappyAbsSyn ) happyIn47 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap47 x) {-# INLINE happyIn47 #-} happyOut47 :: (HappyAbsSyn ) -> HappyWrap47 happyOut47 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut47 #-} newtype HappyWrap48 = HappyWrap48 ([BNFC.Abs.Exp]) happyIn48 :: ([BNFC.Abs.Exp]) -> (HappyAbsSyn ) happyIn48 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap48 x) {-# INLINE happyIn48 #-} happyOut48 :: (HappyAbsSyn ) -> HappyWrap48 happyOut48 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut48 #-} newtype HappyWrap49 = HappyWrap49 (BNFC.Abs.RHS) happyIn49 :: (BNFC.Abs.RHS) -> (HappyAbsSyn ) happyIn49 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap49 x) {-# INLINE happyIn49 #-} happyOut49 :: (HappyAbsSyn ) -> HappyWrap49 happyOut49 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut49 #-} newtype HappyWrap50 = HappyWrap50 ([BNFC.Abs.RHS]) happyIn50 :: ([BNFC.Abs.RHS]) -> (HappyAbsSyn ) happyIn50 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap50 x) {-# INLINE happyIn50 #-} happyOut50 :: (HappyAbsSyn ) -> HappyWrap50 happyOut50 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut50 
#-} newtype HappyWrap51 = HappyWrap51 (BNFC.Abs.MinimumSize) happyIn51 :: (BNFC.Abs.MinimumSize) -> (HappyAbsSyn ) happyIn51 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap51 x) {-# INLINE happyIn51 #-} happyOut51 :: (HappyAbsSyn ) -> HappyWrap51 happyOut51 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut51 #-} newtype HappyWrap52 = HappyWrap52 (BNFC.Abs.Reg) happyIn52 :: (BNFC.Abs.Reg) -> (HappyAbsSyn ) happyIn52 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap52 x) {-# INLINE happyIn52 #-} happyOut52 :: (HappyAbsSyn ) -> HappyWrap52 happyOut52 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut52 #-} newtype HappyWrap53 = HappyWrap53 (BNFC.Abs.Reg) happyIn53 :: (BNFC.Abs.Reg) -> (HappyAbsSyn ) happyIn53 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap53 x) {-# INLINE happyIn53 #-} happyOut53 :: (HappyAbsSyn ) -> HappyWrap53 happyOut53 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut53 #-} newtype HappyWrap54 = HappyWrap54 (BNFC.Abs.Reg) happyIn54 :: (BNFC.Abs.Reg) -> (HappyAbsSyn ) happyIn54 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap54 x) {-# INLINE happyIn54 #-} happyOut54 :: (HappyAbsSyn ) -> HappyWrap54 happyOut54 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut54 #-} newtype HappyWrap55 = HappyWrap55 (BNFC.Abs.Reg) happyIn55 :: (BNFC.Abs.Reg) -> (HappyAbsSyn ) happyIn55 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap55 x) {-# INLINE happyIn55 #-} happyOut55 :: (HappyAbsSyn ) -> HappyWrap55 happyOut55 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut55 #-} happyInTok :: (Token) -> (HappyAbsSyn ) happyInTok x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyInTok #-} happyOutTok :: (HappyAbsSyn ) -> (Token) happyOutTok x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOutTok #-} happyExpList :: HappyAddr happyExpList = HappyA# "\x00\x00\x00\x00\x00\x00\x80\x00\xa9\xd7\xb8\x01\x04\x00\x00\x00\x00\x00\x00\x08\x90\x7a\x8d\x1b\x40\x00\x00\x00\x00\x00\x00\x80\x00\xa8\xd7\xb8\x01\x04\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\x00\x60\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x08\x80\x02\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xa0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x80\x00\x08\x00\x00\xc0\x07\x00\x00\x00\x00\x00\x00\x08\x80\x00\x00\x00\x7c\x00\x00\x00\x00\x00\x00\x80\x00\x08\x00\x00\xc0\x07\x00\x00\x00\x00\x00\x00\x08\x80\x00\x00\x00\x7c\x00\x00\x00\x00\x00\x00\x80\x00\x08\x00\x00\xc0\x07\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\x00\x60\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x00\x80\x00\x48\x28\x03\x4c\x00\x00\x00\x00\x00\x00\x00\x08\x80\x84\x32\xc0\x04\x00\x00\x00\x00\x00\x00\x80\x00\x48\x28\x03\x4c\x00\x00\x00\x00\x00\x00\x00\x08\x80\x84\x32\xc0\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x60\x40\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80\x00\x48\x28\x03\x4c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0
0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x08\x80\x84\x32\xc0\x04\x00\x00\x00\x00\x00\x00\x00\x06\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80\x00\x48\x28\x03\x4c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\x00\x60\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x80\x00\x00\x00\x7c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x80\x00\x00\x00\x7c\x00\x00\x00\x00\x00\x00\x80\x00\x08\x00\x00\xc0\x07\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x80\x00\x00\x00\x7c\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x08\x80\x02\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x40\x02\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x
00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80\x00\xa9\xd7\xb8\x01\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80\x00\xa9\xd7\xb8\x01\x04\x00\x00\x00\x00\x00\x00\x08\x80\x84\x32\xc0\x04\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x08\x80\x00\x00\x00\x7c\x00\x00\x00\x00\x00\x00\x80\x00\x08\x00\x00\xc0\x07\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80\x00\x48\x28\x03\x4c\x00\x00\x00\x00\x00\x00\x00\x08\x80\x84\x32\xc0\x04\x00\x00\x00\x00\x00\x00\x00\x06\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x80\x84\x32\xc0\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x80\x84\x32\
xc0\x04\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xa0\x00\x00\x00\x00\x00\x00\x00\x00\x08\x80\x00\x00\x00\x7c\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"# {-# NOINLINE happyExpListPerState #-} happyExpListPerState st = token_strs_expected where token_strs = ["error","%dummy","%start_pGrammar","%start_pListDef","%start_pDef","%start_pItem","%start_pListItem","%start_pCat","%start_pListCat","%start_pLabel","%start_pArg","%start_pListArg","%start_pSeparation","%start_pListString","%start_pExp","%start_pExp1","%start_pExp2","%start_pListExp","%start_pListExp2","%start_pRHS","%start_pListRHS","%start_pMinimumSize","%start_pReg","%start_pReg1","%start_pReg2","%start_pReg3","Char","Double","Integer","String","Identifier","Grammar","ListDef","Def","Item","ListItem","Cat","ListCat","Label","Arg","ListArg","Separation","ListString","Exp","Exp1","Exp2","ListExp","ListExp2","RHS","ListRHS","MinimumSize","Reg","Reg1","Reg2","Reg3","'('","')'","'*'","'+'","','","'-'","'.'","':'","'::='","';'","'='","'?'","'['","']'","'_'","'char'","'coercions'","'comment'","'define'","'delimiters'","'digit'","'entrypoints'","'eps'","'internal'","'layout'","'letter'","'lower'","'nonempty'","'position'","'rules'","'separator'","'stop'","'terminator'","'token'","'toplevel'","'upper'","'{'","'|'","'}'","L_charac","L_doubl","L_integ","L_quoted","L_Identifier","%eof"] bit_start = st Prelude.* 100 bit_end = (st Prelude.+ 1) Prelude.* 100 read_bit = readArrayBit happyExpList bits = Prelude.map read_bit [bit_start..bit_end Prelude.- 1] bits_indexed = Prelude.zip bits [0..99] token_strs_expected = Prelude.concatMap f bits_indexed f (Prelude.False, _) = [] f (Prelude.True, nr) = [token_strs Prelude.!! 
nr] happyActOffsets :: HappyAddr happyActOffsets = HappyA# "\x1e\x00\x1e\x00\x40\x00\xf6\xff\xf6\xff\xf4\xff\xf4\xff\x2b\x00\x18\x00\x18\x00\x2f\x00\xe0\xff\x1d\x00\x1d\x00\x1d\x00\x1d\x00\x1d\x00\xf6\xff\xf6\xff\x4c\x00\x59\x00\x59\x00\x59\x00\x59\x00\x45\x00\x00\x00\x00\x00\x06\x00\x59\x00\x46\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x46\x00\x01\x00\xb6\x00\x11\x00\x59\x00\xdf\xff\x7a\x00\x55\x00\x00\x00\x00\x00\x00\x00\xf6\xff\x00\x00\x00\x00\x61\x00\x74\x00\xf4\xff\x00\x00\x00\x00\x74\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x1d\x00\x74\x00\x1d\x00\x1d\x00\x00\x00\x00\x00\x1d\x00\x9e\x00\xb0\x00\x00\x00\x79\x00\x79\x00\x79\x00\x79\x00\xba\x00\xa1\x00\xa1\x00\xac\x00\xac\x00\x00\x00\xb5\x00\xaf\x00\xaf\x00\x00\x00\xaf\x00\xd6\x00\xda\x00\x00\x00\xfb\x00\xe7\x00\xe7\x00\xe7\x00\xe7\x00\xe7\x00\xef\x00\xe9\x00\xfe\x00\x07\x01\xf4\xff\xf4\xff\x2b\x00\x85\x00\x1d\x01\x1b\x01\x36\x01\x36\x01\x30\x01\x31\x01\x67\x01\x1e\x00\x46\x01\x00\x00\x00\x00\x1e\x00\x59\x00\xf4\xff\xf4\xff\x69\x01\x48\x01\x00\x00\x4a\x01\x00\x00\x6f\x01\x00\x00\x4c\x01\x4e\x01\x51\x01\x4f\x01\xf4\xff\xf4\xff\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x52\x01\x1d\x00\x1d\x00\x00\x00\x6a\x01\x79\x01\x00\x00\x70\x01\xf6\xff\x00\x00\x59\x00\x59\x00\xb6\x00\x00\x00\x00\x00\x00\x00\x58\x01\x72\x01\x13\x00\x00\x00\x00\x00\x00\x00\x7b\x01\x59\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x74\x01\x00\x00\x7a\x01\x00\x00\x00\x00\x7c\x01\x5a\x01\xf4\xff\x00\x00\x59\x00\xf6\xff\x5a\x01\x5a\x01\x5e\x01\x00\x00\x00\x00\x00\x00\x00\x00\x5e\x01\x7d\x01\x2f\x00\x1d\x00\xf6\xff\x86\x01\x00\x00\x00\x00\x00\x00\x6d\x01\xf6\xff\x00\x00\x00\x00\x00\x00"# happyGotoOffsets :: HappyAddr happyGotoOffsets = HappyA# "\xf5\x00\x24\x01\x35\x01\x50\x01\x3a\x01\x2a\x01\x55\x01\x5f\x00\x48\x00\x57\x00\x7e\x01\x03\x00\xec\x00\x0f\x01\x23\x01\xaa\x00\x8f\x00\x16\x01\xc9\x00\x73\x01\x5c\x00\x72\x00\x08\x00\x19\x00\x00\x00\x00\x00\x00\x00\x00\x00\x60\x00\x87\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x89\x01\x4b\x00\x00\x00\x00\x00\x4b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x42\x01\x00\x00\x00\x00\x00\x00\x00\x00\x61\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x94\x00\x00\x00\xf1\x00\xb2\x00\x00\x00\x00\x00\x9c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x8b\x01\x8c\x01\x00\x00\xb3\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x8d\x01\x8f\x01\x90\x01\x62\x01\x57\x01\x29\x01\x04\x00\x00\x00\x91\x01\x78\x01\x7f\x01\x92\x01\x00\x00\x00\x00\x2b\x01\x00\x00\x00\x00\x00\x00\x34\x01\x6a\x00\x63\x01\x64\x01\x00\x00\x94\x01\x00\x00\x0a\x00\x00\x00\x00\x00\x00\x00\x96\x01\xd9\x00\x97\x01\x99\x01\x65\x01\x59\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0d\x00\x05\x01\xc7\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xcc\x00\x00\x00\x27\x00\x7f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x4b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x9a\x01\x66\x01\x00\x00\x6f\x00\xe1\x00\x9b\x01\x9c\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x84\x01\x0a\x01\x45\x01\x00\x00\x00\x00\x00\x00\x00\x00\x88\x01\x4d\x01\x00\x00\x00\x00\x00\x00"# happyAdjustOffset :: Happy_GHC_Exts.Int# -> Happy_GHC_Exts.Int# happyAdjustOffset off = off happyDefActions :: HappyAddr happyDefActions = HappyA# 
"\xe1\xff\xe1\xff\x00\x00\x00\x00\xcb\xff\x00\x00\xc7\xff\x00\x00\x00\x00\xbe\xff\xbc\xff\x00\x00\x00\x00\x00\x00\x00\x00\xac\xff\x00\x00\xcb\xff\xcb\xff\xa3\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xe7\xff\x98\xff\x00\x00\x00\x00\x00\x00\x91\xff\x95\xff\x99\xff\x94\xff\x92\xff\x93\xff\x00\x00\x00\x00\x9d\xff\x00\x00\x9f\xff\x00\x00\xa1\xff\x00\x00\xa4\xff\xcd\xff\xc8\xff\xcb\xff\xa7\xff\xcc\xff\xa6\xff\x00\x00\x00\x00\xe4\xff\xe3\xff\x00\x00\xb1\xff\xaf\xff\xb2\xff\xb0\xff\xb3\xff\xa9\xff\x00\x00\x00\x00\xac\xff\xe6\xff\xe5\xff\xb3\xff\xab\xff\xb6\xff\xb4\xff\x00\x00\x00\x00\x00\x00\x00\x00\xb9\xff\x00\x00\x00\x00\x00\x00\x00\x00\xbf\xff\xbe\xff\x00\x00\x00\x00\xc4\xff\x00\x00\x00\x00\x00\x00\xc3\xff\xc6\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xc7\xff\x00\x00\x00\x00\x00\x00\x00\x00\xa3\xff\xa3\xff\x00\x00\x00\x00\xe0\xff\xe1\xff\x00\x00\xe2\xff\xde\xff\xe1\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xd0\xff\x00\x00\xce\xff\x00\x00\xd7\xff\x00\x00\xbe\xff\xdc\xff\x00\x00\x00\x00\xc7\xff\xc2\xff\x00\x00\xbd\xff\xbb\xff\xba\xff\x00\x00\x00\x00\xac\xff\xb5\xff\x00\x00\x00\x00\xa8\xff\x00\x00\xcb\xff\xca\xff\x00\x00\x00\x00\x9e\xff\x9c\xff\x9b\xff\x9a\xff\x00\x00\x00\x00\x00\x00\x90\xff\x97\xff\x96\xff\xa2\xff\xa0\xff\xa5\xff\xc9\xff\xad\xff\xae\xff\xaa\xff\xb7\xff\xb8\xff\xc1\xff\x00\x00\xc5\xff\x00\x00\xd3\xff\xdb\xff\x00\x00\x00\x00\x00\x00\xcf\xff\x00\x00\xcb\xff\x00\x00\x00\x00\xd9\xff\xdf\xff\xd5\xff\xd6\xff\xd2\xff\xd8\xff\x00\x00\xbc\xff\x00\x00\xcb\xff\x00\x00\xc0\xff\xdd\xff\xd1\xff\xa3\xff\xcb\xff\xda\xff\xd4\xff"# happyCheck :: HappyAddr happyCheck = HappyA# "\xff\xff\x0d\x00\x01\x00\x0d\x00\x02\x00\x26\x00\x03\x00\x03\x00\x00\x00\x03\x00\x04\x00\x2b\x00\x2d\x00\x03\x00\x0d\x00\x0d\x00\x03\x00\x10\x00\x0c\x00\x10\x00\x10\x00\x02\x00\x15\x00\x06\x00\x17\x00\x00\x00\x10\x00\x1a\x00\x1b\x00\x10\x00\x01\x00\x01\x00\x2c\x00\x2b\x00\x2c\x00\x1b\x00\x1c\x00\x24\x00\x25\x00\x00\x00\x0a\x00\x28\x00\x0d\x00\x0d\x00\x01\x00\x0f\x00\x2d\x00\x11\x00\x12\x00\x13\x00\x14\x00\x2d\x00\x16\x00\x1c\x00\x18\x00\x19\x00\x0d\x00\x26\x00\x0f\x00\x1d\x00\x1e\x00\x1f\x00\x2d\x00\x21\x00\x22\x00\x01\x00\x1b\x00\x1c\x00\x2c\x00\x28\x00\x29\x00\x2a\x00\x2b\x00\x2c\x00\x2c\x00\x00\x00\x04\x00\x0d\x00\x1f\x00\x0f\x00\x21\x00\x11\x00\x12\x00\x13\x00\x14\x00\x0d\x00\x16\x00\x2c\x00\x18\x00\x19\x00\x01\x00\x04\x00\x00\x00\x1d\x00\x1e\x00\x1f\x00\x00\x00\x21\x00\x22\x00\x04\x00\x0d\x00\x0e\x00\x0d\x00\x1c\x00\x1c\x00\x10\x00\x00\x00\x0c\x00\x2c\x00\x28\x00\x15\x00\x00\x00\x17\x00\x2b\x00\x00\x00\x1a\x00\x1b\x00\x19\x00\x1a\x00\x1b\x00\x1c\x00\x19\x00\x1a\x00\x1b\x00\x1c\x00\x24\x00\x25\x00\x00\x00\x06\x00\x28\x00\x2d\x00\x19\x00\x1a\x00\x1b\x00\x1c\x00\x26\x00\x19\x00\x1a\x00\x1b\x00\x1c\x00\x1a\x00\x1b\x00\x1c\x00\x00\x00\x01\x00\x02\x00\x03\x00\x04\x00\x00\x00\x01\x00\x02\x00\x03\x00\x04\x00\x1a\x00\x1b\x00\x1c\x00\x00\x00\x01\x00\x02\x00\x03\x00\x04\x00\x2d\x00\x13\x00\x05\x00\x15\x00\x20\x00\x2d\x00\x13\x00\x23\x00\x15\x00\x00\x00\x01\x00\x02\x00\x03\x00\x04\x00\x13\x00\x2b\x00\x15\x00\x00\x00\x01\x00\x02\x00\x03\x00\x04\x00\x04\x00\x08\x00\x03\x00\x04\x00\x11\x00\x12\x00\x13\x00\x14\x00\x05\x00\x0d\x00\x0e\x00\x0c\x00\x11\x00\x12\x00\x13\x00\x14\x00\x00\x00\x01\x00\x02\x00\x03\x00\x04\x00\x03\x00\x04\x00\x2d\x00\x03\x00\x04\x00\x08\x00\x09\x00\x0a\x00\x08\x00\x09\x00\x0a\x00\x2b\x00\x11\x00\x12\x00\x13\x00\x14\x00\x2d\x00\x04\x00\x08\x00\x16\x00\x17\x00\x2c\x00\x16\x00\x17\x00\x03\x00\x04\x00\x0d\x00\x0e\x00\x0e\x00\x08\x00\x09\x00\x0a\x00\x00\x00\x01\x00\x02\x00\
x03\x00\x04\x00\x00\x00\x01\x00\x02\x00\x03\x00\x04\x00\x07\x00\x16\x00\x17\x00\x04\x00\x05\x00\x06\x00\x07\x00\x11\x00\x12\x00\x13\x00\x05\x00\x0c\x00\x11\x00\x12\x00\x13\x00\x00\x00\x01\x00\x02\x00\x03\x00\x04\x00\x00\x00\x01\x00\x02\x00\x03\x00\x04\x00\x00\x00\x01\x00\x02\x00\x03\x00\x04\x00\x2d\x00\x2c\x00\x11\x00\x12\x00\x13\x00\x03\x00\x04\x00\x11\x00\x12\x00\x13\x00\x08\x00\x09\x00\x0a\x00\x12\x00\x13\x00\x00\x00\x01\x00\x02\x00\x03\x00\x04\x00\x04\x00\x2b\x00\x06\x00\x07\x00\x16\x00\x04\x00\x04\x00\x04\x00\x0c\x00\x06\x00\x07\x00\x2c\x00\x0a\x00\x0c\x00\x13\x00\x0c\x00\x04\x00\x04\x00\x06\x00\x07\x00\x07\x00\x03\x00\x04\x00\x22\x00\x0c\x00\x0c\x00\x08\x00\x09\x00\x0a\x00\x03\x00\x04\x00\x2c\x00\x03\x00\x04\x00\x08\x00\x09\x00\x0a\x00\x08\x00\x09\x00\x0a\x00\x03\x00\x04\x00\x1c\x00\x03\x00\x04\x00\x08\x00\x09\x00\x0a\x00\x08\x00\x04\x00\x0a\x00\x04\x00\x2c\x00\x04\x00\x2d\x00\x0a\x00\x0b\x00\x0a\x00\x0b\x00\x0a\x00\x0b\x00\x04\x00\x04\x00\x04\x00\x04\x00\x04\x00\x04\x00\x0a\x00\x0a\x00\x0a\x00\x0a\x00\x0a\x00\x0a\x00\x0a\x00\x09\x00\x2d\x00\x2c\x00\x2b\x00\x07\x00\x2b\x00\x0e\x00\x2a\x00\x2c\x00\x02\x00\x2b\x00\x2b\x00\x0e\x00\x27\x00\x0e\x00\x06\x00\x0e\x00\x09\x00\x26\x00\x2b\x00\x09\x00\x0b\x00\x02\x00\x1c\x00\x03\x00\x18\x00\x03\x00\x0f\x00\x03\x00\x03\x00\x18\x00\x04\x00\x03\x00\x0f\x00\x04\x00\x04\x00\x04\x00\x18\x00\x04\x00\x03\x00\x03\x00\x02\x00\xff\xff\x03\x00\x03\x00\x03\x00\x18\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff"# happyTable :: HappyAddr happyTable = HappyA# 
"\x00\x00\x35\x00\x1d\x00\x35\x00\xa8\x00\x94\x00\x4b\x00\x4b\x00\x1a\x00\x96\x00\x97\x00\x36\x00\xff\xff\x4b\x00\x1e\x00\xa9\x00\x4b\x00\x1f\x00\x98\x00\x4c\x00\x78\x00\x9c\x00\x20\x00\x93\x00\x21\x00\x1a\x00\xb0\x00\x22\x00\x23\x00\xa6\x00\x40\x00\x57\x00\x37\x00\x36\x00\x37\x00\x25\x00\x26\x00\x24\x00\x25\x00\x1a\x00\x6f\x00\x1a\x00\x41\x00\x58\x00\x57\x00\x59\x00\xff\xff\x61\x00\x62\x00\x63\x00\x64\x00\xff\xff\x65\x00\x1b\x00\x66\x00\x67\x00\x58\x00\x94\x00\x59\x00\x68\x00\x69\x00\x6a\x00\xff\xff\x6b\x00\x6c\x00\x57\x00\x9f\x00\x26\x00\x37\x00\x1a\x00\x42\x00\x43\x00\x36\x00\x37\x00\x37\x00\x1a\x00\x50\x00\x58\x00\x4f\x00\x59\x00\x50\x00\x61\x00\x62\x00\x63\x00\x64\x00\x53\x00\x65\x00\x37\x00\x66\x00\x67\x00\x1d\x00\x50\x00\x1a\x00\x68\x00\x69\x00\x6a\x00\x1a\x00\x6b\x00\x6c\x00\x54\x00\x51\x00\x52\x00\x1e\x00\x94\x00\x2d\x00\x1f\x00\x1a\x00\x55\x00\x37\x00\x1a\x00\x20\x00\x1a\x00\x21\x00\x36\x00\x1a\x00\x22\x00\x23\x00\x29\x00\x2a\x00\x28\x00\x26\x00\x9a\x00\x2a\x00\x28\x00\x26\x00\x24\x00\x25\x00\x1a\x00\x93\x00\x1a\x00\xff\xff\xb5\x00\x2a\x00\x28\x00\x26\x00\x91\x00\xba\x00\x2a\x00\x28\x00\x26\x00\x27\x00\x28\x00\x26\x00\x38\x00\x39\x00\x3a\x00\x3b\x00\x3c\x00\x38\x00\x39\x00\x3a\x00\x3b\x00\x3c\x00\x9e\x00\x28\x00\x26\x00\x38\x00\x39\x00\x3a\x00\x3b\x00\x3c\x00\xff\xff\x3d\x00\x8b\x00\x3e\x00\x7a\x00\xff\xff\x3d\x00\x7b\x00\x8e\x00\x38\x00\x39\x00\x3a\x00\x3b\x00\x43\x00\x3d\x00\x36\x00\x8b\x00\x38\x00\x39\x00\x3a\x00\x3b\x00\x43\x00\x50\x00\x8a\x00\x96\x00\x97\x00\x44\x00\x45\x00\x46\x00\x47\x00\x89\x00\x51\x00\x85\x00\x98\x00\x44\x00\x45\x00\x46\x00\x8c\x00\x38\x00\x39\x00\x3a\x00\x3b\x00\x43\x00\x2d\x00\x2e\x00\xff\xff\x2d\x00\x2e\x00\x2f\x00\x30\x00\x31\x00\x2f\x00\x30\x00\x31\x00\x36\x00\x44\x00\x45\x00\x46\x00\xa4\x00\xff\xff\x50\x00\x85\x00\x32\x00\x33\x00\x37\x00\x32\x00\xa0\x00\x2d\x00\x2e\x00\x51\x00\xad\x00\x84\x00\x2f\x00\x30\x00\x31\x00\x38\x00\x39\x00\x3a\x00\x3b\x00\x43\x00\x38\x00\x39\x00\x3a\x00\x3b\x00\x43\x00\x82\x00\x32\x00\xb9\x00\x54\x00\x6f\x00\x70\x00\x6d\x00\x4a\x00\x45\x00\x46\x00\x83\x00\x5f\x00\x8d\x00\x45\x00\x46\x00\x38\x00\x39\x00\x3a\x00\x3b\x00\x43\x00\x38\x00\x39\x00\x3a\x00\x3b\x00\x43\x00\x38\x00\x39\x00\x3a\x00\x3b\x00\x43\x00\xff\xff\x37\x00\xa5\x00\x45\x00\x46\x00\x2d\x00\x2e\x00\xc2\x00\x45\x00\x46\x00\x2f\x00\x30\x00\x31\x00\x49\x00\x46\x00\x38\x00\x39\x00\x3a\x00\x3b\x00\x3c\x00\x54\x00\x36\x00\x6c\x00\x6d\x00\x37\x00\x54\x00\x2e\x00\x54\x00\x5f\x00\x71\x00\x6d\x00\x37\x00\x5b\x00\x7b\x00\x48\x00\x5f\x00\x54\x00\x54\x00\xb6\x00\x6d\x00\x5e\x00\x2d\x00\x2e\x00\x78\x00\x5f\x00\x5f\x00\x2f\x00\x5c\x00\x31\x00\x2d\x00\x2e\x00\x37\x00\x2d\x00\x2e\x00\x2f\x00\x91\x00\x31\x00\x2f\x00\xc1\x00\x31\x00\x2d\x00\x2e\x00\x2d\x00\x2d\x00\x2e\x00\x2f\x00\xc5\x00\x31\x00\x5d\x00\x2e\x00\x31\x00\x2e\x00\x37\x00\x2e\x00\xff\xff\x59\x00\x5a\x00\x59\x00\x7c\x00\x59\x00\xa9\x00\x2e\x00\x2e\x00\x2e\x00\x2e\x00\x2e\x00\x2e\x00\x8f\x00\x7d\x00\xb4\x00\xb3\x00\xaa\x00\xbb\x00\x73\x00\xb3\x00\xff\xff\x37\x00\x36\x00\xb0\x00\x36\x00\xa4\x00\x43\x00\x37\x00\xa3\x00\x36\x00\x36\x00\xa2\x00\x9e\x00\x9d\x00\x93\x00\xc0\x00\xbf\x00\x94\x00\x36\x00\xc5\x00\xbe\x00\xc1\x00\x2d\x00\x99\x00\x2b\x00\x98\x00\x4d\x00\x87\x00\x86\x00\x75\x00\x80\x00\x7f\x00\xc3\x00\x7e\x00\x76\x00\x73\x00\x74\x00\xb1\x00\xae\x00\xac\x00\xab\x00\x00\x00\xbc\x00\xb8\x00\xb7\x00\xc6\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x
00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"# happyReduceArr = Happy_Data_Array.array (24, 111) [ (24 , happyReduce_24), (25 , happyReduce_25), (26 , happyReduce_26), (27 , happyReduce_27), (28 , happyReduce_28), (29 , happyReduce_29), (30 , happyReduce_30), (31 , happyReduce_31), (32 , happyReduce_32), (33 , happyReduce_33), (34 , happyReduce_34), (35 , happyReduce_35), (36 , happyReduce_36), (37 , happyReduce_37), (38 , happyReduce_38), (39 , happyReduce_39), (40 , happyReduce_40), (41 , happyReduce_41), (42 , happyReduce_42), (43 , happyReduce_43), (44 , happyReduce_44), (45 , happyReduce_45), (46 , happyReduce_46), (47 , happyReduce_47), (48 , happyReduce_48), (49 , happyReduce_49), (50 , happyReduce_50), (51 , happyReduce_51), (52 , happyReduce_52), (53 , happyReduce_53), (54 , happyReduce_54), (55 , happyReduce_55), (56 , happyReduce_56), (57 , happyReduce_57), (58 , happyReduce_58), (59 , happyReduce_59), (60 , happyReduce_60), (61 , happyReduce_61), (62 , happyReduce_62), (63 , happyReduce_63), (64 , happyReduce_64), (65 , happyReduce_65), (66 , happyReduce_66), (67 , happyReduce_67), (68 , happyReduce_68), (69 , happyReduce_69), (70 , happyReduce_70), (71 , happyReduce_71), (72 , happyReduce_72), (73 , happyReduce_73), (74 , happyReduce_74), (75 , happyReduce_75), (76 , happyReduce_76), (77 , happyReduce_77), (78 , happyReduce_78), (79 , happyReduce_79), (80 , happyReduce_80), (81 , happyReduce_81), (82 , happyReduce_82), (83 , happyReduce_83), (84 , happyReduce_84), (85 , happyReduce_85), (86 , happyReduce_86), (87 , happyReduce_87), (88 , happyReduce_88), (89 , happyReduce_89), (90 , happyReduce_90), (91 , happyReduce_91), (92 , happyReduce_92), (93 , happyReduce_93), (94 , happyReduce_94), (95 , happyReduce_95), (96 , happyReduce_96), (97 , happyReduce_97), (98 , happyReduce_98), (99 , happyReduce_99), (100 , happyReduce_100), (101 , happyReduce_101), (102 , happyReduce_102), (103 , happyReduce_103), (104 , happyReduce_104), (105 , happyReduce_105), (106 , happyReduce_106), (107 , happyReduce_107), (108 , happyReduce_108), (109 , happyReduce_109), (110 , happyReduce_110), (111 , happyReduce_111) ] happy_n_terms = 46 :: Prelude.Int happy_n_nonterms = 29 :: Prelude.Int happyReduce_24 = happySpecReduce_1 0# happyReduction_24 happyReduction_24 happy_x_1 = case happyOutTok happy_x_1 of { (PT _ (TC happy_var_1)) -> happyIn27 ((read happy_var_1) :: Char )} happyReduce_25 = happySpecReduce_1 1# happyReduction_25 happyReduction_25 happy_x_1 = case happyOutTok happy_x_1 of { (PT _ (TD happy_var_1)) -> happyIn28 ((read happy_var_1) :: Double )} happyReduce_26 = happySpecReduce_1 2# happyReduction_26 happyReduction_26 happy_x_1 = case happyOutTok happy_x_1 of { (PT _ (TI happy_var_1)) -> happyIn29 ((read happy_var_1) :: Integer )} happyReduce_27 = happySpecReduce_1 3# happyReduction_27 happyReduction_27 happy_x_1 = case happyOutTok happy_x_1 of { (PT _ (TL happy_var_1)) -> happyIn30 (happy_var_1 )} happyReduce_28 = happySpecReduce_1 4# happyReduction_28 happyReduction_28 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> happyIn31 (BNFC.Abs.Identifier (mkPosToken happy_var_1) )} happyReduce_29 = happySpecReduce_1 5# happyReduction_29 happyReduction_29 happy_x_1 = case happyOut33 happy_x_1 of { (HappyWrap33 happy_var_1) -> happyIn32 (BNFC.Abs.Grammar happy_var_1 )} happyReduce_30 = happySpecReduce_0 6# happyReduction_30 happyReduction_30 = happyIn33 ([] ) 
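-- Note on the generated reduction functions in this file (added commentary,
-- not produced by happy): each happyReduce_N pairs a happySpecReduce_k or
-- happyReduce k (k = number of right-hand-side symbols) with a semantic
-- action that unwraps the child values via happyOutM, builds the matching
-- BNFC.Abs constructor, and re-wraps the result via happyInM. For example,
-- happyReduce_30 .. happyReduce_33 implement the ListDef list productions
-- (empty list, singleton, cons, and a skipped leading separator).
--
-- Usage sketch for the entry points defined further down in this module
-- (illustration only; the grammar fragment in the string is made up):
--
--   parsed :: Either String BNFC.Abs.Grammar
--   parsed = pGrammar (myLexer "EInt. Exp ::= Integer ;")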
happyReduce_31 = happySpecReduce_1 6# happyReduction_31 happyReduction_31 happy_x_1 = case happyOut34 happy_x_1 of { (HappyWrap34 happy_var_1) -> happyIn33 ((:[]) happy_var_1 )} happyReduce_32 = happySpecReduce_3 6# happyReduction_32 happyReduction_32 happy_x_3 happy_x_2 happy_x_1 = case happyOut34 happy_x_1 of { (HappyWrap34 happy_var_1) -> case happyOut33 happy_x_3 of { (HappyWrap33 happy_var_3) -> happyIn33 ((:) happy_var_1 happy_var_3 )}} happyReduce_33 = happySpecReduce_2 6# happyReduction_33 happyReduction_33 happy_x_2 happy_x_1 = case happyOut33 happy_x_2 of { (HappyWrap33 happy_var_2) -> happyIn33 (happy_var_2 )} happyReduce_34 = happyReduce 5# 7# happyReduction_34 happyReduction_34 (happy_x_5 `HappyStk` happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOut39 happy_x_1 of { (HappyWrap39 happy_var_1) -> case happyOut37 happy_x_3 of { (HappyWrap37 happy_var_3) -> case happyOut36 happy_x_5 of { (HappyWrap36 happy_var_5) -> happyIn34 (BNFC.Abs.Rule happy_var_1 happy_var_3 happy_var_5 ) `HappyStk` happyRest}}} happyReduce_35 = happySpecReduce_2 7# happyReduction_35 happyReduction_35 happy_x_2 happy_x_1 = case happyOut30 happy_x_2 of { (HappyWrap30 happy_var_2) -> happyIn34 (BNFC.Abs.Comment happy_var_2 )} happyReduce_36 = happySpecReduce_3 7# happyReduction_36 happyReduction_36 happy_x_3 happy_x_2 happy_x_1 = case happyOut30 happy_x_2 of { (HappyWrap30 happy_var_2) -> case happyOut30 happy_x_3 of { (HappyWrap30 happy_var_3) -> happyIn34 (BNFC.Abs.Comments happy_var_2 happy_var_3 )}} happyReduce_37 = happyReduce 6# 7# happyReduction_37 happyReduction_37 (happy_x_6 `HappyStk` happy_x_5 `HappyStk` happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOut39 happy_x_2 of { (HappyWrap39 happy_var_2) -> case happyOut37 happy_x_4 of { (HappyWrap37 happy_var_4) -> case happyOut36 happy_x_6 of { (HappyWrap36 happy_var_6) -> happyIn34 (BNFC.Abs.Internal happy_var_2 happy_var_4 happy_var_6 ) `HappyStk` happyRest}}} happyReduce_38 = happySpecReduce_3 7# happyReduction_38 happyReduction_38 happy_x_3 happy_x_2 happy_x_1 = case happyOut31 happy_x_2 of { (HappyWrap31 happy_var_2) -> case happyOut52 happy_x_3 of { (HappyWrap52 happy_var_3) -> happyIn34 (BNFC.Abs.Token happy_var_2 happy_var_3 )}} happyReduce_39 = happyReduce 4# 7# happyReduction_39 happyReduction_39 (happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOut31 happy_x_3 of { (HappyWrap31 happy_var_3) -> case happyOut52 happy_x_4 of { (HappyWrap52 happy_var_4) -> happyIn34 (BNFC.Abs.PosToken happy_var_3 happy_var_4 ) `HappyStk` happyRest}} happyReduce_40 = happySpecReduce_2 7# happyReduction_40 happyReduction_40 happy_x_2 happy_x_1 = case happyOut38 happy_x_2 of { (HappyWrap38 happy_var_2) -> happyIn34 (BNFC.Abs.Entryp happy_var_2 )} happyReduce_41 = happyReduce 4# 7# happyReduction_41 happyReduction_41 (happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOut51 happy_x_2 of { (HappyWrap51 happy_var_2) -> case happyOut37 happy_x_3 of { (HappyWrap37 happy_var_3) -> case happyOut30 happy_x_4 of { (HappyWrap30 happy_var_4) -> happyIn34 (BNFC.Abs.Separator happy_var_2 happy_var_3 happy_var_4 ) `HappyStk` happyRest}}} happyReduce_42 = happyReduce 4# 7# happyReduction_42 happyReduction_42 (happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOut51 happy_x_2 of { (HappyWrap51 
happy_var_2) -> case happyOut37 happy_x_3 of { (HappyWrap37 happy_var_3) -> case happyOut30 happy_x_4 of { (HappyWrap30 happy_var_4) -> happyIn34 (BNFC.Abs.Terminator happy_var_2 happy_var_3 happy_var_4 ) `HappyStk` happyRest}}} happyReduce_43 = happyReduce 6# 7# happyReduction_43 happyReduction_43 (happy_x_6 `HappyStk` happy_x_5 `HappyStk` happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOut37 happy_x_2 of { (HappyWrap37 happy_var_2) -> case happyOut30 happy_x_3 of { (HappyWrap30 happy_var_3) -> case happyOut30 happy_x_4 of { (HappyWrap30 happy_var_4) -> case happyOut42 happy_x_5 of { (HappyWrap42 happy_var_5) -> case happyOut51 happy_x_6 of { (HappyWrap51 happy_var_6) -> happyIn34 (BNFC.Abs.Delimiters happy_var_2 happy_var_3 happy_var_4 happy_var_5 happy_var_6 ) `HappyStk` happyRest}}}}} happyReduce_44 = happySpecReduce_3 7# happyReduction_44 happyReduction_44 happy_x_3 happy_x_2 happy_x_1 = case happyOut31 happy_x_2 of { (HappyWrap31 happy_var_2) -> case happyOut29 happy_x_3 of { (HappyWrap29 happy_var_3) -> happyIn34 (BNFC.Abs.Coercions happy_var_2 happy_var_3 )}} happyReduce_45 = happyReduce 4# 7# happyReduction_45 happyReduction_45 (happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOut31 happy_x_2 of { (HappyWrap31 happy_var_2) -> case happyOut50 happy_x_4 of { (HappyWrap50 happy_var_4) -> happyIn34 (BNFC.Abs.Rules happy_var_2 happy_var_4 ) `HappyStk` happyRest}} happyReduce_46 = happyReduce 5# 7# happyReduction_46 happyReduction_46 (happy_x_5 `HappyStk` happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOut31 happy_x_2 of { (HappyWrap31 happy_var_2) -> case happyOut41 happy_x_3 of { (HappyWrap41 happy_var_3) -> case happyOut44 happy_x_5 of { (HappyWrap44 happy_var_5) -> happyIn34 (BNFC.Abs.Function happy_var_2 happy_var_3 happy_var_5 ) `HappyStk` happyRest}}} happyReduce_47 = happySpecReduce_2 7# happyReduction_47 happyReduction_47 happy_x_2 happy_x_1 = case happyOut43 happy_x_2 of { (HappyWrap43 happy_var_2) -> happyIn34 (BNFC.Abs.Layout happy_var_2 )} happyReduce_48 = happySpecReduce_3 7# happyReduction_48 happyReduction_48 happy_x_3 happy_x_2 happy_x_1 = case happyOut43 happy_x_3 of { (HappyWrap43 happy_var_3) -> happyIn34 (BNFC.Abs.LayoutStop happy_var_3 )} happyReduce_49 = happySpecReduce_2 7# happyReduction_49 happyReduction_49 happy_x_2 happy_x_1 = happyIn34 (BNFC.Abs.LayoutTop ) happyReduce_50 = happySpecReduce_1 8# happyReduction_50 happyReduction_50 happy_x_1 = case happyOut30 happy_x_1 of { (HappyWrap30 happy_var_1) -> happyIn35 (BNFC.Abs.Terminal happy_var_1 )} happyReduce_51 = happySpecReduce_1 8# happyReduction_51 happyReduction_51 happy_x_1 = case happyOut37 happy_x_1 of { (HappyWrap37 happy_var_1) -> happyIn35 (BNFC.Abs.NTerminal happy_var_1 )} happyReduce_52 = happySpecReduce_0 9# happyReduction_52 happyReduction_52 = happyIn36 ([] ) happyReduce_53 = happySpecReduce_2 9# happyReduction_53 happyReduction_53 happy_x_2 happy_x_1 = case happyOut35 happy_x_1 of { (HappyWrap35 happy_var_1) -> case happyOut36 happy_x_2 of { (HappyWrap36 happy_var_2) -> happyIn36 ((:) happy_var_1 happy_var_2 )}} happyReduce_54 = happySpecReduce_3 10# happyReduction_54 happyReduction_54 happy_x_3 happy_x_2 happy_x_1 = case happyOut37 happy_x_2 of { (HappyWrap37 happy_var_2) -> happyIn37 (BNFC.Abs.ListCat happy_var_2 )} happyReduce_55 = happySpecReduce_1 10# happyReduction_55 happyReduction_55 happy_x_1 = case happyOut31 
happy_x_1 of { (HappyWrap31 happy_var_1) -> happyIn37 (BNFC.Abs.IdCat happy_var_1 )} happyReduce_56 = happySpecReduce_0 11# happyReduction_56 happyReduction_56 = happyIn38 ([] ) happyReduce_57 = happySpecReduce_1 11# happyReduction_57 happyReduction_57 happy_x_1 = case happyOut37 happy_x_1 of { (HappyWrap37 happy_var_1) -> happyIn38 ((:[]) happy_var_1 )} happyReduce_58 = happySpecReduce_3 11# happyReduction_58 happyReduction_58 happy_x_3 happy_x_2 happy_x_1 = case happyOut37 happy_x_1 of { (HappyWrap37 happy_var_1) -> case happyOut38 happy_x_3 of { (HappyWrap38 happy_var_3) -> happyIn38 ((:) happy_var_1 happy_var_3 )}} happyReduce_59 = happySpecReduce_1 12# happyReduction_59 happyReduction_59 happy_x_1 = case happyOut31 happy_x_1 of { (HappyWrap31 happy_var_1) -> happyIn39 (BNFC.Abs.Id happy_var_1 )} happyReduce_60 = happySpecReduce_1 12# happyReduction_60 happyReduction_60 happy_x_1 = happyIn39 (BNFC.Abs.Wild ) happyReduce_61 = happySpecReduce_2 12# happyReduction_61 happyReduction_61 happy_x_2 happy_x_1 = happyIn39 (BNFC.Abs.ListE ) happyReduce_62 = happySpecReduce_3 12# happyReduction_62 happyReduction_62 happy_x_3 happy_x_2 happy_x_1 = happyIn39 (BNFC.Abs.ListCons ) happyReduce_63 = happyReduce 5# 12# happyReduction_63 happyReduction_63 (happy_x_5 `HappyStk` happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = happyIn39 (BNFC.Abs.ListOne ) `HappyStk` happyRest happyReduce_64 = happySpecReduce_1 13# happyReduction_64 happyReduction_64 happy_x_1 = case happyOut31 happy_x_1 of { (HappyWrap31 happy_var_1) -> happyIn40 (BNFC.Abs.Arg happy_var_1 )} happyReduce_65 = happySpecReduce_0 14# happyReduction_65 happyReduction_65 = happyIn41 ([] ) happyReduce_66 = happySpecReduce_2 14# happyReduction_66 happyReduction_66 happy_x_2 happy_x_1 = case happyOut40 happy_x_1 of { (HappyWrap40 happy_var_1) -> case happyOut41 happy_x_2 of { (HappyWrap41 happy_var_2) -> happyIn41 ((:) happy_var_1 happy_var_2 )}} happyReduce_67 = happySpecReduce_0 15# happyReduction_67 happyReduction_67 = happyIn42 (BNFC.Abs.SepNone ) happyReduce_68 = happySpecReduce_2 15# happyReduction_68 happyReduction_68 happy_x_2 happy_x_1 = case happyOut30 happy_x_2 of { (HappyWrap30 happy_var_2) -> happyIn42 (BNFC.Abs.SepTerm happy_var_2 )} happyReduce_69 = happySpecReduce_2 15# happyReduction_69 happyReduction_69 happy_x_2 happy_x_1 = case happyOut30 happy_x_2 of { (HappyWrap30 happy_var_2) -> happyIn42 (BNFC.Abs.SepSepar happy_var_2 )} happyReduce_70 = happySpecReduce_1 16# happyReduction_70 happyReduction_70 happy_x_1 = case happyOut30 happy_x_1 of { (HappyWrap30 happy_var_1) -> happyIn43 ((:[]) happy_var_1 )} happyReduce_71 = happySpecReduce_3 16# happyReduction_71 happyReduction_71 happy_x_3 happy_x_2 happy_x_1 = case happyOut30 happy_x_1 of { (HappyWrap30 happy_var_1) -> case happyOut43 happy_x_3 of { (HappyWrap43 happy_var_3) -> happyIn43 ((:) happy_var_1 happy_var_3 )}} happyReduce_72 = happySpecReduce_3 17# happyReduction_72 happyReduction_72 happy_x_3 happy_x_2 happy_x_1 = case happyOut45 happy_x_1 of { (HappyWrap45 happy_var_1) -> case happyOut44 happy_x_3 of { (HappyWrap44 happy_var_3) -> happyIn44 (BNFC.Abs.Cons happy_var_1 happy_var_3 )}} happyReduce_73 = happySpecReduce_1 17# happyReduction_73 happyReduction_73 happy_x_1 = case happyOut45 happy_x_1 of { (HappyWrap45 happy_var_1) -> happyIn44 (happy_var_1 )} happyReduce_74 = happySpecReduce_2 18# happyReduction_74 happyReduction_74 happy_x_2 happy_x_1 = case happyOut31 happy_x_1 of { (HappyWrap31 happy_var_1) -> case happyOut48 
happy_x_2 of { (HappyWrap48 happy_var_2) -> happyIn45 (BNFC.Abs.App happy_var_1 happy_var_2 )}} happyReduce_75 = happySpecReduce_1 18# happyReduction_75 happyReduction_75 happy_x_1 = case happyOut46 happy_x_1 of { (HappyWrap46 happy_var_1) -> happyIn45 (happy_var_1 )} happyReduce_76 = happySpecReduce_1 19# happyReduction_76 happyReduction_76 happy_x_1 = case happyOut31 happy_x_1 of { (HappyWrap31 happy_var_1) -> happyIn46 (BNFC.Abs.Var happy_var_1 )} happyReduce_77 = happySpecReduce_1 19# happyReduction_77 happyReduction_77 happy_x_1 = case happyOut29 happy_x_1 of { (HappyWrap29 happy_var_1) -> happyIn46 (BNFC.Abs.LitInt happy_var_1 )} happyReduce_78 = happySpecReduce_1 19# happyReduction_78 happyReduction_78 happy_x_1 = case happyOut27 happy_x_1 of { (HappyWrap27 happy_var_1) -> happyIn46 (BNFC.Abs.LitChar happy_var_1 )} happyReduce_79 = happySpecReduce_1 19# happyReduction_79 happyReduction_79 happy_x_1 = case happyOut30 happy_x_1 of { (HappyWrap30 happy_var_1) -> happyIn46 (BNFC.Abs.LitString happy_var_1 )} happyReduce_80 = happySpecReduce_1 19# happyReduction_80 happyReduction_80 happy_x_1 = case happyOut28 happy_x_1 of { (HappyWrap28 happy_var_1) -> happyIn46 (BNFC.Abs.LitDouble happy_var_1 )} happyReduce_81 = happySpecReduce_3 19# happyReduction_81 happyReduction_81 happy_x_3 happy_x_2 happy_x_1 = case happyOut47 happy_x_2 of { (HappyWrap47 happy_var_2) -> happyIn46 (BNFC.Abs.List happy_var_2 )} happyReduce_82 = happySpecReduce_3 19# happyReduction_82 happyReduction_82 happy_x_3 happy_x_2 happy_x_1 = case happyOut44 happy_x_2 of { (HappyWrap44 happy_var_2) -> happyIn46 (happy_var_2 )} happyReduce_83 = happySpecReduce_0 20# happyReduction_83 happyReduction_83 = happyIn47 ([] ) happyReduce_84 = happySpecReduce_1 20# happyReduction_84 happyReduction_84 happy_x_1 = case happyOut44 happy_x_1 of { (HappyWrap44 happy_var_1) -> happyIn47 ((:[]) happy_var_1 )} happyReduce_85 = happySpecReduce_3 20# happyReduction_85 happyReduction_85 happy_x_3 happy_x_2 happy_x_1 = case happyOut44 happy_x_1 of { (HappyWrap44 happy_var_1) -> case happyOut47 happy_x_3 of { (HappyWrap47 happy_var_3) -> happyIn47 ((:) happy_var_1 happy_var_3 )}} happyReduce_86 = happySpecReduce_1 21# happyReduction_86 happyReduction_86 happy_x_1 = case happyOut46 happy_x_1 of { (HappyWrap46 happy_var_1) -> happyIn48 ((:[]) happy_var_1 )} happyReduce_87 = happySpecReduce_2 21# happyReduction_87 happyReduction_87 happy_x_2 happy_x_1 = case happyOut46 happy_x_1 of { (HappyWrap46 happy_var_1) -> case happyOut48 happy_x_2 of { (HappyWrap48 happy_var_2) -> happyIn48 ((:) happy_var_1 happy_var_2 )}} happyReduce_88 = happySpecReduce_1 22# happyReduction_88 happyReduction_88 happy_x_1 = case happyOut36 happy_x_1 of { (HappyWrap36 happy_var_1) -> happyIn49 (BNFC.Abs.RHS happy_var_1 )} happyReduce_89 = happySpecReduce_1 23# happyReduction_89 happyReduction_89 happy_x_1 = case happyOut49 happy_x_1 of { (HappyWrap49 happy_var_1) -> happyIn50 ((:[]) happy_var_1 )} happyReduce_90 = happySpecReduce_3 23# happyReduction_90 happyReduction_90 happy_x_3 happy_x_2 happy_x_1 = case happyOut49 happy_x_1 of { (HappyWrap49 happy_var_1) -> case happyOut50 happy_x_3 of { (HappyWrap50 happy_var_3) -> happyIn50 ((:) happy_var_1 happy_var_3 )}} happyReduce_91 = happySpecReduce_1 24# happyReduction_91 happyReduction_91 happy_x_1 = happyIn51 (BNFC.Abs.MNonempty ) happyReduce_92 = happySpecReduce_0 24# happyReduction_92 happyReduction_92 = happyIn51 (BNFC.Abs.MEmpty ) happyReduce_93 = happySpecReduce_3 25# happyReduction_93 happyReduction_93 happy_x_3 happy_x_2 
happy_x_1 = case happyOut52 happy_x_1 of { (HappyWrap52 happy_var_1) -> case happyOut53 happy_x_3 of { (HappyWrap53 happy_var_3) -> happyIn52 (BNFC.Abs.RAlt happy_var_1 happy_var_3 )}} happyReduce_94 = happySpecReduce_1 25# happyReduction_94 happyReduction_94 happy_x_1 = case happyOut53 happy_x_1 of { (HappyWrap53 happy_var_1) -> happyIn52 (happy_var_1 )} happyReduce_95 = happySpecReduce_3 26# happyReduction_95 happyReduction_95 happy_x_3 happy_x_2 happy_x_1 = case happyOut53 happy_x_1 of { (HappyWrap53 happy_var_1) -> case happyOut54 happy_x_3 of { (HappyWrap54 happy_var_3) -> happyIn53 (BNFC.Abs.RMinus happy_var_1 happy_var_3 )}} happyReduce_96 = happySpecReduce_1 26# happyReduction_96 happyReduction_96 happy_x_1 = case happyOut54 happy_x_1 of { (HappyWrap54 happy_var_1) -> happyIn53 (happy_var_1 )} happyReduce_97 = happySpecReduce_2 27# happyReduction_97 happyReduction_97 happy_x_2 happy_x_1 = case happyOut54 happy_x_1 of { (HappyWrap54 happy_var_1) -> case happyOut55 happy_x_2 of { (HappyWrap55 happy_var_2) -> happyIn54 (BNFC.Abs.RSeq happy_var_1 happy_var_2 )}} happyReduce_98 = happySpecReduce_1 27# happyReduction_98 happyReduction_98 happy_x_1 = case happyOut55 happy_x_1 of { (HappyWrap55 happy_var_1) -> happyIn54 (happy_var_1 )} happyReduce_99 = happySpecReduce_2 28# happyReduction_99 happyReduction_99 happy_x_2 happy_x_1 = case happyOut55 happy_x_1 of { (HappyWrap55 happy_var_1) -> happyIn55 (BNFC.Abs.RStar happy_var_1 )} happyReduce_100 = happySpecReduce_2 28# happyReduction_100 happyReduction_100 happy_x_2 happy_x_1 = case happyOut55 happy_x_1 of { (HappyWrap55 happy_var_1) -> happyIn55 (BNFC.Abs.RPlus happy_var_1 )} happyReduce_101 = happySpecReduce_2 28# happyReduction_101 happyReduction_101 happy_x_2 happy_x_1 = case happyOut55 happy_x_1 of { (HappyWrap55 happy_var_1) -> happyIn55 (BNFC.Abs.ROpt happy_var_1 )} happyReduce_102 = happySpecReduce_1 28# happyReduction_102 happyReduction_102 happy_x_1 = happyIn55 (BNFC.Abs.REps ) happyReduce_103 = happySpecReduce_1 28# happyReduction_103 happyReduction_103 happy_x_1 = case happyOut27 happy_x_1 of { (HappyWrap27 happy_var_1) -> happyIn55 (BNFC.Abs.RChar happy_var_1 )} happyReduce_104 = happySpecReduce_3 28# happyReduction_104 happyReduction_104 happy_x_3 happy_x_2 happy_x_1 = case happyOut30 happy_x_2 of { (HappyWrap30 happy_var_2) -> happyIn55 (BNFC.Abs.RAlts happy_var_2 )} happyReduce_105 = happySpecReduce_3 28# happyReduction_105 happyReduction_105 happy_x_3 happy_x_2 happy_x_1 = case happyOut30 happy_x_2 of { (HappyWrap30 happy_var_2) -> happyIn55 (BNFC.Abs.RSeqs happy_var_2 )} happyReduce_106 = happySpecReduce_1 28# happyReduction_106 happyReduction_106 happy_x_1 = happyIn55 (BNFC.Abs.RDigit ) happyReduce_107 = happySpecReduce_1 28# happyReduction_107 happyReduction_107 happy_x_1 = happyIn55 (BNFC.Abs.RLetter ) happyReduce_108 = happySpecReduce_1 28# happyReduction_108 happyReduction_108 happy_x_1 = happyIn55 (BNFC.Abs.RUpper ) happyReduce_109 = happySpecReduce_1 28# happyReduction_109 happyReduction_109 happy_x_1 = happyIn55 (BNFC.Abs.RLower ) happyReduce_110 = happySpecReduce_1 28# happyReduction_110 happyReduction_110 happy_x_1 = happyIn55 (BNFC.Abs.RAny ) happyReduce_111 = happySpecReduce_3 28# happyReduction_111 happyReduction_111 happy_x_3 happy_x_2 happy_x_1 = case happyOut52 happy_x_2 of { (HappyWrap52 happy_var_2) -> happyIn55 (happy_var_2 )} happyNewToken action sts stk [] = happyDoAction 45# notHappyAtAll action sts stk [] happyNewToken action sts stk (tk:tks) = let cont i = happyDoAction i tk action sts stk tks in 
case tk of { PT _ (TS _ 1) -> cont 1#; PT _ (TS _ 2) -> cont 2#; PT _ (TS _ 3) -> cont 3#; PT _ (TS _ 4) -> cont 4#; PT _ (TS _ 5) -> cont 5#; PT _ (TS _ 6) -> cont 6#; PT _ (TS _ 7) -> cont 7#; PT _ (TS _ 8) -> cont 8#; PT _ (TS _ 9) -> cont 9#; PT _ (TS _ 10) -> cont 10#; PT _ (TS _ 11) -> cont 11#; PT _ (TS _ 12) -> cont 12#; PT _ (TS _ 13) -> cont 13#; PT _ (TS _ 14) -> cont 14#; PT _ (TS _ 15) -> cont 15#; PT _ (TS _ 16) -> cont 16#; PT _ (TS _ 17) -> cont 17#; PT _ (TS _ 18) -> cont 18#; PT _ (TS _ 19) -> cont 19#; PT _ (TS _ 20) -> cont 20#; PT _ (TS _ 21) -> cont 21#; PT _ (TS _ 22) -> cont 22#; PT _ (TS _ 23) -> cont 23#; PT _ (TS _ 24) -> cont 24#; PT _ (TS _ 25) -> cont 25#; PT _ (TS _ 26) -> cont 26#; PT _ (TS _ 27) -> cont 27#; PT _ (TS _ 28) -> cont 28#; PT _ (TS _ 29) -> cont 29#; PT _ (TS _ 30) -> cont 30#; PT _ (TS _ 31) -> cont 31#; PT _ (TS _ 32) -> cont 32#; PT _ (TS _ 33) -> cont 33#; PT _ (TS _ 34) -> cont 34#; PT _ (TS _ 35) -> cont 35#; PT _ (TS _ 36) -> cont 36#; PT _ (TS _ 37) -> cont 37#; PT _ (TS _ 38) -> cont 38#; PT _ (TS _ 39) -> cont 39#; PT _ (TC happy_dollar_dollar) -> cont 40#; PT _ (TD happy_dollar_dollar) -> cont 41#; PT _ (TI happy_dollar_dollar) -> cont 42#; PT _ (TL happy_dollar_dollar) -> cont 43#; PT _ (T_Identifier _) -> cont 44#; _ -> happyError' ((tk:tks), []) } happyError_ explist 45# tk tks = happyError' (tks, explist) happyError_ explist _ tk tks = happyError' ((tk:tks), explist) happyThen :: () => Err a -> (a -> Err b) -> Err b happyThen = ((>>=)) happyReturn :: () => a -> Err a happyReturn = (return) happyThen1 m k tks = ((>>=)) m (\a -> k a tks) happyReturn1 :: () => a -> b -> Err a happyReturn1 = \a tks -> (return) a happyError' :: () => ([(Token)], [Prelude.String]) -> Err a happyError' = (\(tokens, _) -> happyError tokens) pGrammar tks = happySomeParser where happySomeParser = happyThen (happyParse 0# tks) (\x -> happyReturn (let {(HappyWrap32 x') = happyOut32 x} in x')) pListDef tks = happySomeParser where happySomeParser = happyThen (happyParse 1# tks) (\x -> happyReturn (let {(HappyWrap33 x') = happyOut33 x} in x')) pDef tks = happySomeParser where happySomeParser = happyThen (happyParse 2# tks) (\x -> happyReturn (let {(HappyWrap34 x') = happyOut34 x} in x')) pItem tks = happySomeParser where happySomeParser = happyThen (happyParse 3# tks) (\x -> happyReturn (let {(HappyWrap35 x') = happyOut35 x} in x')) pListItem tks = happySomeParser where happySomeParser = happyThen (happyParse 4# tks) (\x -> happyReturn (let {(HappyWrap36 x') = happyOut36 x} in x')) pCat tks = happySomeParser where happySomeParser = happyThen (happyParse 5# tks) (\x -> happyReturn (let {(HappyWrap37 x') = happyOut37 x} in x')) pListCat tks = happySomeParser where happySomeParser = happyThen (happyParse 6# tks) (\x -> happyReturn (let {(HappyWrap38 x') = happyOut38 x} in x')) pLabel tks = happySomeParser where happySomeParser = happyThen (happyParse 7# tks) (\x -> happyReturn (let {(HappyWrap39 x') = happyOut39 x} in x')) pArg tks = happySomeParser where happySomeParser = happyThen (happyParse 8# tks) (\x -> happyReturn (let {(HappyWrap40 x') = happyOut40 x} in x')) pListArg tks = happySomeParser where happySomeParser = happyThen (happyParse 9# tks) (\x -> happyReturn (let {(HappyWrap41 x') = happyOut41 x} in x')) pSeparation tks = happySomeParser where happySomeParser = happyThen (happyParse 10# tks) (\x -> happyReturn (let {(HappyWrap42 x') = happyOut42 x} in x')) pListString tks = happySomeParser where happySomeParser = happyThen (happyParse 11# tks) (\x -> 
happyReturn (let {(HappyWrap43 x') = happyOut43 x} in x')) pExp tks = happySomeParser where happySomeParser = happyThen (happyParse 12# tks) (\x -> happyReturn (let {(HappyWrap44 x') = happyOut44 x} in x')) pExp1 tks = happySomeParser where happySomeParser = happyThen (happyParse 13# tks) (\x -> happyReturn (let {(HappyWrap45 x') = happyOut45 x} in x')) pExp2 tks = happySomeParser where happySomeParser = happyThen (happyParse 14# tks) (\x -> happyReturn (let {(HappyWrap46 x') = happyOut46 x} in x')) pListExp tks = happySomeParser where happySomeParser = happyThen (happyParse 15# tks) (\x -> happyReturn (let {(HappyWrap47 x') = happyOut47 x} in x')) pListExp2 tks = happySomeParser where happySomeParser = happyThen (happyParse 16# tks) (\x -> happyReturn (let {(HappyWrap48 x') = happyOut48 x} in x')) pRHS tks = happySomeParser where happySomeParser = happyThen (happyParse 17# tks) (\x -> happyReturn (let {(HappyWrap49 x') = happyOut49 x} in x')) pListRHS tks = happySomeParser where happySomeParser = happyThen (happyParse 18# tks) (\x -> happyReturn (let {(HappyWrap50 x') = happyOut50 x} in x')) pMinimumSize tks = happySomeParser where happySomeParser = happyThen (happyParse 19# tks) (\x -> happyReturn (let {(HappyWrap51 x') = happyOut51 x} in x')) pReg tks = happySomeParser where happySomeParser = happyThen (happyParse 20# tks) (\x -> happyReturn (let {(HappyWrap52 x') = happyOut52 x} in x')) pReg1 tks = happySomeParser where happySomeParser = happyThen (happyParse 21# tks) (\x -> happyReturn (let {(HappyWrap53 x') = happyOut53 x} in x')) pReg2 tks = happySomeParser where happySomeParser = happyThen (happyParse 22# tks) (\x -> happyReturn (let {(HappyWrap54 x') = happyOut54 x} in x')) pReg3 tks = happySomeParser where happySomeParser = happyThen (happyParse 23# tks) (\x -> happyReturn (let {(HappyWrap55 x') = happyOut55 x} in x')) happySeq = happyDontSeq type Err = Either String happyError :: [Token] -> Err a happyError ts = Left $ "syntax error at " ++ tokenPos ts ++ case ts of [] -> [] [Err _] -> " due to lexer error" t:_ -> " before `" ++ (prToken t) ++ "'" myLexer :: String -> [Token] myLexer = tokens #define HAPPY_ARRAY 1 #define HAPPY_GHC 1 #define HAPPY_COERCE 1 -- $Id: GenericTemplate.hs,v 1.26 2005/01/14 14:47:22 simonmar Exp $ #ifdef HAPPY_GHC # if !defined(__GLASGOW_HASKELL__) # error `HAPPY_GHC` is defined but this code isn't being built with GHC. # endif # define ILIT(n) n# # define IBOX(n) (Happy_GHC_Exts.I# (n)) # define FAST_INT Happy_GHC_Exts.Int# -- Do not remove this comment. Required to fix CPP parsing when using GCC and a clang-compiled alex. 
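-- Added commentary (not part of the happy template): the CPP macros in this
-- section let the same template code compile both with and without GHC.
-- Under HAPPY_GHC they expand to unboxed Int# primitives from Happy_GHC_Exts
-- (ILIT, IBOX, FAST_INT, LT, GTE, EQ, PLUS, MINUS, ...); otherwise they fall
-- back to ordinary boxed Prelude operations.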
# if __GLASGOW_HASKELL__ > 706 # define LT(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.<# m)) :: Prelude.Bool) # define GTE(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.>=# m)) :: Prelude.Bool) # define EQ(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.==# m)) :: Prelude.Bool) # else # define LT(n,m) (n Happy_GHC_Exts.<# m) # define GTE(n,m) (n Happy_GHC_Exts.>=# m) # define EQ(n,m) (n Happy_GHC_Exts.==# m) # endif # define PLUS(n,m) (n Happy_GHC_Exts.+# m) # define MINUS(n,m) (n Happy_GHC_Exts.-# m) # define TIMES(n,m) (n Happy_GHC_Exts.*# m) # define NEGATE(n) (Happy_GHC_Exts.negateInt# (n)) # define IF_GHC(x) (x) #else # define ILIT(n) (n) # define IBOX(n) (n) # define FAST_INT Prelude.Int # define LT(n,m) (n Prelude.< m) # define GTE(n,m) (n Prelude.>= m) # define EQ(n,m) (n Prelude.== m) # define PLUS(n,m) (n Prelude.+ m) # define MINUS(n,m) (n Prelude.- m) # define TIMES(n,m) (n Prelude.* m) # define NEGATE(n) (Prelude.negate (n)) # define IF_GHC(x) #endif data Happy_IntList = HappyCons FAST_INT Happy_IntList #if defined(HAPPY_ARRAY) # define CONS(h,t) (HappyCons (h) (t)) #else # define CONS(h,t) ((h):(t)) #endif #if defined(HAPPY_ARRAY) # define ERROR_TOK ILIT(0) # define DO_ACTION(state,i,tk,sts,stk) happyDoAction i tk state sts (stk) # define HAPPYSTATE(i) (i) # define GOTO(action) happyGoto # define IF_ARRAYS(x) (x) #else # define ERROR_TOK ILIT(1) # define DO_ACTION(state,i,tk,sts,stk) state i i tk HAPPYSTATE(state) sts (stk) # define HAPPYSTATE(i) (HappyState (i)) # define GOTO(action) action # define IF_ARRAYS(x) #endif #if defined(HAPPY_COERCE) # if !defined(HAPPY_GHC) # error `HAPPY_COERCE` requires `HAPPY_GHC` # endif # define GET_ERROR_TOKEN(x) (case Happy_GHC_Exts.unsafeCoerce# x of { IBOX(i) -> i }) # define MK_ERROR_TOKEN(i) (Happy_GHC_Exts.unsafeCoerce# IBOX(i)) # define MK_TOKEN(x) (happyInTok (x)) #else # define GET_ERROR_TOKEN(x) (case x of { HappyErrorToken IBOX(i) -> i }) # define MK_ERROR_TOKEN(i) (HappyErrorToken IBOX(i)) # define MK_TOKEN(x) (HappyTerminal (x)) #endif #if defined(HAPPY_DEBUG) # define DEBUG_TRACE(s) (happyTrace (s)) $ happyTrace string expr = Happy_System_IO_Unsafe.unsafePerformIO $ do Happy_System_IO.hPutStr Happy_System_IO.stderr string return expr #else # define DEBUG_TRACE(s) {- nothing -} #endif infixr 9 `HappyStk` data HappyStk a = HappyStk a (HappyStk a) ----------------------------------------------------------------------------- -- starting the parse happyParse start_state = happyNewToken start_state notHappyAtAll notHappyAtAll ----------------------------------------------------------------------------- -- Accepting the parse -- If the current token is ERROR_TOK, it means we've just accepted a partial -- parse (a %partial parser). We must ignore the saved token on the top of -- the stack in this case. 
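-- In the normal (non-error-token) case below, happyAccept simply returns the
-- single semantic value left on top of the stack; happyTcHack only exists to
-- convince the typechecker about the unused state arguments.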
happyAccept ERROR_TOK tk st sts (_ `HappyStk` ans `HappyStk` _) = happyReturn1 ans happyAccept j tk st sts (HappyStk ans _) = IF_GHC(happyTcHack j IF_ARRAYS(happyTcHack st)) (happyReturn1 ans) ----------------------------------------------------------------------------- -- Arrays only: do the next action #if defined(HAPPY_ARRAY) happyDoAction i tk st = DEBUG_TRACE("state: " ++ show IBOX(st) ++ ",\ttoken: " ++ show IBOX(i) ++ ",\taction: ") case action of ILIT(0) -> DEBUG_TRACE("fail.\n") happyFail (happyExpListPerState (IBOX(st) :: Prelude.Int)) i tk st ILIT(-1) -> DEBUG_TRACE("accept.\n") happyAccept i tk st n | LT(n,(ILIT(0) :: FAST_INT)) -> DEBUG_TRACE("reduce (rule " ++ show rule ++ ")") (happyReduceArr Happy_Data_Array.! rule) i tk st where rule = IBOX(NEGATE(PLUS(n,(ILIT(1) :: FAST_INT)))) n -> DEBUG_TRACE("shift, enter state " ++ show IBOX(new_state) ++ "\n") happyShift new_state i tk st where new_state = MINUS(n,(ILIT(1) :: FAST_INT)) where off = happyAdjustOffset (indexShortOffAddr happyActOffsets st) off_i = PLUS(off, i) check = if GTE(off_i,(ILIT(0) :: FAST_INT)) then EQ(indexShortOffAddr happyCheck off_i, i) else Prelude.False action | check = indexShortOffAddr happyTable off_i | Prelude.otherwise = indexShortOffAddr happyDefActions st #endif /* HAPPY_ARRAY */ #ifdef HAPPY_GHC indexShortOffAddr (HappyA# arr) off = Happy_GHC_Exts.narrow16Int# i where i = Happy_GHC_Exts.word2Int# (Happy_GHC_Exts.or# (Happy_GHC_Exts.uncheckedShiftL# high 8#) low) high = Happy_GHC_Exts.int2Word# (Happy_GHC_Exts.ord# (Happy_GHC_Exts.indexCharOffAddr# arr (off' Happy_GHC_Exts.+# 1#))) low = Happy_GHC_Exts.int2Word# (Happy_GHC_Exts.ord# (Happy_GHC_Exts.indexCharOffAddr# arr off')) off' = off Happy_GHC_Exts.*# 2# #else indexShortOffAddr arr off = arr Happy_Data_Array.! off #endif {-# INLINE happyLt #-} happyLt x y = LT(x,y) #ifdef HAPPY_GHC readArrayBit arr bit = Bits.testBit IBOX(indexShortOffAddr arr ((unbox_int bit) `Happy_GHC_Exts.iShiftRA#` 4#)) (bit `Prelude.mod` 16) where unbox_int (Happy_GHC_Exts.I# x) = x #else readArrayBit arr bit = Bits.testBit IBOX(indexShortOffAddr arr (bit `Prelude.div` 16)) (bit `Prelude.mod` 16) #endif #ifdef HAPPY_GHC data HappyAddr = HappyA# Happy_GHC_Exts.Addr# #endif ----------------------------------------------------------------------------- -- HappyState data type (not arrays) #if !defined(HAPPY_ARRAY) newtype HappyState b c = HappyState (FAST_INT -> -- token number FAST_INT -> -- token number (yes, again) b -> -- token semantic value HappyState b c -> -- current state [HappyState b c] -> -- state stack c) #endif ----------------------------------------------------------------------------- -- Shifting a token happyShift new_state ERROR_TOK tk st sts stk@(x `HappyStk` _) = let i = GET_ERROR_TOKEN(x) in -- trace "shifting the error token" $ DO_ACTION(new_state,i,tk,CONS(st,sts),stk) happyShift new_state i tk st sts stk = happyNewToken new_state CONS(st,sts) (MK_TOKEN(tk)`HappyStk`stk) -- happyReduce is specialised for the common cases. 
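-- Added note: happySpecReduce_0 .. happySpecReduce_3 below are the
-- specialised reducers for rules with at most three right-hand-side symbols;
-- they pattern-match the needed semantic values directly off the stack.
-- Rules with longer right-hand sides go through the generic happyReduce k,
-- which drops k - 1 states with happyDrop and hands the whole stack to the
-- reduction function.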
happySpecReduce_0 i fn ERROR_TOK tk st sts stk = happyFail [] ERROR_TOK tk st sts stk happySpecReduce_0 nt fn j tk st@(HAPPYSTATE(action)) sts stk = GOTO(action) nt j tk st CONS(st,sts) (fn `HappyStk` stk) happySpecReduce_1 i fn ERROR_TOK tk st sts stk = happyFail [] ERROR_TOK tk st sts stk happySpecReduce_1 nt fn j tk _ sts@(CONS(st@HAPPYSTATE(action),_)) (v1`HappyStk`stk') = let r = fn v1 in happySeq r (GOTO(action) nt j tk st sts (r `HappyStk` stk')) happySpecReduce_2 i fn ERROR_TOK tk st sts stk = happyFail [] ERROR_TOK tk st sts stk happySpecReduce_2 nt fn j tk _ CONS(_,sts@(CONS(st@HAPPYSTATE(action),_))) (v1`HappyStk`v2`HappyStk`stk') = let r = fn v1 v2 in happySeq r (GOTO(action) nt j tk st sts (r `HappyStk` stk')) happySpecReduce_3 i fn ERROR_TOK tk st sts stk = happyFail [] ERROR_TOK tk st sts stk happySpecReduce_3 nt fn j tk _ CONS(_,CONS(_,sts@(CONS(st@HAPPYSTATE(action),_)))) (v1`HappyStk`v2`HappyStk`v3`HappyStk`stk') = let r = fn v1 v2 v3 in happySeq r (GOTO(action) nt j tk st sts (r `HappyStk` stk')) happyReduce k i fn ERROR_TOK tk st sts stk = happyFail [] ERROR_TOK tk st sts stk happyReduce k nt fn j tk st sts stk = case happyDrop MINUS(k,(ILIT(1) :: FAST_INT)) sts of sts1@(CONS(st1@HAPPYSTATE(action),_)) -> let r = fn stk in -- it doesn't hurt to always seq here... happyDoSeq r (GOTO(action) nt j tk st1 sts1 r) happyMonadReduce k nt fn ERROR_TOK tk st sts stk = happyFail [] ERROR_TOK tk st sts stk happyMonadReduce k nt fn j tk st sts stk = case happyDrop k CONS(st,sts) of sts1@(CONS(st1@HAPPYSTATE(action),_)) -> let drop_stk = happyDropStk k stk in happyThen1 (fn stk tk) (\r -> GOTO(action) nt j tk st1 sts1 (r `HappyStk` drop_stk)) happyMonad2Reduce k nt fn ERROR_TOK tk st sts stk = happyFail [] ERROR_TOK tk st sts stk happyMonad2Reduce k nt fn j tk st sts stk = case happyDrop k CONS(st,sts) of sts1@(CONS(st1@HAPPYSTATE(action),_)) -> let drop_stk = happyDropStk k stk #if defined(HAPPY_ARRAY) off = happyAdjustOffset (indexShortOffAddr happyGotoOffsets st1) off_i = PLUS(off, nt) new_state = indexShortOffAddr happyTable off_i #else _ = nt :: FAST_INT new_state = action #endif in happyThen1 (fn stk tk) (\r -> happyNewToken new_state sts1 (r `HappyStk` drop_stk)) happyDrop ILIT(0) l = l happyDrop n CONS(_,t) = happyDrop MINUS(n,(ILIT(1) :: FAST_INT)) t happyDropStk ILIT(0) l = l happyDropStk n (x `HappyStk` xs) = happyDropStk MINUS(n,(ILIT(1)::FAST_INT)) xs ----------------------------------------------------------------------------- -- Moving to a new state after a reduction #if defined(HAPPY_ARRAY) happyGoto nt j tk st = DEBUG_TRACE(", goto state " ++ show IBOX(new_state) ++ "\n") happyDoAction j tk new_state where off = happyAdjustOffset (indexShortOffAddr happyGotoOffsets st) off_i = PLUS(off, nt) new_state = indexShortOffAddr happyTable off_i #else happyGoto action j tk st = action j j tk (HappyState action) #endif ----------------------------------------------------------------------------- -- Error recovery (ERROR_TOK is the error token) -- parse error if we are in recovery and we fail again happyFail explist ERROR_TOK tk old_st _ stk@(x `HappyStk` _) = let i = GET_ERROR_TOKEN(x) in -- trace "failing" $ happyError_ explist i tk {- We don't need state discarding for our restricted implementation of "error". 
In fact, it can cause some bogus parses, so I've disabled it for now --SDM -- discard a state happyFail ERROR_TOK tk old_st CONS(HAPPYSTATE(action),sts) (saved_tok `HappyStk` _ `HappyStk` stk) = -- trace ("discarding state, depth " ++ show (length stk)) $ DO_ACTION(action,ERROR_TOK,tk,sts,(saved_tok`HappyStk`stk)) -} -- Enter error recovery: generate an error token, -- save the old token and carry on. happyFail explist i tk HAPPYSTATE(action) sts stk = -- trace "entering error recovery" $ DO_ACTION(action,ERROR_TOK,tk,sts, MK_ERROR_TOKEN(i) `HappyStk` stk) -- Internal happy errors: notHappyAtAll :: a notHappyAtAll = Prelude.error "Internal Happy error\n" ----------------------------------------------------------------------------- -- Hack to get the typechecker to accept our action functions #if defined(HAPPY_GHC) happyTcHack :: Happy_GHC_Exts.Int# -> a -> a happyTcHack x y = y {-# INLINE happyTcHack #-} #endif ----------------------------------------------------------------------------- -- Seq-ing. If the --strict flag is given, then Happy emits -- happySeq = happyDoSeq -- otherwise it emits -- happySeq = happyDontSeq happyDoSeq, happyDontSeq :: a -> b -> b happyDoSeq a b = a `Prelude.seq` b happyDontSeq a b = b ----------------------------------------------------------------------------- -- Don't inline any functions from the template. GHC has a nasty habit -- of deciding to inline happyGoto everywhere, which increases the size of -- the generated parser quite a bit. #if defined(HAPPY_ARRAY) {-# NOINLINE happyDoAction #-} {-# NOINLINE happyTable #-} {-# NOINLINE happyCheck #-} {-# NOINLINE happyActOffsets #-} {-# NOINLINE happyGotoOffsets #-} {-# NOINLINE happyDefActions #-} #endif {-# NOINLINE happyShift #-} {-# NOINLINE happySpecReduce_0 #-} {-# NOINLINE happySpecReduce_1 #-} {-# NOINLINE happySpecReduce_2 #-} {-# NOINLINE happySpecReduce_3 #-} {-# NOINLINE happyReduce #-} {-# NOINLINE happyMonadReduce #-} {-# NOINLINE happyGoto #-} {-# NOINLINE happyFail #-} -- end of Happy Template. BNFC-2.9.5/src/BNFC/Par.y0000644000000000000000000001671407346545000012654 0ustar0000000000000000-- -*- haskell -*- File generated by the BNF Converter (bnfc 2.9.3). -- Parser definition for use with Happy { {-# OPTIONS_GHC -fno-warn-incomplete-patterns -fno-warn-overlapping-patterns #-} {-# LANGUAGE PatternSynonyms #-} module BNFC.Par ( happyError , myLexer , pGrammar , pListDef , pDef , pItem , pListItem , pCat , pListCat , pLabel , pArg , pListArg , pSeparation , pListString , pExp , pExp1 , pExp2 , pListExp , pListExp2 , pRHS , pListRHS , pMinimumSize , pReg , pReg1 , pReg2 , pReg3 ) where import Prelude import qualified BNFC.Abs import BNFC.Lex } %name pGrammar Grammar %name pListDef ListDef %name pDef Def %name pItem Item %name pListItem ListItem %name pCat Cat %name pListCat ListCat %name pLabel Label %name pArg Arg %name pListArg ListArg %name pSeparation Separation %name pListString ListString %name pExp Exp %name pExp1 Exp1 %name pExp2 Exp2 %name pListExp ListExp %name pListExp2 ListExp2 %name pRHS RHS %name pListRHS ListRHS %name pMinimumSize MinimumSize %name pReg Reg %name pReg1 Reg1 %name pReg2 Reg2 %name pReg3 Reg3 -- no lexer declaration %monad { Err } { (>>=) } { return } %tokentype {Token} %token '(' { PT _ (TS _ 1) } ')' { PT _ (TS _ 2) } '*' { PT _ (TS _ 3) } '+' { PT _ (TS _ 4) } ',' { PT _ (TS _ 5) } '-' { PT _ (TS _ 6) } '.' { PT _ (TS _ 7) } ':' { PT _ (TS _ 8) } '::=' { PT _ (TS _ 9) } ';' { PT _ (TS _ 10) } '=' { PT _ (TS _ 11) } '?' 
{ PT _ (TS _ 12) } '[' { PT _ (TS _ 13) } ']' { PT _ (TS _ 14) } '_' { PT _ (TS _ 15) } 'char' { PT _ (TS _ 16) } 'coercions' { PT _ (TS _ 17) } 'comment' { PT _ (TS _ 18) } 'define' { PT _ (TS _ 19) } 'delimiters' { PT _ (TS _ 20) } 'digit' { PT _ (TS _ 21) } 'entrypoints' { PT _ (TS _ 22) } 'eps' { PT _ (TS _ 23) } 'internal' { PT _ (TS _ 24) } 'layout' { PT _ (TS _ 25) } 'letter' { PT _ (TS _ 26) } 'lower' { PT _ (TS _ 27) } 'nonempty' { PT _ (TS _ 28) } 'position' { PT _ (TS _ 29) } 'rules' { PT _ (TS _ 30) } 'separator' { PT _ (TS _ 31) } 'stop' { PT _ (TS _ 32) } 'terminator' { PT _ (TS _ 33) } 'token' { PT _ (TS _ 34) } 'toplevel' { PT _ (TS _ 35) } 'upper' { PT _ (TS _ 36) } '{' { PT _ (TS _ 37) } '|' { PT _ (TS _ 38) } '}' { PT _ (TS _ 39) } L_charac { PT _ (TC $$) } L_doubl { PT _ (TD $$) } L_integ { PT _ (TI $$) } L_quoted { PT _ (TL $$) } L_Identifier { PT _ (T_Identifier _) } %% Char :: { Char } Char : L_charac { (read $1) :: Char } Double :: { Double } Double : L_doubl { (read $1) :: Double } Integer :: { Integer } Integer : L_integ { (read $1) :: Integer } String :: { String } String : L_quoted { $1 } Identifier :: { BNFC.Abs.Identifier } Identifier : L_Identifier { BNFC.Abs.Identifier (mkPosToken $1) } Grammar :: { BNFC.Abs.Grammar } Grammar : ListDef { BNFC.Abs.Grammar $1 } ListDef :: { [BNFC.Abs.Def] } ListDef : {- empty -} { [] } | Def { (:[]) $1 } | Def ';' ListDef { (:) $1 $3 } | ';' ListDef { $2 } Def :: { BNFC.Abs.Def } Def : Label '.' Cat '::=' ListItem { BNFC.Abs.Rule $1 $3 $5 } | 'comment' String { BNFC.Abs.Comment $2 } | 'comment' String String { BNFC.Abs.Comments $2 $3 } | 'internal' Label '.' Cat '::=' ListItem { BNFC.Abs.Internal $2 $4 $6 } | 'token' Identifier Reg { BNFC.Abs.Token $2 $3 } | 'position' 'token' Identifier Reg { BNFC.Abs.PosToken $3 $4 } | 'entrypoints' ListCat { BNFC.Abs.Entryp $2 } | 'separator' MinimumSize Cat String { BNFC.Abs.Separator $2 $3 $4 } | 'terminator' MinimumSize Cat String { BNFC.Abs.Terminator $2 $3 $4 } | 'delimiters' Cat String String Separation MinimumSize { BNFC.Abs.Delimiters $2 $3 $4 $5 $6 } | 'coercions' Identifier Integer { BNFC.Abs.Coercions $2 $3 } | 'rules' Identifier '::=' ListRHS { BNFC.Abs.Rules $2 $4 } | 'define' Identifier ListArg '=' Exp { BNFC.Abs.Function $2 $3 $5 } | 'layout' ListString { BNFC.Abs.Layout $2 } | 'layout' 'stop' ListString { BNFC.Abs.LayoutStop $3 } | 'layout' 'toplevel' { BNFC.Abs.LayoutTop } Item :: { BNFC.Abs.Item } Item : String { BNFC.Abs.Terminal $1 } | Cat { BNFC.Abs.NTerminal $1 } ListItem :: { [BNFC.Abs.Item] } ListItem : {- empty -} { [] } | Item ListItem { (:) $1 $2 } Cat :: { BNFC.Abs.Cat } Cat : '[' Cat ']' { BNFC.Abs.ListCat $2 } | Identifier { BNFC.Abs.IdCat $1 } ListCat :: { [BNFC.Abs.Cat] } ListCat : {- empty -} { [] } | Cat { (:[]) $1 } | Cat ',' ListCat { (:) $1 $3 } Label :: { BNFC.Abs.Label } Label : Identifier { BNFC.Abs.Id $1 } | '_' { BNFC.Abs.Wild } | '[' ']' { BNFC.Abs.ListE } | '(' ':' ')' { BNFC.Abs.ListCons } | '(' ':' '[' ']' ')' { BNFC.Abs.ListOne } Arg :: { BNFC.Abs.Arg } Arg : Identifier { BNFC.Abs.Arg $1 } ListArg :: { [BNFC.Abs.Arg] } ListArg : {- empty -} { [] } | Arg ListArg { (:) $1 $2 } Separation :: { BNFC.Abs.Separation } Separation : {- empty -} { BNFC.Abs.SepNone } | 'terminator' String { BNFC.Abs.SepTerm $2 } | 'separator' String { BNFC.Abs.SepSepar $2 } ListString :: { [String] } ListString : String { (:[]) $1 } | String ',' ListString { (:) $1 $3 } Exp :: { BNFC.Abs.Exp } Exp : Exp1 ':' Exp { BNFC.Abs.Cons $1 $3 } | Exp1 { $1 } Exp1 :: { 
BNFC.Abs.Exp } Exp1 : Identifier ListExp2 { BNFC.Abs.App $1 $2 } | Exp2 { $1 } Exp2 :: { BNFC.Abs.Exp } Exp2 : Identifier { BNFC.Abs.Var $1 } | Integer { BNFC.Abs.LitInt $1 } | Char { BNFC.Abs.LitChar $1 } | String { BNFC.Abs.LitString $1 } | Double { BNFC.Abs.LitDouble $1 } | '[' ListExp ']' { BNFC.Abs.List $2 } | '(' Exp ')' { $2 } ListExp :: { [BNFC.Abs.Exp] } ListExp : {- empty -} { [] } | Exp { (:[]) $1 } | Exp ',' ListExp { (:) $1 $3 } ListExp2 :: { [BNFC.Abs.Exp] } ListExp2 : Exp2 { (:[]) $1 } | Exp2 ListExp2 { (:) $1 $2 } RHS :: { BNFC.Abs.RHS } RHS : ListItem { BNFC.Abs.RHS $1 } ListRHS :: { [BNFC.Abs.RHS] } ListRHS : RHS { (:[]) $1 } | RHS '|' ListRHS { (:) $1 $3 } MinimumSize :: { BNFC.Abs.MinimumSize } MinimumSize : 'nonempty' { BNFC.Abs.MNonempty } | {- empty -} { BNFC.Abs.MEmpty } Reg :: { BNFC.Abs.Reg } Reg : Reg '|' Reg1 { BNFC.Abs.RAlt $1 $3 } | Reg1 { $1 } Reg1 :: { BNFC.Abs.Reg } Reg1 : Reg1 '-' Reg2 { BNFC.Abs.RMinus $1 $3 } | Reg2 { $1 } Reg2 :: { BNFC.Abs.Reg } Reg2 : Reg2 Reg3 { BNFC.Abs.RSeq $1 $2 } | Reg3 { $1 } Reg3 :: { BNFC.Abs.Reg } Reg3 : Reg3 '*' { BNFC.Abs.RStar $1 } | Reg3 '+' { BNFC.Abs.RPlus $1 } | Reg3 '?' { BNFC.Abs.ROpt $1 } | 'eps' { BNFC.Abs.REps } | Char { BNFC.Abs.RChar $1 } | '[' String ']' { BNFC.Abs.RAlts $2 } | '{' String '}' { BNFC.Abs.RSeqs $2 } | 'digit' { BNFC.Abs.RDigit } | 'letter' { BNFC.Abs.RLetter } | 'upper' { BNFC.Abs.RUpper } | 'lower' { BNFC.Abs.RLower } | 'char' { BNFC.Abs.RAny } | '(' Reg ')' { $2 } { type Err = Either String happyError :: [Token] -> Err a happyError ts = Left $ "syntax error at " ++ tokenPos ts ++ case ts of [] -> [] [Err _] -> " due to lexer error" t:_ -> " before `" ++ (prToken t) ++ "'" myLexer :: String -> [Token] myLexer = tokens } BNFC-2.9.5/src/BNFC/PrettyPrint.hs0000644000000000000000000000546307346545000014577 0ustar0000000000000000{-# LANGUAGE DefaultSignatures #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE LambdaCase #-} {-# LANGUAGE OverloadedStrings #-} -- | Extends ''Text.PrettyPrint''. module BNFC.PrettyPrint ( module Text.PrettyPrint , module BNFC.PrettyPrint ) where import Text.PrettyPrint -- | Overloaded function 'pretty'. class Pretty a where pretty :: a -> Doc prettyPrec :: Int -> a -> Doc {-# MINIMAL pretty | prettyPrec #-} pretty = prettyPrec 0 prettyPrec _ = pretty instance Pretty Int where pretty = text . show instance Pretty Integer where pretty = text . show instance Pretty String where pretty = text -- | Render as 'String'. prettyShow :: Pretty a => a -> String prettyShow = render . pretty -- | Put 'parens' around document if given condition is true. -- -- >>> parensIf True "foo" -- (foo) -- -- >>> parensIf False "bar" -- bar -- parensIf :: Bool -> Doc -> Doc parensIf = \case True -> parens False -> id -- | Separate vertically by a blank line. -- -- >>> "foo" $++$ "bar" -- foo -- -- bar -- -- >>> "foo" $++$ empty -- foo -- ($++$) :: Doc -> Doc -> Doc d $++$ d' | isEmpty d = d' | isEmpty d' = d | otherwise = d $+$ "" $+$ d' -- | List version of '$++$'. -- -- >>> vsep [ "foo", nest 4 "bar" ] -- foo -- -- bar -- -- >>> vsep [] -- -- vsep :: [Doc] -> Doc vsep = foldl ($++$) empty -- | List version of 'PrettyPrint.$+$'. -- -- >>> vcat' [text "abc", nest 4 (text "def")] -- abc -- def -- vcat' :: [Doc] -> Doc vcat' = foldl ($+$) empty -- | Pretty print separator with = (for assignments...). 
-- -- >>> "a" <=> "123" -- a = 123 -- (<=>) :: Doc -> Doc -> Doc a <=> b = a <+> text "=" <+> b -- | Print a list of 0-1 elements on the same line as some preamble -- and from 2 elements on the following lines, indented. -- -- >>> prettyList 2 ("foo" <+> equals) lbrack rbrack comma [] -- foo = [] -- >>> prettyList 2 ("foo" <+> equals) lbrack rbrack comma [ "a" ] -- foo = [a] -- >>> prettyList 2 ("foo" <+> equals) lbrack rbrack comma [ "a", "b" ] -- foo = -- [ a -- , b -- ] -- -- Used in the Agda backend. prettyList :: Int -- ^ Indentation. -> Doc -- ^ Preamble. -> Doc -- ^ Left parenthesis. -> Doc -- ^ Right parenthesis. -> Doc -- ^ Separator (usually not including spaces). -> [Doc] -- ^ List item. -> Doc prettyList n pre lpar rpar sepa = \case [] -> pre <+> hcat [ lpar, rpar ] [d] -> pre <+> hcat [ lpar, d, rpar ] (d:ds) -> vcat . (pre :) . map (nest n) . concat $ [ [ lpar <+> d ] , map (sepa <+>) ds , [ rpar ] ] -- | Code block. A block of C/Java code, surrounded by {} and indented. -- -- >>> codeblock 4 ["abc", "def"] -- { -- abc -- def -- } -- -- Used in the C backend. codeblock :: Int -> [Doc] -> Doc codeblock indent code = lbrace $+$ nest indent (vcat code) $+$ rbrace BNFC-2.9.5/src/BNFC/Print.hs0000644000000000000000000002532607346545000013367 0ustar0000000000000000-- File generated by the BNF Converter (bnfc 2.9.3). {-# LANGUAGE CPP #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE LambdaCase #-} #if __GLASGOW_HASKELL__ <= 708 {-# LANGUAGE OverlappingInstances #-} #endif {-# OPTIONS_GHC -fno-warn-incomplete-patterns #-} -- | Pretty-printer for BNFC. module BNFC.Print where import Prelude ( ($), (.) , Bool(..), (==), (<) , Int, Integer, Double, (+), (-), (*) , String, (++) , ShowS, showChar, showString , all, elem, foldr, id, map, null, replicate, shows, span ) import Data.Char ( Char, isSpace ) import qualified BNFC.Abs -- | The top-level printing method. printTree :: Print a => a -> String printTree = render . prt 0 type Doc = [ShowS] -> [ShowS] doc :: ShowS -> Doc doc = (:) render :: Doc -> String render d = rend 0 False (map ($ "") $ d []) "" where rend :: Int -- ^ Indentation level. -> Bool -- ^ Pending indentation to be output before next character? -> [String] -> ShowS rend i p = \case "[" :ts -> char '[' . rend i False ts "(" :ts -> char '(' . rend i False ts "{" :ts -> onNewLine i p . showChar '{' . new (i+1) ts "}" : ";":ts -> onNewLine (i-1) p . showString "};" . new (i-1) ts "}" :ts -> onNewLine (i-1) p . showChar '}' . new (i-1) ts [";"] -> char ';' ";" :ts -> char ';' . new i ts t : ts@(s:_) | closingOrPunctuation s -> pending . showString t . rend i False ts t :ts -> pending . space t . rend i False ts [] -> id where -- Output character after pending indentation. char :: Char -> ShowS char c = pending . showChar c -- Output pending indentation. pending :: ShowS pending = if p then indent i else id -- Indentation (spaces) for given indentation level. indent :: Int -> ShowS indent i = replicateS (2*i) (showChar ' ') -- Continue rendering in new line with new indentation. new :: Int -> [String] -> ShowS new j ts = showChar '\n' . rend j True ts -- Make sure we are on a fresh line. onNewLine :: Int -> Bool -> ShowS onNewLine i p = (if p then id else showChar '\n') . indent i -- Separate given string from following text by a space (if needed). 
space :: String -> ShowS space t s = case (all isSpace t', null spc, null rest) of (True , _ , True ) -> [] -- remove trailing space (False, _ , True ) -> t' -- remove trailing space (False, True, False) -> t' ++ ' ' : s -- add space if none _ -> t' ++ s where t' = showString t [] (spc, rest) = span isSpace s closingOrPunctuation :: String -> Bool closingOrPunctuation [c] = c `elem` closerOrPunct closingOrPunctuation _ = False closerOrPunct :: String closerOrPunct = ")],;" parenth :: Doc -> Doc parenth ss = doc (showChar '(') . ss . doc (showChar ')') concatS :: [ShowS] -> ShowS concatS = foldr (.) id concatD :: [Doc] -> Doc concatD = foldr (.) id replicateS :: Int -> ShowS -> ShowS replicateS n f = concatS (replicate n f) -- | The printer class does the job. class Print a where prt :: Int -> a -> Doc instance {-# OVERLAPPABLE #-} Print a => Print [a] where prt i = concatD . map (prt i) instance Print Char where prt _ c = doc (showChar '\'' . mkEsc '\'' c . showChar '\'') instance Print String where prt _ = printString printString :: String -> Doc printString s = doc (showChar '"' . concatS (map (mkEsc '"') s) . showChar '"') mkEsc :: Char -> Char -> ShowS mkEsc q = \case s | s == q -> showChar '\\' . showChar s '\\' -> showString "\\\\" '\n' -> showString "\\n" '\t' -> showString "\\t" s -> showChar s prPrec :: Int -> Int -> Doc -> Doc prPrec i j = if j < i then parenth else id instance Print Integer where prt _ x = doc (shows x) instance Print Double where prt _ x = doc (shows x) instance Print BNFC.Abs.Identifier where prt _ (BNFC.Abs.Identifier (_,i)) = doc $ showString i instance Print BNFC.Abs.Grammar where prt i = \case BNFC.Abs.Grammar defs -> prPrec i 0 (concatD [prt 0 defs]) instance Print [BNFC.Abs.Def] where prt _ [] = concatD [] prt _ [x] = concatD [prt 0 x] prt _ (x:xs) = concatD [prt 0 x, doc (showString ";"), prt 0 xs] instance Print BNFC.Abs.Def where prt i = \case BNFC.Abs.Rule label cat items -> prPrec i 0 (concatD [prt 0 label, doc (showString "."), prt 0 cat, doc (showString "::="), prt 0 items]) BNFC.Abs.Comment str -> prPrec i 0 (concatD [doc (showString "comment"), printString str]) BNFC.Abs.Comments str1 str2 -> prPrec i 0 (concatD [doc (showString "comment"), printString str1, printString str2]) BNFC.Abs.Internal label cat items -> prPrec i 0 (concatD [doc (showString "internal"), prt 0 label, doc (showString "."), prt 0 cat, doc (showString "::="), prt 0 items]) BNFC.Abs.Token identifier reg -> prPrec i 0 (concatD [doc (showString "token"), prt 0 identifier, prt 0 reg]) BNFC.Abs.PosToken identifier reg -> prPrec i 0 (concatD [doc (showString "position"), doc (showString "token"), prt 0 identifier, prt 0 reg]) BNFC.Abs.Entryp cats -> prPrec i 0 (concatD [doc (showString "entrypoints"), prt 0 cats]) BNFC.Abs.Separator minimumsize cat str -> prPrec i 0 (concatD [doc (showString "separator"), prt 0 minimumsize, prt 0 cat, printString str]) BNFC.Abs.Terminator minimumsize cat str -> prPrec i 0 (concatD [doc (showString "terminator"), prt 0 minimumsize, prt 0 cat, printString str]) BNFC.Abs.Delimiters cat str1 str2 separation minimumsize -> prPrec i 0 (concatD [doc (showString "delimiters"), prt 0 cat, printString str1, printString str2, prt 0 separation, prt 0 minimumsize]) BNFC.Abs.Coercions identifier n -> prPrec i 0 (concatD [doc (showString "coercions"), prt 0 identifier, prt 0 n]) BNFC.Abs.Rules identifier rhss -> prPrec i 0 (concatD [doc (showString "rules"), prt 0 identifier, doc (showString "::="), prt 0 rhss]) BNFC.Abs.Function identifier args exp -> prPrec i 
0 (concatD [doc (showString "define"), prt 0 identifier, prt 0 args, doc (showString "="), prt 0 exp]) BNFC.Abs.Layout strs -> prPrec i 0 (concatD [doc (showString "layout"), prt 0 strs]) BNFC.Abs.LayoutStop strs -> prPrec i 0 (concatD [doc (showString "layout"), doc (showString "stop"), prt 0 strs]) BNFC.Abs.LayoutTop -> prPrec i 0 (concatD [doc (showString "layout"), doc (showString "toplevel")]) instance Print BNFC.Abs.Item where prt i = \case BNFC.Abs.Terminal str -> prPrec i 0 (concatD [printString str]) BNFC.Abs.NTerminal cat -> prPrec i 0 (concatD [prt 0 cat]) instance Print [BNFC.Abs.Item] where prt _ [] = concatD [] prt _ (x:xs) = concatD [prt 0 x, prt 0 xs] instance Print BNFC.Abs.Cat where prt i = \case BNFC.Abs.ListCat cat -> prPrec i 0 (concatD [doc (showString "["), prt 0 cat, doc (showString "]")]) BNFC.Abs.IdCat identifier -> prPrec i 0 (concatD [prt 0 identifier]) instance Print [BNFC.Abs.Cat] where prt _ [] = concatD [] prt _ [x] = concatD [prt 0 x] prt _ (x:xs) = concatD [prt 0 x, doc (showString ","), prt 0 xs] instance Print BNFC.Abs.Label where prt i = \case BNFC.Abs.Id identifier -> prPrec i 0 (concatD [prt 0 identifier]) BNFC.Abs.Wild -> prPrec i 0 (concatD [doc (showString "_")]) BNFC.Abs.ListE -> prPrec i 0 (concatD [doc (showString "["), doc (showString "]")]) BNFC.Abs.ListCons -> prPrec i 0 (concatD [doc (showString "("), doc (showString ":"), doc (showString ")")]) BNFC.Abs.ListOne -> prPrec i 0 (concatD [doc (showString "("), doc (showString ":"), doc (showString "["), doc (showString "]"), doc (showString ")")]) instance Print BNFC.Abs.Arg where prt i = \case BNFC.Abs.Arg identifier -> prPrec i 0 (concatD [prt 0 identifier]) instance Print [BNFC.Abs.Arg] where prt _ [] = concatD [] prt _ (x:xs) = concatD [prt 0 x, prt 0 xs] instance Print BNFC.Abs.Separation where prt i = \case BNFC.Abs.SepNone -> prPrec i 0 (concatD []) BNFC.Abs.SepTerm str -> prPrec i 0 (concatD [doc (showString "terminator"), printString str]) BNFC.Abs.SepSepar str -> prPrec i 0 (concatD [doc (showString "separator"), printString str]) instance Print [String] where prt _ [x] = concatD [printString x] prt _ (x:xs) = concatD [printString x, doc (showString ","), prt 0 xs] instance Print BNFC.Abs.Exp where prt i = \case BNFC.Abs.Cons exp1 exp2 -> prPrec i 0 (concatD [prt 1 exp1, doc (showString ":"), prt 0 exp2]) BNFC.Abs.App identifier exps -> prPrec i 1 (concatD [prt 0 identifier, prt 2 exps]) BNFC.Abs.Var identifier -> prPrec i 2 (concatD [prt 0 identifier]) BNFC.Abs.LitInt n -> prPrec i 2 (concatD [prt 0 n]) BNFC.Abs.LitChar c -> prPrec i 2 (concatD [prt 0 c]) BNFC.Abs.LitString str -> prPrec i 2 (concatD [printString str]) BNFC.Abs.LitDouble d -> prPrec i 2 (concatD [prt 0 d]) BNFC.Abs.List exps -> prPrec i 2 (concatD [doc (showString "["), prt 0 exps, doc (showString "]")]) instance Print [BNFC.Abs.Exp] where prt 2 [x] = concatD [prt 2 x] prt 2 (x:xs) = concatD [prt 2 x, prt 2 xs] prt _ [] = concatD [] prt _ [x] = concatD [prt 0 x] prt _ (x:xs) = concatD [prt 0 x, doc (showString ","), prt 0 xs] instance Print BNFC.Abs.RHS where prt i = \case BNFC.Abs.RHS items -> prPrec i 0 (concatD [prt 0 items]) instance Print [BNFC.Abs.RHS] where prt _ [x] = concatD [prt 0 x] prt _ (x:xs) = concatD [prt 0 x, doc (showString "|"), prt 0 xs] instance Print BNFC.Abs.MinimumSize where prt i = \case BNFC.Abs.MNonempty -> prPrec i 0 (concatD [doc (showString "nonempty")]) BNFC.Abs.MEmpty -> prPrec i 0 (concatD []) instance Print BNFC.Abs.Reg where prt i = \case BNFC.Abs.RAlt reg1 reg2 -> prPrec i 0 
(concatD [prt 0 reg1, doc (showString "|"), prt 1 reg2]) BNFC.Abs.RMinus reg1 reg2 -> prPrec i 1 (concatD [prt 1 reg1, doc (showString "-"), prt 2 reg2]) BNFC.Abs.RSeq reg1 reg2 -> prPrec i 2 (concatD [prt 2 reg1, prt 3 reg2]) BNFC.Abs.RStar reg -> prPrec i 3 (concatD [prt 3 reg, doc (showString "*")]) BNFC.Abs.RPlus reg -> prPrec i 3 (concatD [prt 3 reg, doc (showString "+")]) BNFC.Abs.ROpt reg -> prPrec i 3 (concatD [prt 3 reg, doc (showString "?")]) BNFC.Abs.REps -> prPrec i 3 (concatD [doc (showString "eps")]) BNFC.Abs.RChar c -> prPrec i 3 (concatD [prt 0 c]) BNFC.Abs.RAlts str -> prPrec i 3 (concatD [doc (showString "["), printString str, doc (showString "]")]) BNFC.Abs.RSeqs str -> prPrec i 3 (concatD [doc (showString "{"), printString str, doc (showString "}")]) BNFC.Abs.RDigit -> prPrec i 3 (concatD [doc (showString "digit")]) BNFC.Abs.RLetter -> prPrec i 3 (concatD [doc (showString "letter")]) BNFC.Abs.RUpper -> prPrec i 3 (concatD [doc (showString "upper")]) BNFC.Abs.RLower -> prPrec i 3 (concatD [doc (showString "lower")]) BNFC.Abs.RAny -> prPrec i 3 (concatD [doc (showString "char")]) BNFC-2.9.5/src/BNFC/Regex.hs0000644000000000000000000002311007346545000013332 0ustar0000000000000000{-# LANGUAGE CPP #-} {-# LANGUAGE LambdaCase #-} {-# LANGUAGE PatternGuards #-} -- | Tools to manipulate regular expressions. module BNFC.Regex ( nullable, simpReg ) where #if !MIN_VERSION_base(4,11,0) import Data.Semigroup (Semigroup(..)) #endif import Data.Set (Set) import qualified Data.Set as Set import qualified Data.List as List import BNFC.Abs -- | Check if a regular expression is nullable (accepts the empty string) nullable :: Reg -> Bool nullable = \case RSeq r1 r2 -> nullable r1 && nullable r2 RAlt r1 r2 -> nullable r1 || nullable r2 RMinus r1 r2 -> nullable r1 && not (nullable r2) RStar _ -> True RPlus r1 -> nullable r1 ROpt _ -> True REps -> True RChar _ -> False RAlts _ -> False RSeqs s -> null s RDigit -> False RLetter -> False RUpper -> False RLower -> False RAny -> False -- | Simplification of regular expression, mostly for the purpose -- of simplifying character alternatives (character classes). -- -- This may help lexer backends, since often lexer generators -- have a limited syntax for character classes. -- simpReg :: Reg -> Reg simpReg = rloop where rloop = rx . loop loop :: Reg -> RC loop = \case -- Definitely not character classes: -- RSeq r1 r2 -> Rx $ rloop r1 `rSeq` rloop r2 RStar r -> Rx $ rStar $ rloop r RPlus r -> Rx $ rPlus $ rloop r ROpt r -> Rx $ rOpt $ rloop r REps -> Rx $ REps RSeqs [] -> Rx $ REps RSeqs s@(_:_:_) -> Rx $ RSeqs s -- Possibly character classes: RSeq r1 r2 -> loop r1 `rcSeq` loop r2 RAlt r1 r2 -> loop r1 `rcAlt` loop r2 RMinus r1 r2 -> loop r1 `rcMinus` loop r2 -- Definitely character classes: RSeqs [c] -> CC $ cChar c RChar c -> CC $ cChar c RAlts s -> CC $ cAlts s RDigit -> CC $ cDigit RLetter -> CC $ cLetter RUpper -> CC $ cUpper RLower -> CC $ cLower RAny -> CC $ cAny -- | Character classes are regular expressions that recognize -- character sequences of length exactly one. These are often -- distinguished from arbitrary regular expressions in lexer -- generators, e.g. in @alex@. -- -- We represent character classes as a difference of unions of atomic -- character classes. data CharClass = CMinus { _ccYes, _ccNo :: CharClassUnion } deriving (Eq, Ord, Show) data CharClassUnion = CAny -- ^ Any character. | CAlt (Set CharClassAtom) -- ^ Any of the given (≥0) alternatives. 
deriving (Eq, Ord, Show) data CharClassAtom = CChar Char -- ^ A single character. | CDigit -- ^ @0-9@. | CLower -- ^ Lower case character. | CUpper -- ^ Upper case character. deriving (Eq, Ord, Show) -- -- | Regular expressions are constructed over character classes. -- -- -- -- We do not simplify at the level of regular expressions; -- -- this is left to the backend. -- data Regex -- = RxAlt Reg Reg -- | RxMinus Reg Reg -- | RxSeq Reg Reg -- | RxStar Reg -- | RxPlus Reg -- | RxOpt Reg -- | RxEps -- | RxChar CharClass -- deriving (Eq, Ord, Show) -- | A regular expression that might be a character class. data RC = Rx Reg | CC CharClass -- * Smart constructors for regular expressions. rSeq :: Reg -> Reg -> Reg rSeq = curry $ \case -- 0r = 0 (RAlts "", _ ) -> RAlts "" (_ , RAlts "") -> RAlts "" -- 1r = r (REps , r ) -> r (RSeqs "", r ) -> r (r , REps ) -> r (r , RSeqs "") -> r -- r*r* = r* (RStar r1, RStar r2) | r1 == r2 -> rStar r1 -- r+r* = r*r+ = r+ (RPlus r1, RStar r2) | r1 == r2 -> rPlus r1 (RStar r1, RPlus r2) | r1 == r2 -> rPlus r1 -- rr* = r*r = r+ (r1 , RStar r2) | r1 == r2 -> rPlus r1 (RStar r1, r2 ) | r1 == r2 -> rPlus r1 -- character sequences (RSeqs s1, RSeqs s2) -> RSeqs $ s1 ++ s2 (RChar c1, RSeqs s2) -> RSeqs $ c1 : s2 (RSeqs s1, RChar c2) -> RSeqs $ s1 ++ [c2] (RChar c1, RChar c2) -> RSeqs [ c1, c2 ] -- Associate to the left (r1 , RSeq r2 r3) -> (r1 `rSeq` r2) `rSeq` r3 -- general sequences (r1 , r2 ) -> r1 `RSeq` r2 rAlt :: Reg -> Reg -> Reg rAlt = curry $ \case -- 0 + r = r (RAlts "", r ) -> r -- r + 0 = r (r , RAlts "") -> r -- join character alternatives (RAlts s1, RAlts s2) -> RAlts $ s1 ++ s2 (RChar c1, RAlts s2) -> RAlts $ c1 : s2 (RAlts s1, RChar c2) -> RAlts $ s1 ++ [c2] (RChar c1, RChar c2) -> RAlts [ c1, c2 ] -- Associate to the left (r1 , RAlt r2 r3) -> (r1 `rAlt` r2) `rAlt` r3 -- general alternatives (r1, r2) | r1 == r2 -> r1 -- idempotency, but not the general case | otherwise -> r1 `RAlt` r2 rMinus :: Reg -> Reg -> Reg rMinus = curry $ \case -- 0 - r = 0 (RAlts "", _ ) -> RAlts "" -- r - 0 = r (r , RAlts "") -> r -- join character alternatives (RAlts s1, RAlts s2) -> case s1 List.\\ s2 of [c] -> RChar c s -> RAlts s (r1, r2) | r1 == r2 -> RAlts "" | otherwise -> r1 `RMinus` r2 rStar :: Reg -> Reg rStar = \case REps -> REps RSeqs "" -> REps RAlts "" -> REps ROpt r -> RStar r RStar r -> RStar r RPlus r -> RStar r r -> RStar r rPlus :: Reg -> Reg rPlus = \case REps -> REps RSeqs "" -> REps RAlts "" -> RAlts "" ROpt r -> RStar r RStar r -> RStar r RPlus r -> RPlus r r -> RPlus r rOpt :: Reg -> Reg rOpt = \case REps -> REps RSeqs "" -> REps RAlts "" -> REps RStar r -> RStar r RPlus r -> RStar r ROpt r -> ROpt r r -> ROpt r rcSeq :: RC -> RC -> RC rcSeq = curry $ \case (Rx REps , r ) -> r (Rx (RSeqs ""), r ) -> r (r , Rx REps ) -> r (r , Rx (RSeqs "")) -> r (r1 , r2 ) -> Rx $ rx r1 `rSeq` rx r2 rcAlt :: RC -> RC -> RC rcAlt = curry $ \case -- 0 + r = r + 0 = r (Rx (RAlts ""), r) -> r (r, Rx (RAlts "")) -> r -- other cases (CC c1, CC c2) -> c1 `cAlt` c2 (c1 , c2 ) -> Rx $ rx c1 `rAlt` rx c2 rcMinus :: RC -> RC -> RC rcMinus = curry $ \case -- r - 0 = r (r , Rx (RAlts "")) -> r (CC c1, CC c2 ) -> c1 `cMinus` c2 (c1 , c2 ) -> Rx $ rx c1 `rMinus` rx c2 class ToReg a where rx :: a -> Reg instance ToReg RC where rx (Rx r) = r rx (CC c) = rx c instance ToReg CharClass where rx (CMinus p m) | m == mempty = rx p | p == mempty = RAlts "" | otherwise = rx p `RMinus` rx m instance ToReg CharClassUnion where rx CAny = RAny rx (CAlt cs) = case rs of [] -> RAlts "" [r] -> r rs -> 
foldr1 RAlt rs where -- collect elements of cs into St start = St False False False "" step st = \case CChar c -> st { stAlts = c : stAlts st } CDigit -> st { stDigit = True } CLower -> st { stLower = True } CUpper -> st { stUpper = True } (St digit upper lower alts) = foldl step start $ Set.toDescList cs rs = concat [ [ RChar c | [c] <- [alts] ] , [ RAlts alts | (_:_:_) <- [alts] ] , [ RDigit | digit ] , [ RLetter | upper && lower ] , [ RUpper | upper && not lower ] , [ RLower | lower && not upper ] ] -- Local state type data St = St { stDigit, stUpper, stLower :: Bool, stAlts :: String } -- UNUSED: -- instance ToReg CharClassAtom where -- rx = \case -- CChar c -> RChar c -- CDigit -> RDigit -- CLower -> RLower -- CUpper -> RUpper -- * Constructors for character classes. -- | -- @(p1 \ m1) ∪ (p2 \ m2) = (p1 ∪ p2) \ (m1 ∪ m2)@ if @p1 ⊥ m2@ and @p2 ⊥ m1@ cAlt :: CharClass -> CharClass -> RC cAlt c1@(CMinus p1 m1) c2@(CMinus p2 m2) | c1 == cAny || c2 == cAny = CC cAny | p1 `ccuMinus` m2 == Right p1, p2 `ccuMinus` m1 == Right p2 = CC $ either id ccu $ (p1 <> p2) `ccuMinus` (m1 <> m2) | otherwise = Rx $ rx c1 `RAlt` rx c2 -- -- | ccuDisjoint p1 m2, ccuDisjoint p2 m1 = CC $ either id ccu $ (p1 <> p2) `ccuMinus` (m1 <> m2) -- -- | null m1, null m2 = CC $ ccu (p1 <> p2) -- | -- @(p1 \ m1) \ (0 \ m2) = p1 \ m1@ -- @(p1 \ m1) \ (p2 \ m2) = p1 \ (m1 ∪ p2)@ if @p1 \ m2 = p1@ cMinus :: CharClass -> CharClass -> RC cMinus c1@(CMinus p1 m1) c2@(CMinus p2 m2) | p2 == mempty = CC c1 | p1 `ccuMinus` m2 == Right p1 = CC $ either id ccu $ p1 `ccuMinus` (m1 <> p2) | otherwise = Rx $ rx c1 `RMinus` rx c2 cChar :: Char -> CharClass cChar c = cAlts [c] cAlts :: String -> CharClass cAlts cs = ccu $ CAlt $ Set.fromList $ map CChar cs cDigit, cLower, cUpper, cLetter, cAny :: CharClass cDigit = cAtom CDigit cLower = cAtom CLower cUpper = cAtom CUpper cLetter = ccu $ CAlt $ Set.fromList [ CLower, CUpper ] cAny = ccu CAny cAtom :: CharClassAtom -> CharClass cAtom = ccu . CAlt . Set.singleton ccu :: CharClassUnion -> CharClass ccu = (`CMinus` mempty) -- (A \ B) \ (C \ D) = A \ (B ∪ (C \ D)) -- | Mutually reduce: @(A - B) = (A \ B) - (B \ A)@ ccuMinus :: CharClassUnion -> CharClassUnion -> Either CharClass CharClassUnion ccuMinus = curry $ \case (_ , CAny) -> Right mempty (c1@CAny, c2 ) | c2 == mempty -> Right $ c1 | otherwise -> Left $ c1 `CMinus` c2 (CAlt cs1, CAlt cs2) | Set.null cs1' || Set.null cs2' -> Right $ CAlt cs1' | otherwise -> Left $ CAlt cs1' `CMinus` CAlt cs2' where cs1' = cs1 Set.\\ cs2 cs2' = cs2 Set.\\ cs1 instance Semigroup CharClassUnion where CAny <> _ = CAny _ <> CAny = CAny CAlt cs <> CAlt cs' = CAlt (cs <> cs') instance Monoid CharClassUnion where mempty = CAlt Set.empty mappend = (<>) BNFC-2.9.5/src/BNFC/TypeChecker.hs0000644000000000000000000001636207346545000014501 0ustar0000000000000000{-# LANGUAGE CPP #-} {-# LANGUAGE FlexibleContexts #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE GeneralizedNewtypeDeriving #-} {-# LANGUAGE LambdaCase #-} {-# LANGUAGE MultiParamTypeClasses #-} {-# LANGUAGE RecordWildCards #-} -- | Type checker for defined syntax constructors @define f xs = e@. module BNFC.TypeChecker ( -- * Type checker entry point runTypeChecker , checkDefinitions -- * Backdoor for rechecking defined syntax constructors for list types , checkDefinition' , buildSignature, buildContext, ctxTokens, isToken , ListConstructors(..) 
) where import Control.Monad import Control.Monad.Except (MonadError(..)) import Control.Monad.Reader import Data.Bifunctor import Data.Char import Data.Either (partitionEithers) import qualified Data.Map as Map import qualified Data.Set as Set import BNFC.CF import BNFC.PrettyPrint -- * Error monad type TCError = WithPosition String -- | Type checking monad, reports errors. newtype Err a = Err { unErr :: ReaderT Position (Either TCError) a } deriving (Functor, Applicative, Monad, MonadReader Position) instance MonadError String Err where throwError msg = Err $ do pos <- ask throwError $ WithPosition pos msg catchError m h = Err $ do unErr m `catchError` \ (WithPosition _ msg) -> unErr (h msg) withPosition :: Position -> Err a -> Err a withPosition pos = local (const pos) runTypeChecker :: Err a -> Either String a runTypeChecker m = first blendInPosition $ unErr m `runReaderT` NoPosition -- * Types and context data Context = Ctx { ctxLabels :: Signature -- ^ Types of labels, extracted from rules. , ctxTokens :: [String] -- ^ User-defined token types. , ctxLocals :: Telescope -- ^ Types of local variables of a definition. } data ListConstructors = LC { nil :: Base -> (String, Type) -- ^ 'Base' is the element type. 'Type' the list type. , cons :: Base -> (String, Type) } dummyConstructors :: ListConstructors dummyConstructors = LC { nil = \ b -> ("[]" , FunT [] (ListT b)) , cons = \ b -> ("(:)", FunT [b, ListT b] (ListT b)) } -- * Type checker for definitions and expressions -- | Entry point. checkDefinitions :: CF -> Err CF checkDefinitions cf = do let ctx = buildContext cf let (pragmas, defs0) = partitionEithers $ map isFunDef $ cfgPragmas cf defs <- mapM (checkDefinition ctx) defs0 return cf { cfgPragmas = pragmas ++ map FunDef defs } checkDefinition :: Context -> Define -> Err Define checkDefinition ctx (Define f args e0 _) = do let xs = map fst args -- Throw away dummy types. (tel, (e, b)) <- checkDefinition' dummyConstructors ctx f xs e0 return $ Define f tel e b checkDefinition' :: ListConstructors -- ^ Translation of the list constructors. -> Context -- ^ Signature (types of labels). -> RFun -- ^ Function name. -> [String] -- ^ Function arguments. -> Exp -- ^ Function body. -> Err (Telescope, (Exp, Base)) -- ^ Typed arguments, translated body, type of body. checkDefinition' list ctx ident xs e = withPosition (wpPosition ident) $ do unless (isLower $ head f) $ throwError $ "Defined functions must start with a lowercase letter." t@(FunT ts t') <- lookupCtx f ctx `catchError` \_ -> throwError $ "'" ++ f ++ "' must be used in a rule." let expect = length ts given = length xs unless (expect == given) $ throwError $ concat [ "'", f, "' is used with type ", show t , " but defined with ", show given, " argument", plural given ++ "." ] e' <- checkExp list (setLocals ctx $ zip xs ts) e t' return (zip xs ts, (e', t')) `catchError` \ err -> throwError $ "In the definition " ++ unwords (f : xs ++ ["=", prettyShow e, ";"]) ++ "\n " ++ err where f = wpThing ident plural 1 = "" plural _ = "s" checkExp :: ListConstructors -> Context -> Exp -> Base -> Err Exp checkExp list ctx = curry $ \case (App "[]" _ [] , ListT t ) -> return (uncurry App (nil list t) []) (App "[]" _ _ , _ ) -> throwError $ "[] is applied to too many arguments." (App "(:)" _ [e,es], ListT t ) -> do e' <- checkExp list ctx e t es' <- checkExp list ctx es (ListT t) return $ uncurry App (cons list t) [e',es'] (App "(:)" _ es , _ ) -> throwError $ "(:) takes 2 arguments, but has been given " ++ show (length es) ++ "." 
(e@(App x _ es) , t ) -> checkApp e x es t (e@(Var x) , t ) -> e <$ checkApp e x [] t (e@LitInt{} , BaseT "Integer") -> return e (e@LitDouble{} , BaseT "Double" ) -> return e (e@LitChar{} , BaseT "Char" ) -> return e (e@LitString{} , BaseT "String" ) -> return e (e , t ) -> throwError $ prettyShow e ++ " does not have type " ++ show t ++ "." where checkApp e x es t = do ft@(FunT ts t') <- lookupCtx x ctx es' <- matchArgs ts unless (t == t') $ throwError $ prettyShow e ++ " has type " ++ show t' ++ ", but something of type " ++ show t ++ " was expected." return $ App x ft es' where matchArgs ts | expect /= given = throwError $ "'" ++ x ++ "' takes " ++ show expect ++ " arguments, but has been given " ++ show given ++ "." | otherwise = zipWithM (checkExp list ctx) es ts where expect = length ts given = length es -- * Context handling -- | Create context containing the types of all labels, -- computed from the rules. -- -- Fail if a label is used at different types. -- buildSignature :: [Rule] -> Err Signature buildSignature rules = do -- Build label signature with duplicates let sig0 = Map.fromListWith mappend $ map (second Set.singleton) labels -- Check for duplicates; extract from singleton sets. sig <- forM (Map.toAscList sig0) $ \ (f,ts) -> case Set.toList ts of [] -> undefined -- impossible [t] -> return (f,t) ts' -> throwError $ unlines $ concat [ [ "The label '" ++ f ++ "' is used at conflicting types:" ] , map ((" " ++) . blendInPosition . fmap show) ts' ] return $ Map.fromAscList sig where mkType cat args = FunT [ mkBase t | Left t <- args ] (mkBase cat) mkBase t | isList t = ListT $ mkBase $ normCatOfList t | otherwise = BaseT $ catToStr $ normCat t labels = [ (x, WithPosition pos $ mkType (wpThing cat) args) | Rule f@(WithPosition pos x) cat args _ <- rules , not (isCoercion f) , not (isNilCons f) ] buildContext :: CF -> Context buildContext cf = Ctx { ctxLabels = cfgSignature cf , ctxTokens = ("Ident" : tokenNames cf) , ctxLocals = [] } isToken :: String -> Context -> Bool isToken x ctx = elem x $ ctxTokens ctx setLocals :: Context -> [(String,Base)] -> Context setLocals ctx xs = ctx { ctxLocals = xs } lookupCtx :: String -> Context -> Err Type lookupCtx x ctx | isToken x ctx = return $ FunT [BaseT "String"] (BaseT x) | otherwise = do case lookup x $ ctxLocals ctx of Just b -> return $ FunT [] b Nothing -> do case Map.lookup x $ ctxLabels ctx of Nothing -> throwError $ "Undefined symbol '" ++ x ++ "'." Just t -> return $ wpThing t BNFC-2.9.5/src/BNFC/Utils.hs0000644000000000000000000004060307346545000013366 0ustar0000000000000000{-# LANGUAGE CPP #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE LambdaCase #-} {-# LANGUAGE PatternSynonyms #-} {-# OPTIONS_GHC -fno-warn-orphans #-} -- ghc 7.10 module BNFC.Utils ( ModuleName , when, unless, unlessNull, unlessNull' , applyWhen, applyUnless , for, whenJust , caseMaybe, (>.>) , curry3, uncurry3 , singleton, headWithDefault, mapHead, spanEnd , duplicatesOn, groupOn, uniqOn , hasNumericSuffix , (+++), (++++), (+-+), (+.+), parensIf , pad, table , mkName, mkNames, NameStyle(..) 
, capitalize , lowerCase, upperCase, mixedCase , camelCase, camelCase_ , snakeCase, snakeCase_ , replace , writeFileRep , cstring , getZonedTimeTruncatedToSeconds , symbolToName ) where import Control.Arrow ((&&&)) import Control.DeepSeq (rnf) import Data.Char import Data.List (intercalate, transpose) import Data.List.NonEmpty (pattern (:|)) import Data.Map (Map) #if !MIN_VERSION_base(4,11,0) import Data.Semigroup (Semigroup(..)) #endif import Data.Time import qualified Data.Foldable as Fold import qualified Data.Map as Map import qualified Data.List.NonEmpty as List1 import System.IO (IOMode(ReadMode),hClose,hGetContents,openFile) import System.IO.Error (tryIOError) import BNFC.PrettyPrint (Doc, text) type List1 = List1.NonEmpty -- | The name of a module, e.g. "Foo.Abs", "Foo.Print" etc. type ModuleName = String -- * Control flow. -- ghc 7.10 misses the instance Monoid a => Monoid (IO a) #if __GLASGOW_HASKELL__ <= 710 instance {-# OVERLAPPING #-} Semigroup (IO ()) where (<>) = (>>) instance {-# OVERLAPPING #-} Monoid (IO ()) where mempty = return () mappend = (<>) mconcat = sequence_ #endif -- | Generalization of 'Control.Monad.when'. when :: Monoid m => Bool -> m -> m when True m = m when False _ = mempty -- | Generalization of 'Control.Monad.unless'. unless :: Monoid m => Bool -> m -> m unless False m = m unless True _ = mempty -- | 'when' for the monoid of endofunctions 'a -> a'. applyWhen :: Bool -> (a -> a) -> a -> a applyWhen True f = f applyWhen False _ = id -- | 'unless' for the monoid of endofunctions 'a -> a'. applyUnless :: Bool -> (a -> a) -> a -> a applyUnless False f = f applyUnless True _ = id -- | Invoke continuation for non-empty list. unlessNull :: Monoid m => [a] -> ([a] -> m) -> m unlessNull l k = case l of [] -> mempty as -> k as -- | Invoke continuation for non-empty list. unlessNull' :: Monoid m => [a] -> (a -> [a] -> m) -> m unlessNull' l k = case l of [] -> mempty (a:as) -> k a as -- * Flipped versions of standard functions. infixr 8 >.> -- | Diagrammatic composition. (>.>) :: (a -> b) -> (b -> c) -> a -> c g >.> f = f . g -- | Non-monadic 'forM'. for :: [a] -> (a -> b) -> [b] for = flip map -- | Generalization of 'forM' to 'Monoid'. whenJust :: Monoid m => Maybe a -> (a -> m) -> m whenJust = flip foldMap -- | Rotation of 'maybe'. caseMaybe :: Maybe a -> b -> (a -> b) -> b caseMaybe ma b f = maybe b f ma -- * Tuple utilities. -- From https://hackage.haskell.org/package/extra-1.6.18/docs/Data-Tuple-Extra.html -- | Converts an uncurried function to a curried function. curry3 :: ((a, b, c) -> d) -> a -> b -> c -> d curry3 f a b c = f (a,b,c) -- | Converts a curried function to a function on a triple. uncurry3 :: (a -> b -> c -> d) -> ((a, b, c) -> d) uncurry3 f ~(a,b,c) = f a b c -- * String operations for printing. infixr 5 +++, ++++, +-+, +.+ -- | Concatenate strings by a space. (+++) :: String -> String -> String a +++ b = a ++ " " ++ b -- | Concatenate strings by a newline. (++++) :: String -> String -> String a ++++ b = a ++ "\n" ++ b -- | Concatenate strings by an underscore. (+-+) :: String -> String -> String a +-+ b = a ++ "_" ++ b -- | Concatenate strings by a dot. (+.+) :: String -> String -> String a +.+ b = a ++ "." ++ b -- | Wrap in parentheses if condition holds. parensIf :: Bool -> String -> String parensIf = \case True -> ("(" ++) . (++ ")") False -> id -- | Pad a string on the right by spaces to reach the desired length. 
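-- For example, @pad 6 "ab"@ yields @"ab    "@ (padded to six characters), while
-- @pad 2 "abcd"@ returns @"abcd"@ unchanged: @drop (length s)@ trims the padding,
-- and removes all of it once the input is already at least @n@ characters long.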
pad :: Int -> String -> String pad n s = s ++ drop (length s) (replicate n ' ') -- | Make a list of rows with left-aligned columns from a matrix. table :: String -> [[String]] -> [String] table sep m = map (intercalate sep . zipWith pad widths) m where -- Column widths. widths :: [Int] widths = map maximum $ transpose $ map (map length) m -- * List utilities -- | Give a telling name to the electric monkey. singleton :: a -> [a] singleton = (:[]) -- | Get the first element of a list, fallback for empty list. headWithDefault :: a -> [a] -> a headWithDefault a [] = a headWithDefault _ (a:_) = a -- | Apply a function to the head of a list. mapHead :: (a -> a) -> [a] -> [a] mapHead f = \case [] -> [] a:as -> f a : as -- | @spanEnd p l == reverse (span p (reverse l))@. -- -- Invariant: @l == front ++ end where (end, front) = spanEnd p l@ -- -- (From package ghc, module Util.) spanEnd :: (a -> Bool) -> [a] -> ([a], [a]) spanEnd p l = go l [] [] l where go yes _ rev_no [] = (yes, reverse rev_no) go yes rev_yes rev_no (x:xs) | p x = go yes (x : rev_yes) rev_no xs | otherwise = go xs [] (x : rev_yes ++ rev_no) xs -- | Replace all occurences of a value by another value replace :: Eq a => a -- ^ Value to replace -> a -- ^ Value to replace it with -> [a] -> [a] replace x y xs = [ if z == x then y else z | z <- xs] -- | Returns lists of elements whose normal form appears more than once. -- -- >>> duplicatesOn id [5,1,2,5,1] -- [1 :| [1],5 :| [5]] -- >>> duplicatesOn abs [5,-5,1] -- [5 :| [-5]] duplicatesOn :: (Foldable t, Ord b) => (a -> b) -> t a -> [List1 a] duplicatesOn nf -- Keep groups of size >= 2. = filter ((2 <=) . List1.length) -- Turn into a list of listss: elements grouped by their normal form. . Map.elems -- Partition elements by their normal form. . Fold.foldr (\ a -> Map.insertWith (<>) (nf a) (a :| [])) Map.empty -- | Group consecutive elements that have the same normalform. groupOn :: Eq b => (a -> b) -> [a] -> [List1 a] groupOn nf = loop where loop = \case [] -> [] a:xs | let (as, rest) = span ((nf a ==) . nf) xs -> (a :| as) : loop rest -- | Keep only the first of consecutive elements that have the same normalform. uniqOn :: Eq b => (a -> b) -> [a] -> [a] uniqOn nf = map List1.head . groupOn nf -- | Get a numeric suffix if it exists. -- -- >>> hasNumericSuffix "hello world" -- Nothing -- >>> hasNumericSuffix "a1b2" -- Just ("a1b",2) -- >>> hasNumericSuffix "1234" -- Just ("",1234) hasNumericSuffix :: String -> Maybe (String, Integer) hasNumericSuffix s = case spanEnd isDigit s of ([], _) -> Nothing (num, front) -> Just (front, read num) -- * Time utilities -- | Cut away fractions of a second in time. truncateZonedTimeToSeconds :: ZonedTime -> ZonedTime truncateZonedTimeToSeconds (ZonedTime (LocalTime day (TimeOfDay h m s)) zone) = ZonedTime (LocalTime day (TimeOfDay h m $ fromIntegral sec)) zone where sec :: Int sec = truncate s getZonedTimeTruncatedToSeconds :: IO ZonedTime getZonedTimeTruncatedToSeconds = truncateZonedTimeToSeconds <$> getZonedTime -- * File utilities -- | Write a file, after making a backup of an existing file with the same name. -- If an old version of the file exist and the new version is the same, -- keep the old file and don't create a .bak file. -- / New version by TH, 2010-09-23 writeFileRep :: FilePath -> String -> IO () writeFileRep path s = either newFile updateFile =<< tryIOError (readFile' path) where -- Case: file does not exist yet. newFile _ = do putStrLn $ "writing new file " ++ path writeFile path s -- Case: file exists with content @old@. 
updateFile old = do -- Write new content. writeFile path s if s == old -- test is O(1) space, O(n) time then do putStrLn $ "refreshing unchanged file " ++ path else do let bak = path ++ ".bak" putStrLn $ "writing file " ++ path ++ " (saving old file as " ++ bak ++ ")" writeFile bak old -- Force reading of contents of files to achieve compatibility with -- Windows IO handling, as combining lazy IO with `readFile` and -- 2x `renameFile` on the open `path` file complains with: -- -- "bnfc.exe: Makefile: MoveFileEx "Makefile" "Makefile.bak": permission -- denied (The process cannot access the file because it is being used -- by another process.)" readFile' :: FilePath -> IO String readFile' path' = do inFile <- openFile path' ReadMode contents <- hGetContents inFile rnf contents `seq` hClose inFile return contents -- *** Naming *** -- Because naming is hard (http://blog.codinghorror.com/i-shall-call-it-somethingmanager/) -- | Different case style data NameStyle = LowerCase -- ^ e.g. @lowercase@ | UpperCase -- ^ e.g. @UPPERCASE@ | SnakeCase -- ^ e.g. @snake_case@ | CamelCase -- ^ e.g. @CamelCase@ | MixedCase -- ^ e.g. @mixedCase@ | OrigCase -- ^ Keep original capitalization and form. deriving (Show, Eq) -- | Generate a name in the given case style taking into account the reserved -- word of the language. Note that despite the fact that those name are mainly -- to be used in code rendering (type Doc), we return a String here to allow -- further manipulation of the name (like disambiguation) which is not possible -- in the Doc type. -- -- Examples: -- -- >>> mkName [] LowerCase "FooBAR" -- "foobar" -- -- >>> mkName [] UpperCase "FooBAR" -- "FOOBAR" -- -- >>> mkName [] SnakeCase "FooBAR" -- "foo_bar" -- -- >>> mkName [] CamelCase "FooBAR" -- "FooBAR" -- -- >>> mkName [] CamelCase "Foo_bar" -- "FooBar" -- -- >>> mkName [] MixedCase "FooBAR" -- "fooBAR" -- -- >>> mkName ["foobar"] LowerCase "FooBAR" -- "foobar_" -- -- >>> mkName ["foobar", "foobar_"] LowerCase "FooBAR" -- "foobar__" mkName :: [String] -> NameStyle -> String -> String mkName reserved style s = notReserved name' where notReserved name | name `elem` reserved = notReserved (name ++ "_") | otherwise = name tokens = parseIdent s name' = case style of LowerCase -> map toLower $ concat tokens UpperCase -> map toUpper $ concat tokens CamelCase -> concatMap capitalize tokens MixedCase -> mapHead toLower $ concatMap capitalize tokens SnakeCase -> map toLower $ intercalate "_" tokens OrigCase -> s -- | Make first letter uppercase. -- capitalize :: String -> String capitalize = mapHead toUpper -- | Same as above but accept a list as argument and make sure that the -- names generated are uniques. -- -- >>> mkNames ["c"] LowerCase ["A", "b_", "a_", "c"] -- ["a1","b","a2","c_"] mkNames :: [String] -> NameStyle -> [String] -> [String] mkNames reserved style = disambiguateNames . map (mkName reserved style) -- | This one takes a list of names and makes sure each is unique, appending -- numerical suffix if needed. -- -- >>> disambiguateNames ["a", "b", "a", "c"] -- ["a1","b","a2","c"] disambiguateNames :: [String] -> [String] disambiguateNames = disamb [] where disamb ns1 (n:ns2) | n `elem` (ns1 ++ ns2) = let i = length (filter (==n) ns1) + 1 in (n ++ show i) : disamb (n:ns1) ns2 | otherwise = n : disamb (n:ns1) ns2 disamb _ [] = [] -- | Heuristic to "parse" an identifier into separate components. 
-- -- >>> parseIdent "abc" -- ["abc"] -- -- >>> parseIdent "Abc123" -- ["Abc"] -- -- >>> parseIdent "WhySoSerious" -- ["Why","So","Serious"] -- -- >>> parseIdent "why_so_serious_123" -- ["why","so","serious"] -- -- >>> parseIdent "why-so-serious123" -- ["why","so","serious"] -- -- Some corner cases: -- -- >>> parseIdent "LBNFParser" -- ["LBNF","Parser"] -- -- >>> parseIdent "aLBNFParser" -- ["a","LBNF","Parser"] -- -- >>> parseIdent "ILoveNY" -- ["I","Love","NY"] parseIdent :: String -> [String] parseIdent = p [] . map (classify &&& id) where classify :: Char -> CharClass classify c | isUpper c = U | isLower c = L | otherwise = O p :: String -> [(CharClass,Char)] -> [String] -- Done: p acc [] = emit acc [] -- Continue if consecutive characters have same case. p acc ((L,c) : cs@((L,_) : _)) = p (c:acc) cs p acc ((U,c) : cs@((U,_) : _)) = p (c:acc) cs -- Break if consecutive characters have different case. p acc ((U,c) : cs@((L,_) : _)) = emit acc $ p [c] cs p acc ((L,c) : cs@((U,_) : _)) = emit (c:acc) $ p [] cs -- Discard "other" characters, and break to next component. p acc ((U,c) : (O,_) : cs) = emit (c:acc) $ p [] cs p acc ((L,c) : (O,_) : cs) = emit (c:acc) $ p [] cs p acc ((O,_) : cs) = emit acc $ p [] cs p acc [(_,c)] = p (c:acc) [] emit :: String -> [String] -> [String] emit [] rest = rest emit acc rest = reverse acc : rest data CharClass = U | L | O -- | Ident to lower case. -- >>> lowerCase "MyIdent" -- myident lowerCase :: String -> Doc lowerCase = text . mkName [] LowerCase -- | Ident to upper case. -- >>> upperCase "MyIdent" -- MYIDENT upperCase :: String -> Doc upperCase = text . mkName [] UpperCase -- | Ident to camel case. -- >>> camelCase "my_ident" -- MyIdent camelCase :: String -> Doc camelCase = text . camelCase_ camelCase_ :: String -> String camelCase_ = mkName [] CamelCase -- | To mixed case. -- >>> mixedCase "my_ident" -- myIdent mixedCase :: String -> Doc mixedCase = text . mkName [] MixedCase -- | To snake case. -- >>> snakeCase "MyIdent" -- my_ident snakeCase :: String -> Doc snakeCase = text . snakeCase_ snakeCase_ :: String -> String snakeCase_ = mkName [] SnakeCase -- ESCAPING -- | A function that renders a c-like string with escaped characters. -- Note that although it's called cstring, this can be used with most (all) -- backend as they seem to mostly share escaping conventions. -- The c in the name is barely an homage for C being the oldest language in -- the lot. -- -- >>> cstring "foobar" -- "foobar" -- -- >>> cstring "foobar\"" -- "foobar\"" cstring :: String -> Doc cstring = text . show -- * Symbols -- | Print a symbol as typical token name, like "(" as "LPAREN". symbolToName :: String -> Maybe String symbolToName = (`Map.lookup` symbolTokenMap) -- | Map from symbol to token name. symbolTokenMap :: Map String String symbolTokenMap = Map.fromList symbolTokenList -- | Association list from symbol to token name. symbolTokenList :: [(String, String)] symbolTokenList = [ ("{" , "LBRACE") , ("}" , "RBRACE") , ("(" , "LPAREN") , (")" , "RPAREN") , ("[" , "LBRACK") , ("]" , "RBRACK") , ("[]" , "EMPTYBRACK") , ("." , "DOT") , (".." 
, "DDOT") , ("...", "ELLIPSIS") , ("," , "COMMA") , (",," , "DCOMMA") , (";" , "SEMI") , (";;" , "DSEMI") , (":" , "COLON") , ("::" , "DCOLON") , (":=" , "COLONEQ") , ("::=", "DCOLONEQ") , (":-" , "COLONMINUS") , ("::-", "DCOLONMINUS") , (":--", "COLONDMINUS") , ("+" , "PLUS") , ("++" , "DPLUS") , ("+=" , "PLUSEQ") , ("+-" , "PLUSMINUS") , ("-" , "MINUS") , ("--" , "DMINUS") , ("-=" , "MINUSEQ") , ("-+" , "MINUSPLUS") , ("-*" , "MINUSSTAR") , ("*" , "STAR") , ("**" , "DSTAR") , ("*=" , "STAREQ") , ("*-" , "STARMINUS") , ("/" , "SLASH") , ("//" , "DSLASH") , ("/=" , "SLASHEQ") , ("\\" , "BACKSLASH") , ("\\\\","DBACKSLASH") , ("/\\", "WEDGE") , ("\\/", "VEE") , ("&" , "AMP") , ("&&" , "DAMP") , ("&=" , "AMPEQ") , ("|" , "BAR") , ("||" , "DBAR") , ("|=" , "BAREQ") , ("<" , "LT") , ("<<" , "DLT") , ("<<<", "TLT") , ("<=" , "LTEQ") , ("<<=", "DLTEQ") , ("<<<=","TLTEQ") , (">" , "GT") , (">>" , "DGT") , (">>>", "TGT") , (">=" , "GTEQ") , (">>=", "DGTEQ") , (">>>=","TGTEQ") , ("<>" , "LTGT") , ("=" , "EQ") , ("==" , "DEQ") , ("_" , "UNDERSCORE") , ("!" , "BANG") , ("!=" , "BANGEQ") , ("?" , "QUESTION") , ("?=" , "QUESTIONEQ") , ("#" , "HASH") , ("##" , "DHASH") , ("###", "THASH") , ("@" , "AT") , ("@@" , "DAT") , ("@=" , "ATEQ") , ("$" , "DOLLAR") , ("$$" , "DDOLLAR") , ("%" , "PERCENT") , ("%%" , "DPERCENT") , ("%=" , "PERCENTEQ") , ("^" , "CARET") , ("^^" , "DCARET") , ("^=" , "CARETEQ") , ("~" , "TILDE") , ("~~" , "DTILDE") , ("~=" , "TILDEEQ") , ("'" , "APOSTROPHE") , ("''" , "DAPOSTROPHE") , ("'''", "TAPOSTROPHE") , ("<-" , "LARROW") , ("->" , "RARROW") , ("<=" , "LDARROW") , ("=>" , "RDARROW") , ("|->", "MAPSTO") ] BNFC-2.9.5/src/Makefile0000644000000000000000000000235507346545000012664 0ustar0000000000000000# Author: Andreas Abel, 2018-01-01, 2020-12-22 .PHONY : default all default: lbnf all: lbnf license # bnfc's own parser ################### # BNFC-generated files gen = BNFC/Abs.hs BNFC/Doc.txt ErrM.hs BNFC/Lex.x BNFC/Par.y BNFC/Print.hs BNFC/Skel.hs BNFC/Test.hs .PHONY: lbnf lbnf : BNFC/Abs.hs BNFC/Lex.hs BNFC/Par.hs BNFC/Print.hs %/Abs.hs %/Doc.txt %/Lex.x %/Par.y %/Print.hs %/Skel.hs %/Test.hs : %.cf bnfc --haskell -d $< %.hs : %.x alex -g $< %.hs : %.y happy -gcai $< # Generate License.hs ##################### .PHONY: license license: BNFC/License.hs BNFC/License.hs : ../LICENSE Makefile echo "\ -- This file was autogenerated from LICENSE, do not edit!!\n\n\ {-# LANGUAGE QuasiQuotes #-}\n\n\ module BNFC.License where\n\n\ import Data.String.QQ\n\n\ license :: String\n\ license = [s|" > $@ cat $< >> $@ echo "\n|]" >> $@ # Emacs Tags ############ .PHONY : tags tags : hasktags -e . # Fake cabal file for old version of haskell-mode (emacs) ######################################################### .PHONY: fake-cabal rm-fake-cabal fake-cabal : touch inferior-haskell-find-project-root.cabal # Remove this this file confuses tools like cabal and stack. 
rm-fake-cabal : rm inferior-haskell-find-project-root.cabal #EOF BNFC-2.9.5/stack-8.10.7.yaml0000644000000000000000000000010507346545000013146 0ustar0000000000000000resolver: lts-18.28 compiler: ghc-8.10.7 compiler-check: match-exact BNFC-2.9.5/stack-8.2.2.yaml0000644000000000000000000000002407346545000013062 0ustar0000000000000000resolver: lts-11.22 BNFC-2.9.5/stack-8.4.4.yaml0000644000000000000000000000002407346545000013066 0ustar0000000000000000resolver: lts-12.26 BNFC-2.9.5/stack-8.6.5.yaml0000644000000000000000000000002407346545000013071 0ustar0000000000000000resolver: lts-14.27 BNFC-2.9.5/stack-8.8.4.yaml0000644000000000000000000000010407346545000013071 0ustar0000000000000000resolver: lts-16.31 compiler: ghc-8.8.4 compiler-check: match-exact BNFC-2.9.5/stack-9.0.2.yaml0000644000000000000000000000010407346545000013060 0ustar0000000000000000resolver: lts-19.33 compiler: ghc-9.0.2 compiler-check: match-exact BNFC-2.9.5/stack-9.2.8.yaml0000644000000000000000000000010407346545000013070 0ustar0000000000000000resolver: lts-20.26 compiler: ghc-9.2.8 compiler-check: match-exact BNFC-2.9.5/stack-9.4.5.yaml0000644000000000000000000000010307346545000013066 0ustar0000000000000000resolver: lts-21.0 compiler: ghc-9.4.5 compiler-check: match-exact BNFC-2.9.5/stack-9.6.2.yaml0000644000000000000000000000011507346545000013070 0ustar0000000000000000resolver: nightly-2023-06-26 compiler: ghc-9.6.2 compiler-check: match-exact BNFC-2.9.5/test/BNFC/Backend/0000755000000000000000000000000007346545000013446 5ustar0000000000000000BNFC-2.9.5/test/BNFC/Backend/BaseSpec.hs0000644000000000000000000000420207346545000015465 0ustar0000000000000000{-# LANGUAGE FlexibleInstances, UndecidableInstances #-} {-# LANGUAGE ExtendedDefaultRules #-} module BNFC.Backend.BaseSpec where import Data.List (isInfixOf) import System.Directory import System.IO.Temp (withSystemTempDirectory) import Test.Hspec import BNFC.Backend.Base -- SUT default(String) spec :: Spec spec = do -- -- Andreas, 2021-07-17 -- -- I don't really believe in these unit tests; important are system tests. -- -- So I am not putting in much energy to maintain them. describe "Backend monad" $ do it "empty computation generates empty list of files" $ execBackend (return ()) `shouldReturn` [] -- -- Test broken: mkfile also puts the BNFC signature containing the version number. -- it "returns the file created using mkfile" $ -- execBackend (mkfile "test.txt" "abcd") -- `shouldReturn` [("test.txt", "abcd\n")] describe "writeFiles" $ do it "creates the root directory if it doesn't exists" $ withSystemTempDirectory "bnfc-test" $ \tmpdir -> do setCurrentDirectory tmpdir writeFiles "foo/bar" (return ()) doesDirectoryExist "foo/bar" `shouldReturn` True it "creates a file from the bucket" $ withSystemTempDirectory "bnfc-test" $ \tmpdir -> do setCurrentDirectory tmpdir writeFiles "." (mkfile "file.txt" id "") doesFileExist "file.txt" `shouldReturn` True it "put the right content in the file" $ withSystemTempDirectory "bnfc-test" $ \tmpdir -> do setCurrentDirectory tmpdir writeFiles "." (mkfile "file.txt" id "abcd") readFile "file.txt" >>= (`shouldSatisfy` isInfixOf "abcd\n") it "creates subdirectories" $ withSystemTempDirectory "bnfc-test" $ \tmpdir -> do setCurrentDirectory tmpdir writeFiles "." 
(mkfile "subdir/file.txt" id "abcd") doesDirectoryExist "subdir" `shouldReturn` True it "creates files in the root directory" $ withSystemTempDirectory "bnfc-test" $ \tmpdir -> do setCurrentDirectory tmpdir writeFiles "root/" (mkfile "foo/bar.txt" id "abcd") doesFileExist "root/foo/bar.txt" `shouldReturn` True BNFC-2.9.5/test/BNFC/Backend/CPP/0000755000000000000000000000000007346545000014070 5ustar0000000000000000BNFC-2.9.5/test/BNFC/Backend/CPP/NoSTLSpec.hs0000644000000000000000000000133007346545000016173 0ustar0000000000000000module BNFC.Backend.CPP.NoSTLSpec where import BNFC.Options import BNFC.GetCF import Test.Hspec import BNFC.Hspec import BNFC.Backend.CPP.NoSTL -- SUT calcOptions = defaultOptions { lang = "Calc" } getCalc = parseCF calcOptions TargetCpp $ unlines [ "EAdd. Exp ::= Exp \"+\" Exp1 ;" , "ESub. Exp ::= Exp \"-\" Exp1 ;" , "EMul. Exp1 ::= Exp1 \"*\" Exp2 ;" , "EDiv. Exp1 ::= Exp1 \"/\" Exp2 ;" , "EInt. Exp2 ::= Integer ;" , "coercions Exp 2 ;" ] spec = describe "C backend" $ it "respect the makefile option" $ do calc <- getCalc let opts = calcOptions { optMake = Just "MyMakefile" } makeCppNoStl opts calc `shouldGenerate` "MyMakefile" BNFC-2.9.5/test/BNFC/Backend/CPP/STLSpec.hs0000644000000000000000000000132207346545000015677 0ustar0000000000000000module BNFC.Backend.CPP.STLSpec where import BNFC.Options import BNFC.GetCF import Test.Hspec import BNFC.Hspec import BNFC.Backend.CPP.STL -- SUT calcOptions = defaultOptions { lang = "Calc" } getCalc = parseCF calcOptions TargetCpp $ unlines [ "EAdd. Exp ::= Exp \"+\" Exp1 ;" , "ESub. Exp ::= Exp \"-\" Exp1 ;" , "EMul. Exp1 ::= Exp1 \"*\" Exp2 ;" , "EDiv. Exp1 ::= Exp1 \"/\" Exp2 ;" , "EInt. Exp2 ::= Integer ;" , "coercions Exp 2 ;" ] spec = describe "C backend" $ it "respect the makefile option" $ do calc <- getCalc let opts = calcOptions { optMake = Just "MyMakefile" } makeCppStl opts calc `shouldGenerate` "MyMakefile" BNFC-2.9.5/test/BNFC/Backend/CSpec.hs0000644000000000000000000000130507346545000014776 0ustar0000000000000000module BNFC.Backend.CSpec where import BNFC.Options import BNFC.GetCF import Test.Hspec import BNFC.Hspec import BNFC.Backend.C -- SUT calcOptions = defaultOptions { lang = "Calc" } getCalc = parseCF calcOptions TargetHaskell $ unlines [ "EAdd. Exp ::= Exp \"+\" Exp1 ;" , "ESub. Exp ::= Exp \"-\" Exp1 ;" , "EMul. Exp1 ::= Exp1 \"*\" Exp2 ;" , "EDiv. Exp1 ::= Exp1 \"/\" Exp2 ;" , "EInt. 
Exp2 ::= Integer ;" , "coercions Exp 2 ;" ] spec = describe "C backend" $ it "respect the makefile option" $ do calc <- getCalc let opts = calcOptions { optMake = Just "MyMakefile" } makeC opts calc `shouldGenerate` "MyMakefile" BNFC-2.9.5/test/BNFC/Backend/Common/0000755000000000000000000000000007346545000014676 5ustar0000000000000000BNFC-2.9.5/test/BNFC/Backend/Common/MakefileSpec.hs0000644000000000000000000000112007346545000017554 0ustar0000000000000000{-# LANGUAGE LambdaCase #-} module BNFC.Backend.Common.MakefileSpec where import Test.Hspec import BNFC.Backend.Base (GeneratedFile(..), execBackend) import BNFC.Options (defaultOptions,optMake) import BNFC.Backend.Common.Makefile -- SUT spec :: Spec spec = do describe "mkMakefile" $ do it "uses the names in the options dictionary" $ let opts = defaultOptions { optMake = Just "MyMakefile" } in execBackend (mkMakefile (optMake opts) (const "")) >>= (`shouldSatisfy` \case [ GeneratedFile "MyMakefile" _ "" ] -> True _ -> False ) BNFC-2.9.5/test/BNFC/Backend/Common/NamedVariablesSpec.hs0000644000000000000000000000154507346545000020727 0ustar0000000000000000module BNFC.Backend.Common.NamedVariablesSpec where import Control.Monad (liftM) import Test.Hspec import Test.QuickCheck import BNFC.CF (Cat(..),isList) import BNFC.Backend.Common.NamedVariables -- SUT genCat:: Gen Cat genCat = frequency [(10,simpleCat), (1,listCat)] where simpleCat = elements [Cat "Cat1", Cat "Cat2", Cat "Cat3"] listCat = liftM ListCat simpleCat spec :: Spec spec = do describe "getVars" $ do it "returns a list of the same length as the input list" $ forAll (listOf genCat) $ \l -> length l == length (getVars l) it "leaves the name of the (non list) category untouched" $ forAll (listOf genCat) $ \l -> all (not.isList) l ==> map show l == map fst (getVars l) it "give the output described in the example" $ getVars [Cat "A", Cat "B", Cat "A"] `shouldBe` [("A", 1), ("B", 0), ("A", 2)] BNFC-2.9.5/test/BNFC/Backend/Haskell/0000755000000000000000000000000007346545000015031 5ustar0000000000000000BNFC-2.9.5/test/BNFC/Backend/Haskell/CFtoHappySpec.hs0000644000000000000000000000060107346545000020032 0ustar0000000000000000module BNFC.Backend.Haskell.CFtoHappySpec where import Test.Hspec import Text.PrettyPrint (render) import BNFC.Backend.Haskell.CFtoHappy rendersTo a b = render a `shouldBe` b spec = do describe "convert" $ do it "quotes backslashes" $ convert "\\" `rendersTo` "'\\\\'" it "quotes backslashes as part of a longer string" $ convert "/\\" `rendersTo` "'/\\\\'" BNFC-2.9.5/test/BNFC/Backend/HaskellGADTSpec.hs0000644000000000000000000000134707346545000016645 0ustar0000000000000000module BNFC.Backend.HaskellGADTSpec where import BNFC.Options import BNFC.GetCF import Test.Hspec import BNFC.Hspec import BNFC.Backend.HaskellGADT -- SUT calcOptions = defaultOptions { lang = "Calc" } getCalc = parseCF calcOptions TargetHaskellGadt $ unlines [ "EAdd. Exp ::= Exp \"+\" Exp1 ;" , "ESub. Exp ::= Exp \"-\" Exp1 ;" , "EMul. Exp1 ::= Exp1 \"*\" Exp2 ;" , "EDiv. Exp1 ::= Exp1 \"/\" Exp2 ;" , "EInt. 
Exp2 ::= Integer ;" , "coercions Exp 2 ;" ] spec = describe "C backend" $ it "respect the makefile option" $ do calc <- getCalc let opts = calcOptions { optMake = Just "MyMakefile" } makeHaskellGadt opts calc `shouldGenerate` "MyMakefile" BNFC-2.9.5/test/BNFC/Backend/HaskellSpec.hs0000644000000000000000000000356207346545000016206 0ustar0000000000000000module BNFC.Backend.HaskellSpec where import Test.Hspec import BNFC.GetCF import BNFC.Hspec import BNFC.Options hiding (Backend) import BNFC.Backend.Base import BNFC.Backend.Haskell -- SUT calcOptions = defaultOptions { lang = "Calc" } getCalc = parseCF calcOptions TargetHaskell $ unlines [ "EAdd. Exp ::= Exp \"+\" Exp1 ;" , "ESub. Exp ::= Exp \"-\" Exp1 ;" , "EMul. Exp1 ::= Exp1 \"*\" Exp2 ;" , "EDiv. Exp1 ::= Exp1 \"/\" Exp2 ;" , "EInt. Exp2 ::= Integer ;" , "coercions Exp 2 ;" ] spec = do context "with default option and the Calc grammar" $ do it "generates a file called AbsCalc.hs" $ do calc <- getCalc files <- execBackend (makeHaskell calcOptions calc) map fileName files `shouldSatisfy` elem "AbsCalc.hs" it "generates a file called LexCalc.x" $ do calc <- getCalc files <- execBackend (makeHaskell calcOptions calc) map fileName files `shouldSatisfy` elem "LexCalc.x" it "generates a file called ParCalc.y" $ do calc <- getCalc makeHaskell calcOptions calc `shouldGenerate` "ParCalc.y" it "generates a squeleton file" $ do calc <- getCalc makeHaskell calcOptions calc `shouldGenerate` "SkelCalc.hs" it "generates a pretty printer file" $ do calc <- getCalc makeHaskell calcOptions calc `shouldGenerate` "PrintCalc.hs" it "generates a test program file" $ do calc <- getCalc makeHaskell calcOptions calc `shouldGenerate` "TestCalc.hs" it "generates a error module file" $ do calc <- getCalc makeHaskell calcOptions calc `shouldGenerate` "ErrM.hs" context "with option -mMyMakefile and the Calc grammar" $ do it "generates a Makefile" $ do calc <- getCalc let options = calcOptions { optMake = Just "MyMakefile" } makeHaskell options calc `shouldGenerate` "MyMakefile" BNFC-2.9.5/test/BNFC/Backend/JavaSpec.hs0000644000000000000000000000132007346545000015472 0ustar0000000000000000module BNFC.Backend.JavaSpec where import BNFC.Options import BNFC.GetCF import Test.Hspec import BNFC.Hspec import BNFC.Backend.Java -- SUT calcOptions = defaultOptions { lang = "Calc" } getCalc = parseCF calcOptions TargetJava $ unlines [ "EAdd. Exp ::= Exp \"+\" Exp1 ;" , "ESub. Exp ::= Exp \"-\" Exp1 ;" , "EMul. Exp1 ::= Exp1 \"*\" Exp2 ;" , "EDiv. Exp1 ::= Exp1 \"/\" Exp2 ;" , "EInt. Exp2 ::= Integer ;" , "coercions Exp 2 ;" ] spec = do describe "Java backend" $ it "respect the makefile option" $ do calc <- getCalc let opts = calcOptions { optMake = Just "MyMakefile" } makeJava opts calc `shouldGenerate` "MyMakefile" BNFC-2.9.5/test/BNFC/Backend/LatexSpec.hs0000644000000000000000000000176407346545000015702 0ustar0000000000000000module BNFC.Backend.LatexSpec where import BNFC.Options import BNFC.GetCF import Test.Hspec import BNFC.Hspec import BNFC.Backend.Latex -- SUT calcOptions = defaultOptions { lang = "Calc" } getCalc = parseCF calcOptions TargetLatex $ unlines [ "EAdd. Exp ::= Exp \"+\" Exp1 ;" , "ESub. Exp ::= Exp \"-\" Exp1 ;" , "EMul. Exp1 ::= Exp1 \"*\" Exp2 ;" , "EDiv. Exp1 ::= Exp1 \"/\" Exp2 ;" , "EInt. 
Exp2 ::= Integer ;" , "coercions Exp 2 ;" ] spec = do describe "LaTeX backend" $ do it "creates the .tex file" $ do calc <- getCalc makeLatex calcOptions calc `shouldGenerate` "Calc.tex" it "creates the Makefile" $ do calc <- getCalc let options = calcOptions { optMake = Just "Makefile" } makeLatex options calc `shouldGenerate` "Makefile" describe "prt" $ do it "correctly converts ^^ into latex \textasciicircum\textasciicircum" $ prt "^^" `shouldBe` "{\\textasciicircum}{\\textasciicircum}" BNFC-2.9.5/test/BNFC/Backend/OCamlSpec.hs0000644000000000000000000000132507346545000015611 0ustar0000000000000000module BNFC.Backend.OCamlSpec where import BNFC.Options import BNFC.GetCF import Test.Hspec import BNFC.Hspec import BNFC.Backend.OCaml -- SUT calcOptions = defaultOptions { lang = "Calc" } getCalc = parseCF calcOptions TargetHaskell $ unlines [ "EAdd. Exp ::= Exp \"+\" Exp1 ;" , "ESub. Exp ::= Exp \"-\" Exp1 ;" , "EMul. Exp1 ::= Exp1 \"*\" Exp2 ;" , "EDiv. Exp1 ::= Exp1 \"/\" Exp2 ;" , "EInt. Exp2 ::= Integer ;" , "coercions Exp 2 ;" ] spec = describe "OCaml backend" $ it "respect the makefile option" $ do calc <- getCalc let opts = calcOptions { optMake = Just "MyMakefile" } makeOCaml opts calc `shouldGenerate` "MyMakefile" BNFC-2.9.5/test/BNFC/0000755000000000000000000000000007346545000012117 5ustar0000000000000000BNFC-2.9.5/test/BNFC/CFSpec.hs0000644000000000000000000000440207346545000013556 0ustar0000000000000000module BNFC.CFSpec where import Test.Hspec -- SUT ("System/software Under Test"): import BNFC.CF spec :: Spec spec = do describe "Show Cat" $ do it "shows (Cat \"Def\") as \"Def\"" $ show (Cat "Def") `shouldBe` "Def" it "shows (ListCat (Cat \"Thing\")) as \"[Thing]\"" $ show (ListCat (Cat "Thing")) `shouldBe` "[Thing]" it "shows (CoercCat \"Expr\" 3) as \"Expr3\"" $ show (CoercCat "Expr" 3) `shouldBe` "Expr3" it "shows (ListCat (CoercCat \"Expr\" 2)) as \"[Expr2]\"" $ show (ListCat (CoercCat "Expr" 2)) `shouldBe` "[Expr2]" it "shows (TokenCat \"Abc\") as \"Abc\"" $ show (TokenCat "Abc") `shouldBe` "Abc" describe "strToCat" $ do it "reads \"Abc\" to Cat \"Abc\"" $ strToCat "Abc" `shouldBe` Cat "Abc" it "reads \"Abc123\" to CoercCat \"Abc\" 123" $ strToCat "Abc123" `shouldBe` CoercCat "Abc" 123 it "reads \"[Expr2]\" to ListCat (CoercCat \"Expr\" 2)" $ strToCat "[Expr2]" `shouldBe` ListCat (CoercCat "Expr" 2) describe "identCat" $ do it "returns ListC for (ListCat (Cat \"C\"))" $ identCat (ListCat (Cat "C")) `shouldBe` "ListC" it "returns C3 for (CoercCat \"C\" 3)" $ identCat (CoercCat "C" 3) `shouldBe` "C3" describe "catOfList" $ do it "returns Cat \"A\" for (ListCat (Cat \"A\"))" $ catOfList (ListCat (Cat "A")) `shouldBe` Cat "A" it "returns Cat \"B\" for (Cat \"B\")" $ catOfList (Cat "B") `shouldBe` Cat "B" describe "precCat" $ do it "returns 0 for a regular category" $ do precCat (Cat "Abc") `shouldBe` 0 it "returns the precedence of a CoercCat" $ precCat (CoercCat "Abc" 4) `shouldBe` 4 it "returns the precedence of a CoercCat inside of a ListCat" $ precCat (ListCat (CoercCat "Abc" 2)) `shouldBe` 2 describe "sameCat" $ do it "considers a category to be the same as itself" $ sameCat (Cat "Abc") (Cat "Abc") `shouldBe` True it "considers Abc3 and Abc5 to be the same" $ sameCat (CoercCat "Abc" 3) (CoercCat "Abc" 5) `shouldBe` True it "considers Abc and Abc4 to be the same" $ sameCat (Cat "Abc") (CoercCat "Abc" 44) `shouldBe` True it "considers Foo and Bar to not be the same" $ sameCat (Cat "Foo") (Cat "Bar") `shouldBe` False 
BNFC-2.9.5/test/BNFC/GetCFSpec.hs0000644000000000000000000000202607346545000014216 0ustar0000000000000000module BNFC.GetCFSpec where import Test.Hspec -- SUT: import BNFC.GetCF import BNFC.CF import qualified BNFC.Abs as Abs spec :: Spec spec = do describe "transItem" $ do it "translate a non-terminal" $ transItem (Abs.NTerminal (Abs.IdCat (npIdentifier "Foo3"))) `shouldBe` [Left (CoercCat "Foo" 3)] it "translate a terminal" $ transItem (Abs.Terminal "foobar") `shouldBe` [Right "foobar"] it "skips empty terminals" $ transItem (Abs.Terminal "") `shouldBe` [] it "splits multiwords terminals" $ transItem (Abs.Terminal "foo bar") `shouldBe` [Right "foo", Right "bar"] describe "checkRule" $ do it "returns an error if the rule uses an unknown category" $ do let rule = npRule "Foo" (Cat "Bar") [Left (Cat "Baz")] Parsable cf = CFG [] mempty [] [] [] [] [rule] mempty expected = "no production for Baz, appearing in rule\n Foo. Bar ::= Baz" checkRule cf rule `shouldBe` Just expected BNFC-2.9.5/test/BNFC/Hspec.hs0000644000000000000000000000103507346545000013514 0ustar0000000000000000-- | Custom hspec expectations. module BNFC.Hspec where import Text.Printf import BNFC.Backend.Base import Test.Hspec import Test.HUnit ((@?)) -- | Expectation that a backend generates a particular file. shouldGenerate :: Backend -- ^ Backend to run. -> String -- ^ Name of file that should be created during that run. -> Expectation backend `shouldGenerate` file = do files <- execBackend backend let filenames = map fileName files file `elem` filenames @? printf "file %s not found in %s" file (show filenames) BNFC-2.9.5/test/BNFC/OptionsSpec.hs0000644000000000000000000001130707346545000014723 0ustar0000000000000000{-# LANGUAGE FlexibleInstances, UndecidableInstances #-} module BNFC.OptionsSpec where import Control.Monad.Writer (WriterT(..)) import Data.List (intercalate) import System.FilePath ((<.>), takeBaseName) import Test.Hspec import Test.QuickCheck import BNFC.Options -- Expectation that a particular option has a particular value shouldSet :: (Eq a, Show a) => Mode -> (SharedOptions -> a, a) -> Expectation shouldSet (Target opts _) (option, value) = option opts `shouldBe` value spec :: Spec spec = do describe "parseMode" $ do it "returns Help on an empty list of arguments" $ parseMode_ [] `shouldBe` Help it "returns Help if given --help" $ parseMode_ ["--help"] `shouldBe` Help it "returns Version if given --version" $ parseMode_ ["--version"] `shouldBe` Version it "returns an error if help is given an argument" $ isUsageError (parseMode_ ["--help=2"]) `shouldBe` True it "If no language is specified, it should default to haskell" $ parseMode_ ["file.cf"] `shouldSet` (target, TargetHaskell) it "returns an error if the grammar file is missing" $ parseMode_ ["--haskell"] `shouldBe` UsageError "Missing grammar file" it "returns an error if multiple grammar files are given" $ parseMode_ ["--haskell", "file1.cf", "file2.cf"] `shouldBe` UsageError "Too many arguments" it "sets the language name to the basename of the grammar file" $ parseMode_ ["foo.cf"] `shouldSet` (lang, "foo") it "accept 'old style' options" $ do parseMode_ ["-haskell", "-m", "-glr", "file.cf"] `shouldSet` (target, TargetHaskell) parseMode_ ["-haskell", "-m", "-glr", "file.cf"] `shouldSet` (optMake, Just "Makefile") parseMode_ ["-haskell", "-m", "-glr", "file.cf"] `shouldSet` (glr, GLR) it "accept latex as a target language" $ parseMode_ ["--latex", "file.cf"] `shouldSet` (target, TargetLatex) describe "Old option translation" $ do it "translate -haskell 
to --haskell" $ translateOldOptions ["-haskell"] `shouldBe` (WriterT $ Right (["--haskell"] ,["Warning: unrecognized option -haskell treated as if --haskell was provided."])) describe "--makefile" $ do it "is off by default" $ parseMode_ ["--c", "foo.cf"] `shouldSet` (optMake, Nothing) it "uses the file name 'Makefile' by default" $ parseMode_ ["--c", "-m", "foo.cf"] `shouldSet` (optMake, Just "Makefile") context "when using the option with an argument" $ it "uses the argument as Makefile name" $ parseMode_ ["--c", "-mMyMakefile", "foo.cf"] `shouldSet` (optMake, Just "MyMakefile") where parseMode_ = fst . parseMode isUsageError :: Mode -> Bool isUsageError = \case UsageError{} -> True _ -> False -- ~~~ Arbitrary instances ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ randomOption :: Gen String randomOption = oneof [ nonOption, noArg, withArg ] where nonOption = stringOf1 ['a'..'z'] -- non-option argument noArg = ("--"++) <$> nonOption -- flag withArg = do arg <- nonOption flag <- noArg return $ flag ++ "=" ++ arg -- Helper function that generates a string of random length using the given -- set of characters. Not that the type signature explicitely uses -- [Char] and not String for documentation purposes stringOf :: [Char] -> Gen String stringOf = listOf . elements -- | Same as stringOf but only generates non empty strings stringOf1 :: [Char] -> Gen String stringOf1 = listOf1 . elements instance Arbitrary Target where arbitrary = elements [minBound .. maxBound] -- creates a filepath with the given extension arbitraryFilePath :: String -> Gen FilePath arbitraryFilePath ext = do path <- listOf1 $ stringOf1 ['a'..'z'] return $ intercalate "/" path <.> ext -- Generates unix command line options. Can be in long form (ex: --option) -- or short form (ex: -o) -- Note: we only use letters x,y,z to make (almost) sure that we are not -- going to generate accidentally an global/target language option arbitraryOption :: Gen String arbitraryOption = oneof [arbitraryShortOption, arbitraryLongOption] where arbitraryShortOption = ('-':) . (:[]) <$> elements ['x'..'z'] arbitraryLongOption = ("--" ++) <$> stringOf1 ['x'..'z'] -- Arbitrary instance for Mode instance Arbitrary Mode where arbitrary = oneof [ return Help , return Version , UsageError <$> arbitrary -- generates a random error message , do target' <- arbitrary -- random target cfFile <- arbitraryFilePath "cf" let args = defaultOptions { lang = takeBaseName cfFile, target = target'} return $ Target args cfFile ] BNFC-2.9.5/test/0000755000000000000000000000000007346545000011407 5ustar0000000000000000BNFC-2.9.5/test/unit-tests.hs0000644000000000000000000000153007346545000014061 0ustar0000000000000000{-# OPTIONS_GHC -F -pgmF hspec-discover #-} -- This file is intentionally left blank. -- See http://hspec.github.io/hspec-discover.html -- -- This instructs GHC to invoke hspec-discover as a preprocessor on the -- source file. The rest of the source file is empty, so there is nothing -- to preprocess. Rather than preprocessing, hspec-discover scans the -- file system for all spec files belonging to a project and generates -- the required boilerplate. hspec-discover does not parse any source -- files, it instead relies on the following conventions: -- -- * Spec files have to be placed into the same directory as the test -- driver, or into a subdirectory. -- -- * The name of a spec file has to end in Spec.hs; the module name has -- to match the file name. -- -- * Each spec file has to export a top-level binding spec of type Spec.