happy-1.20.1.1/ChangeLog.md

# Revision history for Happy

## 1.20.1

* Fix for building with mtl-2.3.1 (GHC 9.6)

## 1.20.0

* Fix #121: the -i flag produces an .info file even if the `%expect` pragma is violated
* Fix #131: qualify uses of Prelude functions in generated code
* Fix #161: drop fewer parse items when generating an .info file
* Introduce the `%shift` directive to resolve shift/reduce conflicts explicitly, useful in conjunction with `%expect 0` (see the illustrative fragment after the 1.13 entry below)
* Remove the deprecated build configuration flag `small_base`

## 1.19.12

* Fix for building with GHC 8.8.x
* Move custom Setup preprocessing steps into a separate executable, like Alex

## 1.19.11

* Fix for building with GHC 8.6.x

## 1.19.10

* Fix polymorphic (rank-n) non-terminals
* Fix for GHC 8.8.1

## 1.19.9

* Fix cabal warnings
* Bump upper bounds
* Fix build with GHC 8.4.1-alpha

## 1.19.8

* Fix issue #94 (some grammars don't compile due to the new type signatures introduced to allow overloading to be used)

## 1.19.7

* Fix missing test suite files in the sdist

## 1.19.6

* Manually generate Parser.hs using the Makefile before sdist, to fix bootstrapping problems with cabal sandboxes & new-build
* Documentation fixes
* Fixed GLR support
* New option `-p`/`--pretty` prints the grammar rules (only) to a file
* Added generation of additional type signatures to enable use of typeclasses in monadic parsers

## 1.19.5

* Fixes for GHC 7.10
* Code cleanups (thanks Index Int)

## 1.19.4

* Fix for GHC 7.10 (Applicative/Monad, #19, #21)

## 1.19.3

* Fix for GHC 7.2 (#16)

## 1.19.2

* Fixes for clang (XCode 5)

## 1.19.1

* Repackaged to build with GHC 7.7+

## 1.19

* Necessary changes to work with GHC 7.8

## 1.18.10

* Fix build with GHC 7.6

## 1.18.8

* Fix a packaging bug (cabal-install-0.10.2 didn't put the Happy-generated files in the sdist)

## 1.18.7

* Fix a bug in error handling when using `%monad` without `%lexer`

## 1.18.5 --- 17 Jun 2010

## 1.18.4 --- 23 April 2009

## 1.18.2 --- 5 November 2008

## 1.18.1 --- 14 October 2008

## 1.18 --- 13 October 2008

* New feature: EBNF-style parameterized macros, thanks to Iavor Diatchki.
* Works with Cabal 1.2, 1.4 and 1.6
* A few minor bugfixes

## 1.17 --- 22 October 2007

* Cabal 1.2 is required
* Works with the upcoming GHC 6.8.1
* Fix the `parE` bug (poor error message for errors in the grammar)
* Some performance improvements to Happy itself

## 1.16 --- 8 January 2007

* Switch to a Cabal build system: you need a recent version of Cabal (1.1.6 or later). If you have GHC 6.4.2, then you need to upgrade Cabal before building Happy. GHC 6.6 is fine.
* New `%error` directive
* New production forms: `{%% .. }` and `{%^ .. }`
* Added Attribute Grammar support, by Robert Dockins

## 1.15 --- 14 January 2005

* New `%expect` directive
* The list of tokens passed to happyError now includes the current token (not with `%lexer`).
* Added support for ambiguous grammars via Generalized LR parsing
* Added `%partial` to indicate a parser that can return a result before EOF is reached

## 1.14 --- 14 April 2004

* New meta-variable `$>` represents the rightmost token.
* Happy's OPTIONS pragma is merged with an existing one in the grammar file, if any.

## 1.13 --- 19 June 2002

* Support for newer versions of GHC (>= 5.04).
* Addition of an experimental flag: `--strict`.
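As an illustration of the `%shift` directive introduced in 1.20.0 above (a hypothetical fragment, not from the original changelog; the `If` constructor and token names are invented): annotating the shorter production with `%shift` resolves the classic dangling-else shift/reduce conflict in favour of shifting, so the grammar can still declare `%expect 0`.

```haskell
%expect 0

Stmt : 'if' Expr 'then' Stmt %shift      { If $2 $4 Nothing   }
     | 'if' Expr 'then' Stmt 'else' Stmt { If $2 $4 (Just $6) }
```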
## 1.11 --- 25 September 2001

* Tokens no longer have a default precedence --- if you want a token to have a precedence, you have to declare it.
* Bugfix to templates for GHC on 64-bit platforms.

## 1.10

* Bugfixes, and minor performance improvements.
* Most of the examples work again.

## 1.9

* A grammar may now contain several entry points, allowing several parsers to share parts of the grammar.
* Some bugfixes.

## 1.8

* Parser table compression, and more efficient table encoding when used with GHC. Large grammars can now be compiled in much less time/space than before using GHC.
* Yacc-style operator precedence, thanks to patches from Hermann Oliveira Rodrigues and Josef Svenningsson.
* A debug option which causes the generated parser to print tracing information at each step during parsing.

## 1.6

* Now written in, and generates, Haskell 98.
* Several bug fixes.
* A new option, `-c`, generates parsers that use GHC's `unsafeCoerce#` primitive to speed up parsing and cut down the binary size. The `-c` option can only be used with the `-g` (GHC extensions) option.
* Parsers generated with the `-g` option will compile to smaller binaries now --- some sources of parser bloat were identified and squished.
* Happy has a new Open Source license, based on the BSD license.
* A sample Haskell parser using Happy is included.

## 1.5

* Many bug fixes to the error recovery support, found by experimenting with the Haskell grammar and layout.
* Happy is about 5 times faster on large examples, due to some changes in the LALR(1) algorithms.

As of version 1.5, Happy is capable of parsing full Haskell. We have a Haskell parser that uses Happy, which will shortly be part of the library collection distributed with GHC.

## 1.2

* Supports Haskell 1.4.
* Lots of bugs fixed.
* Performance: the parser generator is at least 20% faster, and generated parsers should be faster due to the replacement of a data type with a newtype.
* Simple error recovery: designed to be enough to implement the Haskell layout rule.
* Revamped monad support: the monad can now be threaded through the lexer, enabling passing of state between the parser and the lexer (handy for the Haskell layout rule), and consistent error handling.
* The `%newline` feature is removed; the same effect can be achieved using the new monad support.

## 0.9

* Happy should be much faster than before.
* Generated parsers will be 5-10% smaller.
* Happy now compiles with ghc-0.26.
* Support for monadic parsers via `%monad` (see the documentation).
* New syntax: previously

  ```haskell
  f :: { }
  f : ...
    | ...
  etc.
  ```

  can now be written

  ```haskell
  f :: { }
    : ...
    | ...
  etc.
  ```

  (i.e. omit the extra `f`. It was always ignored anyway :-)

* Miscellaneous bug fixes.

happy-1.20.1.1/LICENSE

The Happy License
-----------------

Copyright 2001, Simon Marlow and Andy Gill. All rights reserved.

Extensions to implement Tomita's Generalized LR parsing:
Copyright 2004, University of Durham, Paul Callaghan and Ben Medlock. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

happy-1.20.1.1/Makefile

CABAL = cabal
HAPPY = happy
HAPPY_OPTS = -agc
HAPPY_VER = `awk '/^version:/ { print $$2 }' happy.cabal`
ALEX = alex
ALEX_OPTS = -g

SDIST_DIR=dist-newstyle/sdist
TARBALL="${SDIST_DIR}/happy-$(HAPPY_VER).tar.gz"

GEN = src/gen/Parser.hs src/gen/AttrGrammarParser.hs

all : $(GEN)

src/gen/%.hs : src/boot/%.ly
	$(HAPPY) $(HAPPYFLAGS) $< -o $@

sdist ::
	@case "`$(CABAL) --numeric-version`" in \
	  2.[2-9].* | [3-9].* ) ;; \
	  * ) echo "Error: needs cabal 2.2.0.0 or later (but got : `$(CABAL) --numeric-version`)" ; exit 1 ;; \
	esac
	@if [ "`git status -s`" != '' ]; then \
	  echo Tree is not clean; \
	  exit 1; \
	fi
	$(HAPPY) $(HAPPY_OPTS) src/Parser.ly -o src/Parser.hs
	$(HAPPY) $(HAPPY_OPTS) src/AttrGrammarParser.ly -o src/AttrGrammarParser.hs
	mv src/Parser.ly src/Parser.ly.boot
	mv src/AttrGrammarParser.ly src/AttrGrammarParser.ly.boot
	$(CABAL) v2-run gen-happy-sdist
	cabal v2-sdist
	@if [ ! -f "${TARBALL}" ]; then \
	  echo "Error: source tarball not found: ${TARBALL}"; \
	  exit 1; \
	fi
	git checkout .
	git clean -f

sdist-test :: sdist sdist-test-only
	@rm -rf "${SDIST_DIR}/happy-${HAPPY_VER}/"

sdist-test-only ::
	@if [ ! -f "${TARBALL}" ]; then \
	  echo "Error: source tarball not found: ${TARBALL}"; \
	  exit 1; \
	fi
	rm -rf "${SDIST_DIR}/happy-$(HAPPY_VER)/"
	tar -xf "${TARBALL}" -C ${SDIST_DIR}/
	echo "packages: ." > "${SDIST_DIR}/happy-$(HAPPY_VER)/cabal.project"
	cd "${SDIST_DIR}/happy-$(HAPPY_VER)/" && cabal v2-test --enable-tests all
	@echo ""
	@echo "Success! ${TARBALL} is ready for distribution!"
	@echo ""

# Export name of generated tarball for e.g. use in CI.
print-tarball:
	@echo ${TARBALL}

happy-1.20.1.1/Setup.hs

import Distribution.Simple
main = defaultMain

happy-1.20.1.1/data/GLR_Base

{-# LINE 1 "templates/GLR_Base.hs" #-}
{-
GLR_Base.lhs
$Id: GLR_Base.lhs,v 1.4 2004/12/04 15:01:37 paulcc Exp $
-}

-- Basic defs required for compiling the data portion of the parser

-- We're creating Int-indexed graphs
type ForestId = (Int,Int,GSymbol)

-- Actions for the GLR machine
data GLRAction = Shift Int [Reduction]
               | Reduce [Reduction]
               | Accept
               | Error

---
-- A Reduction (s,n,f) removes the top n node-ids, creates a new branch from these
-- and labels the branch with the given symbol s. Additionally, the branch may
-- hold some semantic value.
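---
-- Illustrative sketch only (added annotation, not part of the template): for
-- a grammar rule like  E -> E '+' E , the generated reduction is roughly
--
--   (e_sym, 3, \[e1,op,e2] -> Branch (mkSem e1 op e2) [e1,op,e2])
--
-- i.e. pop three node-ids, build one branch labelled with the nonterminal's
-- symbol, and attach a semantic value. Here e_sym :: GSymbol and mkSem are
-- hypothetical names; the real constructors and semantic actions are
-- generated by Happy from the grammar.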
type Reduction = (GSymbol,Int, [ForestId] -> Branch)

---
-- A Branch holds the semantic result plus node ids of children
data Branch = Branch {b_sem :: GSem, b_nodes :: [ForestId]}
              deriving Show

instance Eq Branch where
  b1 == b2 = b_nodes b1 == b_nodes b2

-------------------------------------------------------------------------------
-- Utilities for decoding

---
-- Tree decode unpacks the forest into a list of results
--  - this is ok for small examples, but inefficient for very large examples
--  - the data file contains further instances
--  - see documentation for further information
--  - "Decode_Result" is a synonym used to insert the monad type constr (or not)

class TreeDecode a where
  decode_b :: (ForestId -> [Branch]) -> Branch -> [Decode_Result a]

decode :: TreeDecode a => (ForestId -> [Branch]) -> ForestId -> [Decode_Result a]
decode f i@(_,_,HappyTok t) = decode_b f (Branch (SemTok t) [])
decode f i                  = [ d | b <- f i, d <- decode_b f b ]

---- generated by Happy, since it means expansion of synonym (not ok in H-98)
--instance TreeDecode UserDefTok where
--  decode_b f (Branch (SemTok t) []) = [happy_return t]

---
-- this is used to multiply the ambiguous possibilities from children
--cross_fn :: [a -> b] -> [a] -> [b]  -- actual type will depend on monad in use
-- happy_ap defined by parser generator
cross_fn fs as = [ f `happy_ap` a | f <- fs, a <- as ]

---
-- Label decoding unpacks from the Semantic wrapper type
--  - this allows arbitrary values (within the limits of the compiler settings)
--    to be recovered from nodes in the tree.
--  - again, more instances are written in the data file
--  - see documentation for further information

class LabelDecode a where
  unpack :: GSem -> a

---- generated by Happy, since it means expansion of synonym (not ok in H-98)
--instance LabelDecode UserDefTok where
--  unpack (SemTok t) = t

happy-1.20.1.1/data/GLR_Lib

{-# LINE 1 "templates/GLR_Lib.hs" #-}
{-# LINE 1 "GLR_Lib.hs" #-}
{-
GLR_Lib.lhs
$Id: GLR_Lib.lhs,v 1.5 2005/08/03 13:42:23 paulcc Exp $
-}

{-
   Parser driver for the GLR parser.

   (c) University of Durham, Ben Medlock 2001
        -- initial code, for structure parsing
   (c) University of Durham, Paul Callaghan 2004-05
        -- extension to semantic rules
        -- shifting to chart data structure
        -- supporting hidden left recursion
        -- many optimisations
-}

{- supplied by Happy
<> module XYZ (
<> lexer  -- conditional
-}

   -- probable, but might want to parametrise
   , doParse
   , TreeDecode(..), decode  -- only for tree decode
   , LabelDecode(..)         -- only for label decode

   -- standard exports
   , Tokens
   , GLRResult(..)
   , NodeMap
   , RootNode
   , ForestId
   , GSymbol(..)
   , Branch(..)
   , GSem(..)
   )
  where

import Data.Char
import qualified Data.Map as Map

import Control.Applicative (Applicative(..))
import Control.Monad (foldM, ap)
import Data.Maybe (fromJust)
import Data.List (insertBy, nub, maximumBy, partition, find, groupBy, delete)

{- these inserted by Happy -}
fakeimport DATA

{- borrowed from GenericTemplate.hs -}

doParse = glr_parse

----------------------------------------------------------------------------
-- Main data types

-- A forest is a map of `spans' to branches, where a span is a start position,
-- an end position, and a grammatical category for that interval. Branches
-- are lists of conjunctions of symbols which can be matched in that span.
-- Note that tokens are stored as part of the spans.
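---
-- Illustrative sketch only (added annotation): for the three-token input
-- "1 + 2" and a rule E -> E '+' E, the forest could contain an entry keyed
-- by the span (0,3,e_sym) whose branch lists the child spans covering
-- positions 0-1, 1-2 and 2-3; an ambiguous input simply adds further
-- branches under the same key. (e_sym is a hypothetical GSymbol value.)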
type Forest = Map.Map ForestId [Branch] --- -- End result of parsing: -- - successful parse with rooted forest -- - else syntax error or premature eof type NodeMap = [(ForestId, [Branch])] type RootNode = ForestId type Tokens = [[(Int, GSymbol)]] -- list of ambiguous lexemes data GLRResult = ParseOK RootNode Forest -- forest with root | ParseError Tokens Forest -- partial forest with bad input | ParseEOF Forest -- partial forest (missing input) ----------------------- -- Forest to simplified output forestResult :: Int -> Forest -> GLRResult forestResult length f = case roots of [] -> ParseEOF f [r] -> ParseOK r f rs@(_:_) -> error $ "multiple roots in forest, = " ++ show rs ++ unlines (map show ns_map) where ns_map = Map.toList f roots = [ r | (r@(0,sz,sym),_) <- ns_map , sz == length , sym == top_symbol ] ---------------------------------------------------------------------------- glr_parse :: [[UserDefTok]] -> GLRResult glr_parse toks = case runST Map.empty [0..] (tp toks) of (f,Left ts) -> ParseError ts f -- Error within sentence (f,Right ss) -> forestResult (length toks) f -- Either good parse or EOF where tp tss = doActions [initTS 0] $ zipWith (\i ts -> [(i, t) | t <- ts]) [0..] $ [ [ HappyTok {-j-} t | (j,t) <- zip [0..] ts ] | ts <- tss ] ++ [[HappyEOF]] --- type PM a = ST Forest [Int] a type FStack = TStack ForestId --- -- main function doActions :: [FStack] -> Tokens -> PM (Either Tokens [FStack]) doActions ss [] -- no more tokens (this is ok) = return (Right ss) -- return the stacks (may be empty) doActions stks (tok:toks) = do stkss <- sequence [ do stks' <- reduceAll [] tok_form stks shiftAll tok_form stks' | tok_form <- tok ] let new_stks = merge $ concat stkss {- nothing -} case new_stks of -- did this token kill stacks? [] -> case toks of [] -> return $ Right [] -- ok if no more tokens _:_ -> return $ Left (tok:toks) -- not ok if some input left _ -> doActions new_stks toks reduceAll :: [GSymbol] -> (Int, GSymbol) -> [FStack] -> PM [(FStack, Int)] reduceAll _ tok [] = return [] reduceAll cyclic_names itok@(i,tok) (stk:stks) = do case action this_state tok of Accept -> reduceAll [] itok stks Error -> reduceAll [] itok stks Shift st rs -> do { ss <- redAll rs ; return $ (stk,st) : ss } Reduce rs -> redAll rs where this_state = top stk redAll rs = do let reds = [ (bf fids,stk',m) | (m,n,bf) <- rs , not (n == 0 && m `elem` cyclic_names) -- remove done ones , (fids,stk') <- pop n stk ] -- WARNING: incomplete if more than one Empty in a prod(!) 
-- WARNING: can avoid by splitting emps/non-emps {- nothing -} stks' <- foldM (pack i) stks reds let new_cyclic = [ m | (m,0,_) <- rs , (this_state == goto this_state m) , m `notElem` cyclic_names ] reduceAll (cyclic_names ++ new_cyclic) itok $ merge stks' shiftAll :: (Int, GSymbol) -> [(FStack, Int)] -> PM [FStack] shiftAll tok [] = return [] shiftAll (j,tok) stks = do let end = j + 1 let key = end `seq` (j,end,tok) newNode key let mss = [ (stk, st) | ss@((_,st):_) <- groupBy (\a b -> snd a == snd b) stks , stk <- merge $ map fst ss ] stks' <- sequence [ do { nid <- getID ; return (push key st nid stk) } | (stk,(st)) <- mss ] return stks' pack :: Int -> [FStack] -> (Branch, FStack, GSymbol) -> PM [FStack] pack e_i stks (fids,stk,m) | (st < (0)) = return stks | otherwise = do let s_i = endpoint stk let key = (s_i,e_i,m) {- nothing -} duplicate <- addBranch key fids let stack_matches = [ s | s <- stks , (top s == st) , let (k,s') = case ts_tail s of x:_ -> x , stk == s' , k == key ] -- look for first obvious packing site let appears_in = not $ null stack_matches {- nothing -} {- nothing -} if duplicate && appears_in then return stks -- because already there else do nid <- getID case stack_matches of [] -> return $ insertStack (push key st nid stk) stks -- No prior stacks s:_ -> return $ insertStack (push key st nid stk) (delete s stks) -- pack into an existing stack where st = goto (top stk) m --- -- record an entry -- - expected: "i" will contain a token newNode :: ForestId -> PM () newNode i = chgS $ \f -> ((), Map.insert i [] f) --- -- add a new branch -- - due to packing, we check to see if a branch is already there -- - return True if the branch is already there addBranch :: ForestId -> Branch -> PM Bool addBranch i b = do f <- useS id case Map.lookup i f of Nothing -> chgS $ \f -> (False, Map.insert i [b] f) Just bs | b `elem` bs -> return True | otherwise -> chgS $ \f -> (True, Map.insert i (b:bs) f) --- -- only for use with nodes that exist getBranches :: ForestId -> PM [Branch] getBranches i = useS $ \s -> Map.findWithDefault no_such_node i s where no_such_node = error $ "No such node in Forest: " ++ show i ----------------------------------------------------------------------------- -- Auxiliary functions (<>) x y = (x,y) -- syntactic sugar -- Tomita stack -- - basic idea taken from Peter Ljungloef's Licentiate thesis data TStack a = TS { top :: Int -- state , ts_id :: Int -- ID , stoup :: !(Maybe a) -- temp holding place, for left rec. 
, ts_tail :: ![(a,TStack a)] -- [(element on arc , child)] } instance Show a => Show (TStack a) where show ts = "St" ++ show ((top ts)) --- -- id uniquely identifies a stack instance Eq (TStack a) where s1 == s2 = (ts_id s1 == ts_id s2) --instance Ord (TStack a) where -- s1 `compare` s2 = IBOX(ts_id s1) `compare` IBOX(ts_id s2) --- -- Nothing special done for insertion -- - NB merging done at strategic points insertStack :: TStack a -> [TStack a] -> [TStack a] insertStack = (:) --- initTS :: Int -> TStack a initTS (id) = TS (0) id Nothing [] --- push :: ForestId -> Int -> Int -> TStack ForestId -> TStack ForestId push x@(s_i,e_i,m) st (id) stk = TS st id stoup [(x,stk)] where -- only fill stoup for cyclic states that don't consume input stoup | s_i == e_i && (st == goto st m) = Just x | otherwise = Nothing --- pop :: Int -> TStack a -> [([a],TStack a)] pop 0 ts = [([],ts)] pop 1 st@TS{stoup=Just x} = pop 1 st{stoup=Nothing} ++ [ ([x],st) ] pop n ts = [ (xs ++ [x] , stk') | (x,stk) <- ts_tail ts , (xs,stk') <- pop (n-1) stk ] --- popF :: TStack a -> TStack a popF ts = case ts_tail ts of (_,c):_ -> c --- endpoint stk = case ts_tail stk of [] -> 0 ((_,e_i,_),_):_ -> e_i --- merge :: (Eq a, Show a) => [TStack a] -> [TStack a] merge stks = [ TS st id ss (nub ch) | (st) <- nub (map (\s -> (top s)) stks) , let ch = concat [ x | TS st2 _ _ x <- stks, (st == st2) ] ss = mkss [ s | TS st2 _ s _ <- stks, (st == st2) ] ( (id)) = head [ (i) | TS st2 i _ _ <- stks, (st == st2) ] -- reuse of id is ok, since merge discards old stacks ] where mkss s = case nub [ x | Just x <- s ] of [] -> Nothing [x] -> Just x xs -> error $ unlines $ ("Stoup merge: " ++ show xs) : map show stks ---------------------------------------------------------------------------- -- Monad -- TODO (pcc): combine the s/i, or use the modern libraries - might be faster? -- but some other things are much, much, much more expensive! data ST s i a = MkST (s -> i -> (a,s,i)) instance Functor (ST s i) where fmap f (MkST sf) = MkST $ \s i -> case sf s i of (a,s',i') -> (f a,s',i') instance Applicative (ST s i) where pure a = MkST $ \s i -> (a,s,i) (<*>) = ap instance Monad (ST s i) where return = pure MkST sf >>= k = MkST $ \s i -> case sf s i of (a,s',i') -> let (MkST sf') = k a in sf' s' i' runST :: s -> i -> ST s i a -> (s,a) runST s i (MkST sf) = case sf s i of (a,s,_) -> (s,a) chgS :: (s -> (a,s)) -> ST s i a chgS sf = MkST $ \s i -> let (a,s') = sf s in (a,s',i) useS :: (s -> b) -> ST s i b useS fn = MkST $ \s i -> (fn s,s,i) getID :: ST s [Int] Int getID = MkST $ \s (i:is) -> (i,s,is) happy-1.20.1.1/data/GLR_Lib-ghc0000644000000000000000000002615407346545000014061 0ustar0000000000000000{-# LINE 1 "templates/GLR_Lib.hs" #-} {-# LINE 1 "GLR_Lib.hs" #-} {- GLR_Lib.lhs $Id: GLR_Lib.lhs,v 1.5 2005/08/03 13:42:23 paulcc Exp $ -} {- Parser driver for the GLR parser. (c) University of Durham, Ben Medlock 2001 -- initial code, for structure parsing (c) University of Durham, Paul Callaghan 2004-05 -- extension to semantic rules -- shifting to chart data structure -- supporting hidden left recursion -- many optimisations -} {- supplied by Happy <> module XYZ ( <> lexer -- conditional -} -- probable, but might want to parametrise , doParse , TreeDecode(..), decode -- only for tree decode , LabelDecode(..) -- only for label decode -- standard exports , Tokens , GLRResult(..) , NodeMap , RootNode , ForestId , GSymbol(..) , Branch(..) , GSem(..) 
) where import Data.Char import qualified Data.Map as Map import Control.Applicative (Applicative(..)) import Control.Monad (foldM, ap) import Data.Maybe (fromJust) import Data.List (insertBy, nub, maximumBy, partition, find, groupBy, delete) import GHC.Prim import GHC.Exts {- these inserted by Happy -} fakeimport DATA {- borrowed from GenericTemplate.hs -} doParse = glr_parse ---------------------------------------------------------------------------- -- Main data types -- A forest is a map of `spans' to branches, where a span is a start position, -- and end position, and a grammatical category for that interval. Branches -- are lists of conjunctions of symbols which can be matched in that span. -- Note that tokens are stored as part of the spans. type Forest = Map.Map ForestId [Branch] --- -- End result of parsing: -- - successful parse with rooted forest -- - else syntax error or premature eof type NodeMap = [(ForestId, [Branch])] type RootNode = ForestId type Tokens = [[(Int, GSymbol)]] -- list of ambiguous lexemes data GLRResult = ParseOK RootNode Forest -- forest with root | ParseError Tokens Forest -- partial forest with bad input | ParseEOF Forest -- partial forest (missing input) ----------------------- -- Forest to simplified output forestResult :: Int -> Forest -> GLRResult forestResult length f = case roots of [] -> ParseEOF f [r] -> ParseOK r f rs@(_:_) -> error $ "multiple roots in forest, = " ++ show rs ++ unlines (map show ns_map) where ns_map = Map.toList f roots = [ r | (r@(0,sz,sym),_) <- ns_map , sz == length , sym == top_symbol ] ---------------------------------------------------------------------------- glr_parse :: [[UserDefTok]] -> GLRResult glr_parse toks = case runST Map.empty [0..] (tp toks) of (f,Left ts) -> ParseError ts f -- Error within sentence (f,Right ss) -> forestResult (length toks) f -- Either good parse or EOF where tp tss = doActions [initTS 0] $ zipWith (\i ts -> [(i, t) | t <- ts]) [0..] $ [ [ HappyTok {-j-} t | (j,t) <- zip [0..] ts ] | ts <- tss ] ++ [[HappyEOF]] --- type PM a = ST Forest [Int] a type FStack = TStack ForestId --- -- main function doActions :: [FStack] -> Tokens -> PM (Either Tokens [FStack]) doActions ss [] -- no more tokens (this is ok) = return (Right ss) -- return the stacks (may be empty) doActions stks (tok:toks) = do stkss <- sequence [ do stks' <- reduceAll [] tok_form stks shiftAll tok_form stks' | tok_form <- tok ] let new_stks = merge $ concat stkss {- nothing -} case new_stks of -- did this token kill stacks? [] -> case toks of [] -> return $ Right [] -- ok if no more tokens _:_ -> return $ Left (tok:toks) -- not ok if some input left _ -> doActions new_stks toks reduceAll :: [GSymbol] -> (Int, GSymbol) -> [FStack] -> PM [(FStack, Int)] reduceAll _ tok [] = return [] reduceAll cyclic_names itok@(i,tok) (stk:stks) = do case action this_state tok of Accept -> reduceAll [] itok stks Error -> reduceAll [] itok stks Shift st rs -> do { ss <- redAll rs ; return $ (stk,st) : ss } Reduce rs -> redAll rs where this_state = top stk redAll rs = do let reds = [ (bf fids,stk',m) | (m,n,bf) <- rs , not (n == 0 && m `elem` cyclic_names) -- remove done ones , (fids,stk') <- pop n stk ] -- WARNING: incomplete if more than one Empty in a prod(!) 
-- WARNING: can avoid by splitting emps/non-emps {- nothing -} stks' <- foldM (pack i) stks reds let new_cyclic = [ m | (m,0,_) <- rs , (this_state ==# goto this_state m) , m `notElem` cyclic_names ] reduceAll (cyclic_names ++ new_cyclic) itok $ merge stks' shiftAll :: (Int, GSymbol) -> [(FStack, Int)] -> PM [FStack] shiftAll tok [] = return [] shiftAll (j,tok) stks = do let end = j + 1 let key = end `seq` (j,end,tok) newNode key let mss = [ (stk, st) | ss@((_,st):_) <- groupBy (\a b -> snd a == snd b) stks , stk <- merge $ map fst ss ] stks' <- sequence [ do { nid <- getID ; return (push key st nid stk) } | (stk,(I# (st))) <- mss ] return stks' pack :: Int -> [FStack] -> (Branch, FStack, GSymbol) -> PM [FStack] pack e_i stks (fids,stk,m) | (st <# 0#) = return stks | otherwise = do let s_i = endpoint stk let key = (s_i,e_i,m) {- nothing -} duplicate <- addBranch key fids let stack_matches = [ s | s <- stks , (top s ==# st) , let (k,s') = case ts_tail s of x:_ -> x , stk == s' , k == key ] -- look for first obvious packing site let appears_in = not $ null stack_matches {- nothing -} {- nothing -} if duplicate && appears_in then return stks -- because already there else do nid <- getID case stack_matches of [] -> return $ insertStack (push key st nid stk) stks -- No prior stacks s:_ -> return $ insertStack (push key st nid stk) (delete s stks) -- pack into an existing stack where st = goto (top stk) m --- -- record an entry -- - expected: "i" will contain a token newNode :: ForestId -> PM () newNode i = chgS $ \f -> ((), Map.insert i [] f) --- -- add a new branch -- - due to packing, we check to see if a branch is already there -- - return True if the branch is already there addBranch :: ForestId -> Branch -> PM Bool addBranch i b = do f <- useS id case Map.lookup i f of Nothing -> chgS $ \f -> (False, Map.insert i [b] f) Just bs | b `elem` bs -> return True | otherwise -> chgS $ \f -> (True, Map.insert i (b:bs) f) --- -- only for use with nodes that exist getBranches :: ForestId -> PM [Branch] getBranches i = useS $ \s -> Map.findWithDefault no_such_node i s where no_such_node = error $ "No such node in Forest: " ++ show i ----------------------------------------------------------------------------- -- Auxiliary functions (<>) x y = (x,y) -- syntactic sugar -- Tomita stack -- - basic idea taken from Peter Ljungloef's Licentiate thesis data TStack a = TS { top :: Int# -- state , ts_id :: Int# -- ID , stoup :: !(Maybe a) -- temp holding place, for left rec. 
, ts_tail :: ![(a,TStack a)] -- [(element on arc , child)] } instance Show a => Show (TStack a) where show ts = "St" ++ show ((I# (top ts))) --- -- id uniquely identifies a stack instance Eq (TStack a) where s1 == s2 = (ts_id s1 ==# ts_id s2) --instance Ord (TStack a) where -- s1 `compare` s2 = IBOX(ts_id s1) `compare` IBOX(ts_id s2) --- -- Nothing special done for insertion -- - NB merging done at strategic points insertStack :: TStack a -> [TStack a] -> [TStack a] insertStack = (:) --- initTS :: Int -> TStack a initTS (I# (id)) = TS 0# id Nothing [] --- push :: ForestId -> Int# -> Int -> TStack ForestId -> TStack ForestId push x@(s_i,e_i,m) st (I# (id)) stk = TS st id stoup [(x,stk)] where -- only fill stoup for cyclic states that don't consume input stoup | s_i == e_i && (st ==# goto st m) = Just x | otherwise = Nothing --- pop :: Int -> TStack a -> [([a],TStack a)] pop 0 ts = [([],ts)] pop 1 st@TS{stoup=Just x} = pop 1 st{stoup=Nothing} ++ [ ([x],st) ] pop n ts = [ (xs ++ [x] , stk') | (x,stk) <- ts_tail ts , (xs,stk') <- pop (n-1) stk ] --- popF :: TStack a -> TStack a popF ts = case ts_tail ts of (_,c):_ -> c --- endpoint stk = case ts_tail stk of [] -> 0 ((_,e_i,_),_):_ -> e_i --- merge :: (Eq a, Show a) => [TStack a] -> [TStack a] merge stks = [ TS st id ss (nub ch) | (I# (st)) <- nub (map (\s -> (I# (top s))) stks) , let ch = concat [ x | TS st2 _ _ x <- stks, (st ==# st2) ] ss = mkss [ s | TS st2 _ s _ <- stks, (st ==# st2) ] (! (I# (id))) = head [ (I# (i)) | TS st2 i _ _ <- stks, (st ==# st2) ] -- reuse of id is ok, since merge discards old stacks ] where mkss s = case nub [ x | Just x <- s ] of [] -> Nothing [x] -> Just x xs -> error $ unlines $ ("Stoup merge: " ++ show xs) : map show stks ---------------------------------------------------------------------------- -- Monad -- TODO (pcc): combine the s/i, or use the modern libraries - might be faster? -- but some other things are much, much, much more expensive! data ST s i a = MkST (s -> i -> (a,s,i)) instance Functor (ST s i) where fmap f (MkST sf) = MkST $ \s i -> case sf s i of (a,s',i') -> (f a,s',i') instance Applicative (ST s i) where pure a = MkST $ \s i -> (a,s,i) (<*>) = ap instance Monad (ST s i) where return = pure MkST sf >>= k = MkST $ \s i -> case sf s i of (a,s',i') -> let (MkST sf') = k a in sf' s' i' runST :: s -> i -> ST s i a -> (s,a) runST s i (MkST sf) = case sf s i of (a,s,_) -> (s,a) chgS :: (s -> (a,s)) -> ST s i a chgS sf = MkST $ \s i -> let (a,s') = sf s in (a,s',i) useS :: (s -> b) -> ST s i b useS fn = MkST $ \s i -> (fn s,s,i) getID :: ST s [Int] Int getID = MkST $ \s (i:is) -> (i,s,is) happy-1.20.1.1/data/GLR_Lib-ghc-debug0000644000000000000000000003025307346545000015140 0ustar0000000000000000{-# LINE 1 "templates/GLR_Lib.hs" #-} {-# LINE 1 "GLR_Lib.hs" #-} {- GLR_Lib.lhs $Id: GLR_Lib.lhs,v 1.5 2005/08/03 13:42:23 paulcc Exp $ -} {- Parser driver for the GLR parser. (c) University of Durham, Ben Medlock 2001 -- initial code, for structure parsing (c) University of Durham, Paul Callaghan 2004-05 -- extension to semantic rules -- shifting to chart data structure -- supporting hidden left recursion -- many optimisations -} {- supplied by Happy <> module XYZ ( <> lexer -- conditional -} -- probable, but might want to parametrise , doParse , TreeDecode(..), decode -- only for tree decode , LabelDecode(..) -- only for label decode -- standard exports , Tokens , GLRResult(..) , NodeMap , RootNode , ForestId , GSymbol(..) , Branch(..) , GSem(..) 
) where import Data.Char import qualified Data.Map as Map import Control.Applicative (Applicative(..)) import Control.Monad (foldM, ap) import Data.Maybe (fromJust) import Data.List (insertBy, nub, maximumBy, partition, find, groupBy, delete) import GHC.Prim import GHC.Exts import System.IO import System.IO.Unsafe import Text.PrettyPrint {- these inserted by Happy -} fakeimport DATA {- borrowed from GenericTemplate.hs -} happyTrace string expr = unsafePerformIO $ do hPutStr stderr string return expr doParse = glr_parse ---------------------------------------------------------------------------- -- Main data types -- A forest is a map of `spans' to branches, where a span is a start position, -- and end position, and a grammatical category for that interval. Branches -- are lists of conjunctions of symbols which can be matched in that span. -- Note that tokens are stored as part of the spans. type Forest = Map.Map ForestId [Branch] --- -- End result of parsing: -- - successful parse with rooted forest -- - else syntax error or premature eof type NodeMap = [(ForestId, [Branch])] type RootNode = ForestId type Tokens = [[(Int, GSymbol)]] -- list of ambiguous lexemes data GLRResult = ParseOK RootNode Forest -- forest with root | ParseError Tokens Forest -- partial forest with bad input | ParseEOF Forest -- partial forest (missing input) ----------------------- -- Forest to simplified output forestResult :: Int -> Forest -> GLRResult forestResult length f = case roots of [] -> ParseEOF f [r] -> ParseOK r f rs@(_:_) -> error $ "multiple roots in forest, = " ++ show rs ++ unlines (map show ns_map) where ns_map = Map.toList f roots = [ r | (r@(0,sz,sym),_) <- ns_map , sz == length , sym == top_symbol ] ---------------------------------------------------------------------------- glr_parse :: [[UserDefTok]] -> GLRResult glr_parse toks = case runST Map.empty [0..] (tp toks) of (f,Left ts) -> ParseError ts f -- Error within sentence (f,Right ss) -> forestResult (length toks) f -- Either good parse or EOF where tp tss = doActions [initTS 0] $ zipWith (\i ts -> [(i, t) | t <- ts]) [0..] $ [ [ HappyTok {-j-} t | (j,t) <- zip [0..] ts ] | ts <- tss ] ++ [[HappyEOF]] --- type PM a = ST Forest [Int] a type FStack = TStack ForestId --- -- main function doActions :: [FStack] -> Tokens -> PM (Either Tokens [FStack]) doActions ss [] -- no more tokens (this is ok) = return (Right ss) -- return the stacks (may be empty) doActions stks (tok:toks) = do stkss <- sequence [ do stks' <- reduceAll [] tok_form stks shiftAll tok_form stks' | tok_form <- tok ] let new_stks = merge $ concat stkss (happyTrace (unlines $ ("Stacks after R*/S pass" ++ show tok) : map show new_stks) $ return ()) case new_stks of -- did this token kill stacks? [] -> case toks of [] -> return $ Right [] -- ok if no more tokens _:_ -> return $ Left (tok:toks) -- not ok if some input left _ -> doActions new_stks toks reduceAll :: [GSymbol] -> (Int, GSymbol) -> [FStack] -> PM [(FStack, Int)] reduceAll _ tok [] = return [] reduceAll cyclic_names itok@(i,tok) (stk:stks) = do case action this_state tok of Accept -> reduceAll [] itok stks Error -> reduceAll [] itok stks Shift st rs -> do { ss <- redAll rs ; return $ (stk,st) : ss } Reduce rs -> redAll rs where this_state = top stk redAll rs = do let reds = [ (bf fids,stk',m) | (m,n,bf) <- rs , not (n == 0 && m `elem` cyclic_names) -- remove done ones , (fids,stk') <- pop n stk ] -- WARNING: incomplete if more than one Empty in a prod(!) 
-- WARNING: can avoid by splitting emps/non-emps (happyTrace (unlines $ ("Packing reds = " ++ show (length reds)) : map show reds) $ return ()) stks' <- foldM (pack i) stks reds let new_cyclic = [ m | (m,0,_) <- rs , (this_state ==# goto this_state m) , m `notElem` cyclic_names ] reduceAll (cyclic_names ++ new_cyclic) itok $ merge stks' shiftAll :: (Int, GSymbol) -> [(FStack, Int)] -> PM [FStack] shiftAll tok [] = return [] shiftAll (j,tok) stks = do let end = j + 1 let key = end `seq` (j,end,tok) newNode key let mss = [ (stk, st) | ss@((_,st):_) <- groupBy (\a b -> snd a == snd b) stks , stk <- merge $ map fst ss ] stks' <- sequence [ do { nid <- getID ; return (push key st nid stk) } | (stk,(I# (st))) <- mss ] return stks' pack :: Int -> [FStack] -> (Branch, FStack, GSymbol) -> PM [FStack] pack e_i stks (fids,stk,m) | (st <# 0#) = return stks | otherwise = do let s_i = endpoint stk let key = (s_i,e_i,m) (happyTrace (unlines $ ("Pack at " ++ show key ++ " " ++ show fids) : ("**" ++ show stk) : map show stks) $ return ()) duplicate <- addBranch key fids let stack_matches = [ s | s <- stks , (top s ==# st) , let (k,s') = case ts_tail s of x:_ -> x , stk == s' , k == key ] -- look for first obvious packing site let appears_in = not $ null stack_matches (happyTrace (unlines $ ("Stack Matches: " ++ show (length stack_matches)) : map show stack_matches) $ return ()) (happyTrace (if not (duplicate && appears_in) then "" else unlines $ ("DROP:" ++ show ((I# (st)),key) ++ " -- " ++ show stk) : "*****" : map show stks) $ return ()) if duplicate && appears_in then return stks -- because already there else do nid <- getID case stack_matches of [] -> return $ insertStack (push key st nid stk) stks -- No prior stacks s:_ -> return $ insertStack (push key st nid stk) (delete s stks) -- pack into an existing stack where st = goto (top stk) m --- -- record an entry -- - expected: "i" will contain a token newNode :: ForestId -> PM () newNode i = chgS $ \f -> ((), Map.insert i [] f) --- -- add a new branch -- - due to packing, we check to see if a branch is already there -- - return True if the branch is already there addBranch :: ForestId -> Branch -> PM Bool addBranch i b = do f <- useS id case Map.lookup i f of Nothing -> chgS $ \f -> (False, Map.insert i [b] f) Just bs | b `elem` bs -> return True | otherwise -> chgS $ \f -> (True, Map.insert i (b:bs) f) --- -- only for use with nodes that exist getBranches :: ForestId -> PM [Branch] getBranches i = useS $ \s -> Map.findWithDefault no_such_node i s where no_such_node = error $ "No such node in Forest: " ++ show i ----------------------------------------------------------------------------- -- Auxiliary functions (<>) x y = (x,y) -- syntactic sugar -- Tomita stack -- - basic idea taken from Peter Ljungloef's Licentiate thesis data TStack a = TS { top :: Int# -- state , ts_id :: Int# -- ID , stoup :: !(Maybe a) -- temp holding place, for left rec. 
, ts_tail :: ![(a,TStack a)] -- [(element on arc , child)] } instance Show a => Show (TStack a) where show ts = "St" ++ show ((I# (top ts))) ++ "\n" ++ render (spp $ ts_tail ts) where spp ss = nest 2 $ vcat [ vcat [text (show (v,(I# (top s)))), spp (ts_tail s)] | (v,s) <- ss ] --- -- id uniquely identifies a stack instance Eq (TStack a) where s1 == s2 = (ts_id s1 ==# ts_id s2) --instance Ord (TStack a) where -- s1 `compare` s2 = IBOX(ts_id s1) `compare` IBOX(ts_id s2) --- -- Nothing special done for insertion -- - NB merging done at strategic points insertStack :: TStack a -> [TStack a] -> [TStack a] insertStack = (:) --- initTS :: Int -> TStack a initTS (I# (id)) = TS 0# id Nothing [] --- push :: ForestId -> Int# -> Int -> TStack ForestId -> TStack ForestId push x@(s_i,e_i,m) st (I# (id)) stk = TS st id stoup [(x,stk)] where -- only fill stoup for cyclic states that don't consume input stoup | s_i == e_i && (st ==# goto st m) = Just x | otherwise = Nothing --- pop :: Int -> TStack a -> [([a],TStack a)] pop 0 ts = [([],ts)] pop 1 st@TS{stoup=Just x} = pop 1 st{stoup=Nothing} ++ [ ([x],st) ] pop n ts = [ (xs ++ [x] , stk') | (x,stk) <- ts_tail ts , (xs,stk') <- pop (n-1) stk ] --- popF :: TStack a -> TStack a popF ts = case ts_tail ts of (_,c):_ -> c --- endpoint stk = case ts_tail stk of [] -> 0 ((_,e_i,_),_):_ -> e_i --- merge :: (Eq a, Show a) => [TStack a] -> [TStack a] merge stks = [ TS st id ss (nub ch) | (I# (st)) <- nub (map (\s -> (I# (top s))) stks) , let ch = concat [ x | TS st2 _ _ x <- stks, (st ==# st2) ] ss = mkss [ s | TS st2 _ s _ <- stks, (st ==# st2) ] (! (I# (id))) = head [ (I# (i)) | TS st2 i _ _ <- stks, (st ==# st2) ] -- reuse of id is ok, since merge discards old stacks ] where mkss s = case nub [ x | Just x <- s ] of [] -> Nothing [x] -> Just x xs -> error $ unlines $ ("Stoup merge: " ++ show xs) : map show stks ---------------------------------------------------------------------------- -- Monad -- TODO (pcc): combine the s/i, or use the modern libraries - might be faster? -- but some other things are much, much, much more expensive! data ST s i a = MkST (s -> i -> (a,s,i)) instance Functor (ST s i) where fmap f (MkST sf) = MkST $ \s i -> case sf s i of (a,s',i') -> (f a,s',i') instance Applicative (ST s i) where pure a = MkST $ \s i -> (a,s,i) (<*>) = ap instance Monad (ST s i) where return = pure MkST sf >>= k = MkST $ \s i -> case sf s i of (a,s',i') -> let (MkST sf') = k a in sf' s' i' runST :: s -> i -> ST s i a -> (s,a) runST s i (MkST sf) = case sf s i of (a,s,_) -> (s,a) chgS :: (s -> (a,s)) -> ST s i a chgS sf = MkST $ \s i -> let (a,s') = sf s in (a,s',i) useS :: (s -> b) -> ST s i b useS fn = MkST $ \s i -> (fn s,s,i) getID :: ST s [Int] Int getID = MkST $ \s (i:is) -> (i,s,is) happy-1.20.1.1/data/HappyTemplate0000644000000000000000000001577407346545000014673 0ustar0000000000000000{-# LINE 1 "templates/GenericTemplate.hs" #-} -- $Id: GenericTemplate.hs,v 1.26 2005/01/14 14:47:22 simonmar Exp $ data Happy_IntList = HappyCons Prelude.Int Happy_IntList infixr 9 `HappyStk` data HappyStk a = HappyStk a (HappyStk a) ----------------------------------------------------------------------------- -- starting the parse happyParse start_state = happyNewToken start_state notHappyAtAll notHappyAtAll ----------------------------------------------------------------------------- -- Accepting the parse -- If the current token is ERROR_TOK, it means we've just accepted a partial -- parse (a %partial parser). 
We must ignore the saved token on the top of -- the stack in this case. happyAccept (1) tk st sts (_ `HappyStk` ans `HappyStk` _) = happyReturn1 ans happyAccept j tk st sts (HappyStk ans _) = (happyReturn1 ans) ----------------------------------------------------------------------------- -- Arrays only: do the next action indexShortOffAddr arr off = arr Happy_Data_Array.! off {-# INLINE happyLt #-} happyLt x y = (x Prelude.< y) readArrayBit arr bit = Bits.testBit (indexShortOffAddr arr (bit `Prelude.div` 16)) (bit `Prelude.mod` 16) ----------------------------------------------------------------------------- -- HappyState data type (not arrays) newtype HappyState b c = HappyState (Prelude.Int -> -- token number Prelude.Int -> -- token number (yes, again) b -> -- token semantic value HappyState b c -> -- current state [HappyState b c] -> -- state stack c) ----------------------------------------------------------------------------- -- Shifting a token happyShift new_state (1) tk st sts stk@(x `HappyStk` _) = let i = (case x of { HappyErrorToken (i) -> i }) in -- trace "shifting the error token" $ new_state i i tk (HappyState (new_state)) ((st):(sts)) (stk) happyShift new_state i tk st sts stk = happyNewToken new_state ((st):(sts)) ((HappyTerminal (tk))`HappyStk`stk) -- happyReduce is specialised for the common cases. happySpecReduce_0 i fn (1) tk st sts stk = happyFail [] (1) tk st sts stk happySpecReduce_0 nt fn j tk st@((HappyState (action))) sts stk = action nt j tk st ((st):(sts)) (fn `HappyStk` stk) happySpecReduce_1 i fn (1) tk st sts stk = happyFail [] (1) tk st sts stk happySpecReduce_1 nt fn j tk _ sts@(((st@(HappyState (action))):(_))) (v1`HappyStk`stk') = let r = fn v1 in happySeq r (action nt j tk st sts (r `HappyStk` stk')) happySpecReduce_2 i fn (1) tk st sts stk = happyFail [] (1) tk st sts stk happySpecReduce_2 nt fn j tk _ ((_):(sts@(((st@(HappyState (action))):(_))))) (v1`HappyStk`v2`HappyStk`stk') = let r = fn v1 v2 in happySeq r (action nt j tk st sts (r `HappyStk` stk')) happySpecReduce_3 i fn (1) tk st sts stk = happyFail [] (1) tk st sts stk happySpecReduce_3 nt fn j tk _ ((_):(((_):(sts@(((st@(HappyState (action))):(_))))))) (v1`HappyStk`v2`HappyStk`v3`HappyStk`stk') = let r = fn v1 v2 v3 in happySeq r (action nt j tk st sts (r `HappyStk` stk')) happyReduce k i fn (1) tk st sts stk = happyFail [] (1) tk st sts stk happyReduce k nt fn j tk st sts stk = case happyDrop (k Prelude.- ((1) :: Prelude.Int)) sts of sts1@(((st1@(HappyState (action))):(_))) -> let r = fn stk in -- it doesn't hurt to always seq here... 
happyDoSeq r (action nt j tk st1 sts1 r) happyMonadReduce k nt fn (1) tk st sts stk = happyFail [] (1) tk st sts stk happyMonadReduce k nt fn j tk st sts stk = case happyDrop k ((st):(sts)) of sts1@(((st1@(HappyState (action))):(_))) -> let drop_stk = happyDropStk k stk in happyThen1 (fn stk tk) (\r -> action nt j tk st1 sts1 (r `HappyStk` drop_stk)) happyMonad2Reduce k nt fn (1) tk st sts stk = happyFail [] (1) tk st sts stk happyMonad2Reduce k nt fn j tk st sts stk = case happyDrop k ((st):(sts)) of sts1@(((st1@(HappyState (action))):(_))) -> let drop_stk = happyDropStk k stk _ = nt :: Prelude.Int new_state = action in happyThen1 (fn stk tk) (\r -> happyNewToken new_state sts1 (r `HappyStk` drop_stk)) happyDrop (0) l = l happyDrop n ((_):(t)) = happyDrop (n Prelude.- ((1) :: Prelude.Int)) t happyDropStk (0) l = l happyDropStk n (x `HappyStk` xs) = happyDropStk (n Prelude.- ((1)::Prelude.Int)) xs ----------------------------------------------------------------------------- -- Moving to a new state after a reduction happyGoto action j tk st = action j j tk (HappyState action) ----------------------------------------------------------------------------- -- Error recovery (ERROR_TOK is the error token) -- parse error if we are in recovery and we fail again happyFail explist (1) tk old_st _ stk@(x `HappyStk` _) = let i = (case x of { HappyErrorToken (i) -> i }) in -- trace "failing" $ happyError_ explist i tk {- We don't need state discarding for our restricted implementation of "error". In fact, it can cause some bogus parses, so I've disabled it for now --SDM -- discard a state happyFail ERROR_TOK tk old_st CONS(HAPPYSTATE(action),sts) (saved_tok `HappyStk` _ `HappyStk` stk) = -- trace ("discarding state, depth " ++ show (length stk)) $ DO_ACTION(action,ERROR_TOK,tk,sts,(saved_tok`HappyStk`stk)) -} -- Enter error recovery: generate an error token, -- save the old token and carry on. happyFail explist i tk (HappyState (action)) sts stk = -- trace "entering error recovery" $ action (1) (1) tk (HappyState (action)) sts ((HappyErrorToken (i)) `HappyStk` stk) -- Internal happy errors: notHappyAtAll :: a notHappyAtAll = Prelude.error "Internal Happy error\n" ----------------------------------------------------------------------------- -- Hack to get the typechecker to accept our action functions ----------------------------------------------------------------------------- -- Seq-ing. If the --strict flag is given, then Happy emits -- happySeq = happyDoSeq -- otherwise it emits -- happySeq = happyDontSeq happyDoSeq, happyDontSeq :: a -> b -> b happyDoSeq a b = a `Prelude.seq` b happyDontSeq a b = b ----------------------------------------------------------------------------- -- Don't inline any functions from the template. GHC has a nasty habit -- of deciding to inline happyGoto everywhere, which increases the size of -- the generated parser quite a bit. {-# NOINLINE happyShift #-} {-# NOINLINE happySpecReduce_0 #-} {-# NOINLINE happySpecReduce_1 #-} {-# NOINLINE happySpecReduce_2 #-} {-# NOINLINE happySpecReduce_3 #-} {-# NOINLINE happyReduce #-} {-# NOINLINE happyMonadReduce #-} {-# NOINLINE happyGoto #-} {-# NOINLINE happyFail #-} -- end of Happy Template. 
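-- Illustrative sketch only (added annotation, not part of the distributed
-- template): the semantic stack above is a plain spine of HappyStk cells,
-- so pushing and peeking look like
--
--   example :: HappyStk Int
--   example = 1 `HappyStk` 2 `HappyStk` notHappyAtAll  -- push 2, then 1
--
--   peek :: HappyStk a -> a
--   peek (HappyStk x _) = x                            -- peek example == 1
--
-- with notHappyAtAll serving as the never-inspected bottom of the stack.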
happy-1.20.1.1/data/HappyTemplate-arrays0000644000000000000000000002053307346545000016157 0ustar0000000000000000{-# LINE 1 "templates/GenericTemplate.hs" #-} -- $Id: GenericTemplate.hs,v 1.26 2005/01/14 14:47:22 simonmar Exp $ data Happy_IntList = HappyCons Prelude.Int Happy_IntList infixr 9 `HappyStk` data HappyStk a = HappyStk a (HappyStk a) ----------------------------------------------------------------------------- -- starting the parse happyParse start_state = happyNewToken start_state notHappyAtAll notHappyAtAll ----------------------------------------------------------------------------- -- Accepting the parse -- If the current token is ERROR_TOK, it means we've just accepted a partial -- parse (a %partial parser). We must ignore the saved token on the top of -- the stack in this case. happyAccept (0) tk st sts (_ `HappyStk` ans `HappyStk` _) = happyReturn1 ans happyAccept j tk st sts (HappyStk ans _) = (happyReturn1 ans) ----------------------------------------------------------------------------- -- Arrays only: do the next action happyDoAction i tk st = {- nothing -} case action of (0) -> {- nothing -} happyFail (happyExpListPerState ((st) :: Prelude.Int)) i tk st (-1) -> {- nothing -} happyAccept i tk st n | (n Prelude.< ((0) :: Prelude.Int)) -> {- nothing -} (happyReduceArr Happy_Data_Array.! rule) i tk st where rule = ((Prelude.negate ((n Prelude.+ ((1) :: Prelude.Int))))) n -> {- nothing -} happyShift new_state i tk st where new_state = (n Prelude.- ((1) :: Prelude.Int)) where off = happyAdjustOffset (indexShortOffAddr happyActOffsets st) off_i = (off Prelude.+ i) check = if (off_i Prelude.>= ((0) :: Prelude.Int)) then (indexShortOffAddr happyCheck off_i Prelude.== i) else Prelude.False action | check = indexShortOffAddr happyTable off_i | Prelude.otherwise = indexShortOffAddr happyDefActions st indexShortOffAddr arr off = arr Happy_Data_Array.! off {-# INLINE happyLt #-} happyLt x y = (x Prelude.< y) readArrayBit arr bit = Bits.testBit (indexShortOffAddr arr (bit `Prelude.div` 16)) (bit `Prelude.mod` 16) ----------------------------------------------------------------------------- -- HappyState data type (not arrays) ----------------------------------------------------------------------------- -- Shifting a token happyShift new_state (0) tk st sts stk@(x `HappyStk` _) = let i = (case x of { HappyErrorToken (i) -> i }) in -- trace "shifting the error token" $ happyDoAction i tk new_state (HappyCons (st) (sts)) (stk) happyShift new_state i tk st sts stk = happyNewToken new_state (HappyCons (st) (sts)) ((HappyTerminal (tk))`HappyStk`stk) -- happyReduce is specialised for the common cases. 
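-- Illustrative note (added annotation): a rule with up to three symbols on
-- its right-hand side compiles to a call of happySpecReduce_N below, which
-- pattern-matches the N semantic values straight off the stack
-- (v1 `HappyStk` v2 `HappyStk` ... `HappyStk` stk'); longer rules fall back
-- to the generic happyReduce k, which walks the stack with
-- happyDrop/happyDropStk.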
happySpecReduce_0 i fn (0) tk st sts stk = happyFail [] (0) tk st sts stk happySpecReduce_0 nt fn j tk st@((action)) sts stk = happyGoto nt j tk st (HappyCons (st) (sts)) (fn `HappyStk` stk) happySpecReduce_1 i fn (0) tk st sts stk = happyFail [] (0) tk st sts stk happySpecReduce_1 nt fn j tk _ sts@((HappyCons (st@(action)) (_))) (v1`HappyStk`stk') = let r = fn v1 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happySpecReduce_2 i fn (0) tk st sts stk = happyFail [] (0) tk st sts stk happySpecReduce_2 nt fn j tk _ (HappyCons (_) (sts@((HappyCons (st@(action)) (_))))) (v1`HappyStk`v2`HappyStk`stk') = let r = fn v1 v2 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happySpecReduce_3 i fn (0) tk st sts stk = happyFail [] (0) tk st sts stk happySpecReduce_3 nt fn j tk _ (HappyCons (_) ((HappyCons (_) (sts@((HappyCons (st@(action)) (_))))))) (v1`HappyStk`v2`HappyStk`v3`HappyStk`stk') = let r = fn v1 v2 v3 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happyReduce k i fn (0) tk st sts stk = happyFail [] (0) tk st sts stk happyReduce k nt fn j tk st sts stk = case happyDrop (k Prelude.- ((1) :: Prelude.Int)) sts of sts1@((HappyCons (st1@(action)) (_))) -> let r = fn stk in -- it doesn't hurt to always seq here... happyDoSeq r (happyGoto nt j tk st1 sts1 r) happyMonadReduce k nt fn (0) tk st sts stk = happyFail [] (0) tk st sts stk happyMonadReduce k nt fn j tk st sts stk = case happyDrop k (HappyCons (st) (sts)) of sts1@((HappyCons (st1@(action)) (_))) -> let drop_stk = happyDropStk k stk in happyThen1 (fn stk tk) (\r -> happyGoto nt j tk st1 sts1 (r `HappyStk` drop_stk)) happyMonad2Reduce k nt fn (0) tk st sts stk = happyFail [] (0) tk st sts stk happyMonad2Reduce k nt fn j tk st sts stk = case happyDrop k (HappyCons (st) (sts)) of sts1@((HappyCons (st1@(action)) (_))) -> let drop_stk = happyDropStk k stk off = happyAdjustOffset (indexShortOffAddr happyGotoOffsets st1) off_i = (off Prelude.+ nt) new_state = indexShortOffAddr happyTable off_i in happyThen1 (fn stk tk) (\r -> happyNewToken new_state sts1 (r `HappyStk` drop_stk)) happyDrop (0) l = l happyDrop n (HappyCons (_) (t)) = happyDrop (n Prelude.- ((1) :: Prelude.Int)) t happyDropStk (0) l = l happyDropStk n (x `HappyStk` xs) = happyDropStk (n Prelude.- ((1)::Prelude.Int)) xs ----------------------------------------------------------------------------- -- Moving to a new state after a reduction happyGoto nt j tk st = {- nothing -} happyDoAction j tk new_state where off = happyAdjustOffset (indexShortOffAddr happyGotoOffsets st) off_i = (off Prelude.+ nt) new_state = indexShortOffAddr happyTable off_i ----------------------------------------------------------------------------- -- Error recovery (ERROR_TOK is the error token) -- parse error if we are in recovery and we fail again happyFail explist (0) tk old_st _ stk@(x `HappyStk` _) = let i = (case x of { HappyErrorToken (i) -> i }) in -- trace "failing" $ happyError_ explist i tk {- We don't need state discarding for our restricted implementation of "error". In fact, it can cause some bogus parses, so I've disabled it for now --SDM -- discard a state happyFail ERROR_TOK tk old_st CONS(HAPPYSTATE(action),sts) (saved_tok `HappyStk` _ `HappyStk` stk) = -- trace ("discarding state, depth " ++ show (length stk)) $ DO_ACTION(action,ERROR_TOK,tk,sts,(saved_tok`HappyStk`stk)) -} -- Enter error recovery: generate an error token, -- save the old token and carry on. 
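-- Illustrative note (added annotation): on an unexpected token the machine
-- is re-run with token number 0 (the error token) while the offending
-- token's number i is parked on the stack inside HappyErrorToken, giving a
-- grammar rule that mentions `error` a chance to shift it; if the machine
-- fails again in that state, the first clause of happyFail above reports
-- the error via happyError_.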
happyFail explist i tk (action) sts stk = -- trace "entering error recovery" $ happyDoAction (0) tk action sts ((HappyErrorToken (i)) `HappyStk` stk) -- Internal happy errors: notHappyAtAll :: a notHappyAtAll = Prelude.error "Internal Happy error\n" ----------------------------------------------------------------------------- -- Hack to get the typechecker to accept our action functions ----------------------------------------------------------------------------- -- Seq-ing. If the --strict flag is given, then Happy emits -- happySeq = happyDoSeq -- otherwise it emits -- happySeq = happyDontSeq happyDoSeq, happyDontSeq :: a -> b -> b happyDoSeq a b = a `Prelude.seq` b happyDontSeq a b = b ----------------------------------------------------------------------------- -- Don't inline any functions from the template. GHC has a nasty habit -- of deciding to inline happyGoto everywhere, which increases the size of -- the generated parser quite a bit. {-# NOINLINE happyDoAction #-} {-# NOINLINE happyTable #-} {-# NOINLINE happyCheck #-} {-# NOINLINE happyActOffsets #-} {-# NOINLINE happyGotoOffsets #-} {-# NOINLINE happyDefActions #-} {-# NOINLINE happyShift #-} {-# NOINLINE happySpecReduce_0 #-} {-# NOINLINE happySpecReduce_1 #-} {-# NOINLINE happySpecReduce_2 #-} {-# NOINLINE happySpecReduce_3 #-} {-# NOINLINE happyReduce #-} {-# NOINLINE happyMonadReduce #-} {-# NOINLINE happyGoto #-} {-# NOINLINE happyFail #-} -- end of Happy Template. happy-1.20.1.1/data/HappyTemplate-arrays-coerce0000644000000000000000000002330707346545000017417 0ustar0000000000000000{-# LINE 1 "templates/GenericTemplate.hs" #-} -- $Id: GenericTemplate.hs,v 1.26 2005/01/14 14:47:22 simonmar Exp $ -- Do not remove this comment. Required to fix CPP parsing when using GCC and a clang-compiled alex. #if __GLASGOW_HASKELL__ > 706 #define LT(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.<# m)) :: Prelude.Bool) #define GTE(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.>=# m)) :: Prelude.Bool) #define EQ(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.==# m)) :: Prelude.Bool) #else #define LT(n,m) (n Happy_GHC_Exts.<# m) #define GTE(n,m) (n Happy_GHC_Exts.>=# m) #define EQ(n,m) (n Happy_GHC_Exts.==# m) #endif data Happy_IntList = HappyCons Happy_GHC_Exts.Int# Happy_IntList infixr 9 `HappyStk` data HappyStk a = HappyStk a (HappyStk a) ----------------------------------------------------------------------------- -- starting the parse happyParse start_state = happyNewToken start_state notHappyAtAll notHappyAtAll ----------------------------------------------------------------------------- -- Accepting the parse -- If the current token is ERROR_TOK, it means we've just accepted a partial -- parse (a %partial parser). We must ignore the saved token on the top of -- the stack in this case. happyAccept 0# tk st sts (_ `HappyStk` ans `HappyStk` _) = happyReturn1 ans happyAccept j tk st sts (HappyStk ans _) = (happyTcHack j (happyTcHack st)) (happyReturn1 ans) ----------------------------------------------------------------------------- -- Arrays only: do the next action happyDoAction i tk st = {- nothing -} case action of 0# -> {- nothing -} happyFail (happyExpListPerState ((Happy_GHC_Exts.I# (st)) :: Prelude.Int)) i tk st -1# -> {- nothing -} happyAccept i tk st n | LT(n,(0# :: Happy_GHC_Exts.Int#)) -> {- nothing -} (happyReduceArr Happy_Data_Array.! 
rule) i tk st where rule = (Happy_GHC_Exts.I# ((Happy_GHC_Exts.negateInt# ((n Happy_GHC_Exts.+# (1# :: Happy_GHC_Exts.Int#)))))) n -> {- nothing -} happyShift new_state i tk st where new_state = (n Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) where off = happyAdjustOffset (indexShortOffAddr happyActOffsets st) off_i = (off Happy_GHC_Exts.+# i) check = if GTE(off_i,(0# :: Happy_GHC_Exts.Int#)) then EQ(indexShortOffAddr happyCheck off_i, i) else Prelude.False action | check = indexShortOffAddr happyTable off_i | Prelude.otherwise = indexShortOffAddr happyDefActions st indexShortOffAddr (HappyA# arr) off = Happy_GHC_Exts.narrow16Int# i where i = Happy_GHC_Exts.word2Int# (Happy_GHC_Exts.or# (Happy_GHC_Exts.uncheckedShiftL# high 8#) low) high = Happy_GHC_Exts.int2Word# (Happy_GHC_Exts.ord# (Happy_GHC_Exts.indexCharOffAddr# arr (off' Happy_GHC_Exts.+# 1#))) low = Happy_GHC_Exts.int2Word# (Happy_GHC_Exts.ord# (Happy_GHC_Exts.indexCharOffAddr# arr off')) off' = off Happy_GHC_Exts.*# 2# {-# INLINE happyLt #-} happyLt x y = LT(x,y) readArrayBit arr bit = Bits.testBit (Happy_GHC_Exts.I# (indexShortOffAddr arr ((unbox_int bit) `Happy_GHC_Exts.iShiftRA#` 4#))) (bit `Prelude.mod` 16) where unbox_int (Happy_GHC_Exts.I# x) = x data HappyAddr = HappyA# Happy_GHC_Exts.Addr# ----------------------------------------------------------------------------- -- HappyState data type (not arrays) ----------------------------------------------------------------------------- -- Shifting a token happyShift new_state 0# tk st sts stk@(x `HappyStk` _) = let i = (case Happy_GHC_Exts.unsafeCoerce# x of { (Happy_GHC_Exts.I# (i)) -> i }) in -- trace "shifting the error token" $ happyDoAction i tk new_state (HappyCons (st) (sts)) (stk) happyShift new_state i tk st sts stk = happyNewToken new_state (HappyCons (st) (sts)) ((happyInTok (tk))`HappyStk`stk) -- happyReduce is specialised for the common cases. happySpecReduce_0 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_0 nt fn j tk st@((action)) sts stk = happyGoto nt j tk st (HappyCons (st) (sts)) (fn `HappyStk` stk) happySpecReduce_1 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_1 nt fn j tk _ sts@((HappyCons (st@(action)) (_))) (v1`HappyStk`stk') = let r = fn v1 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happySpecReduce_2 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_2 nt fn j tk _ (HappyCons (_) (sts@((HappyCons (st@(action)) (_))))) (v1`HappyStk`v2`HappyStk`stk') = let r = fn v1 v2 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happySpecReduce_3 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_3 nt fn j tk _ (HappyCons (_) ((HappyCons (_) (sts@((HappyCons (st@(action)) (_))))))) (v1`HappyStk`v2`HappyStk`v3`HappyStk`stk') = let r = fn v1 v2 v3 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happyReduce k i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happyReduce k nt fn j tk st sts stk = case happyDrop (k Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) sts of sts1@((HappyCons (st1@(action)) (_))) -> let r = fn stk in -- it doesn't hurt to always seq here... 
happyDoSeq r (happyGoto nt j tk st1 sts1 r) happyMonadReduce k nt fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happyMonadReduce k nt fn j tk st sts stk = case happyDrop k (HappyCons (st) (sts)) of sts1@((HappyCons (st1@(action)) (_))) -> let drop_stk = happyDropStk k stk in happyThen1 (fn stk tk) (\r -> happyGoto nt j tk st1 sts1 (r `HappyStk` drop_stk)) happyMonad2Reduce k nt fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happyMonad2Reduce k nt fn j tk st sts stk = case happyDrop k (HappyCons (st) (sts)) of sts1@((HappyCons (st1@(action)) (_))) -> let drop_stk = happyDropStk k stk off = happyAdjustOffset (indexShortOffAddr happyGotoOffsets st1) off_i = (off Happy_GHC_Exts.+# nt) new_state = indexShortOffAddr happyTable off_i in happyThen1 (fn stk tk) (\r -> happyNewToken new_state sts1 (r `HappyStk` drop_stk)) happyDrop 0# l = l happyDrop n (HappyCons (_) (t)) = happyDrop (n Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) t happyDropStk 0# l = l happyDropStk n (x `HappyStk` xs) = happyDropStk (n Happy_GHC_Exts.-# (1#::Happy_GHC_Exts.Int#)) xs ----------------------------------------------------------------------------- -- Moving to a new state after a reduction happyGoto nt j tk st = {- nothing -} happyDoAction j tk new_state where off = happyAdjustOffset (indexShortOffAddr happyGotoOffsets st) off_i = (off Happy_GHC_Exts.+# nt) new_state = indexShortOffAddr happyTable off_i ----------------------------------------------------------------------------- -- Error recovery (ERROR_TOK is the error token) -- parse error if we are in recovery and we fail again happyFail explist 0# tk old_st _ stk@(x `HappyStk` _) = let i = (case Happy_GHC_Exts.unsafeCoerce# x of { (Happy_GHC_Exts.I# (i)) -> i }) in -- trace "failing" $ happyError_ explist i tk {- We don't need state discarding for our restricted implementation of "error". In fact, it can cause some bogus parses, so I've disabled it for now --SDM -- discard a state happyFail ERROR_TOK tk old_st CONS(HAPPYSTATE(action),sts) (saved_tok `HappyStk` _ `HappyStk` stk) = -- trace ("discarding state, depth " ++ show (length stk)) $ DO_ACTION(action,ERROR_TOK,tk,sts,(saved_tok`HappyStk`stk)) -} -- Enter error recovery: generate an error token, -- save the old token and carry on. happyFail explist i tk (action) sts stk = -- trace "entering error recovery" $ happyDoAction 0# tk action sts ((Happy_GHC_Exts.unsafeCoerce# (Happy_GHC_Exts.I# (i))) `HappyStk` stk) -- Internal happy errors: notHappyAtAll :: a notHappyAtAll = Prelude.error "Internal Happy error\n" ----------------------------------------------------------------------------- -- Hack to get the typechecker to accept our action functions happyTcHack :: Happy_GHC_Exts.Int# -> a -> a happyTcHack x y = y {-# INLINE happyTcHack #-} ----------------------------------------------------------------------------- -- Seq-ing. If the --strict flag is given, then Happy emits -- happySeq = happyDoSeq -- otherwise it emits -- happySeq = happyDontSeq happyDoSeq, happyDontSeq :: a -> b -> b happyDoSeq a b = a `Prelude.seq` b happyDontSeq a b = b ----------------------------------------------------------------------------- -- Don't inline any functions from the template. GHC has a nasty habit -- of deciding to inline happyGoto everywhere, which increases the size of -- the generated parser quite a bit. 
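-- (One further note on this coercing variant, before the pragmas -- an
-- editorial sketch, not template output: parsers built against this
-- template (GHC extensions plus coercions) push semantic values through
-- unsafeCoerce#, e.g.
--
--   (Happy_GHC_Exts.unsafeCoerce# e) `HappyStk` stk
--
-- where the plain templates push a tagged value such as
--
--   (HappyErrorToken (i)) `HappyStk` stk
--
-- so every stack slot shares one uniform representation and no tag
-- constructor is allocated per push.)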
{-# NOINLINE happyDoAction #-} {-# NOINLINE happyTable #-} {-# NOINLINE happyCheck #-} {-# NOINLINE happyActOffsets #-} {-# NOINLINE happyGotoOffsets #-} {-# NOINLINE happyDefActions #-} {-# NOINLINE happyShift #-} {-# NOINLINE happySpecReduce_0 #-} {-# NOINLINE happySpecReduce_1 #-} {-# NOINLINE happySpecReduce_2 #-} {-# NOINLINE happySpecReduce_3 #-} {-# NOINLINE happyReduce #-} {-# NOINLINE happyMonadReduce #-} {-# NOINLINE happyGoto #-} {-# NOINLINE happyFail #-} -- end of Happy Template. happy-1.20.1.1/data/HappyTemplate-arrays-coerce-debug0000644000000000000000000002453707346545000020511 0ustar0000000000000000{-# LINE 1 "templates/GenericTemplate.hs" #-} -- $Id: GenericTemplate.hs,v 1.26 2005/01/14 14:47:22 simonmar Exp $ -- Do not remove this comment. Required to fix CPP parsing when using GCC and a clang-compiled alex. #if __GLASGOW_HASKELL__ > 706 #define LT(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.<# m)) :: Prelude.Bool) #define GTE(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.>=# m)) :: Prelude.Bool) #define EQ(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.==# m)) :: Prelude.Bool) #else #define LT(n,m) (n Happy_GHC_Exts.<# m) #define GTE(n,m) (n Happy_GHC_Exts.>=# m) #define EQ(n,m) (n Happy_GHC_Exts.==# m) #endif data Happy_IntList = HappyCons Happy_GHC_Exts.Int# Happy_IntList happyTrace string expr = Happy_System_IO_Unsafe.unsafePerformIO $ do Happy_System_IO.hPutStr Happy_System_IO.stderr string return expr infixr 9 `HappyStk` data HappyStk a = HappyStk a (HappyStk a) ----------------------------------------------------------------------------- -- starting the parse happyParse start_state = happyNewToken start_state notHappyAtAll notHappyAtAll ----------------------------------------------------------------------------- -- Accepting the parse -- If the current token is ERROR_TOK, it means we've just accepted a partial -- parse (a %partial parser). We must ignore the saved token on the top of -- the stack in this case. happyAccept 0# tk st sts (_ `HappyStk` ans `HappyStk` _) = happyReturn1 ans happyAccept j tk st sts (HappyStk ans _) = (happyTcHack j (happyTcHack st)) (happyReturn1 ans) ----------------------------------------------------------------------------- -- Arrays only: do the next action happyDoAction i tk st = (happyTrace ("state: " ++ show (Happy_GHC_Exts.I# (st)) ++ ",\ttoken: " ++ show (Happy_GHC_Exts.I# (i)) ++ ",\taction: ")) $ case action of 0# -> (happyTrace ("fail.\n")) $ happyFail (happyExpListPerState ((Happy_GHC_Exts.I# (st)) :: Prelude.Int)) i tk st -1# -> (happyTrace ("accept.\n")) $ happyAccept i tk st n | LT(n,(0# :: Happy_GHC_Exts.Int#)) -> (happyTrace ("reduce (rule " ++ show rule ++ ")")) $ (happyReduceArr Happy_Data_Array.! 
rule) i tk st where rule = (Happy_GHC_Exts.I# ((Happy_GHC_Exts.negateInt# ((n Happy_GHC_Exts.+# (1# :: Happy_GHC_Exts.Int#)))))) n -> (happyTrace ("shift, enter state " ++ show (Happy_GHC_Exts.I# (new_state)) ++ "\n")) $ happyShift new_state i tk st where new_state = (n Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) where off = happyAdjustOffset (indexShortOffAddr happyActOffsets st) off_i = (off Happy_GHC_Exts.+# i) check = if GTE(off_i,(0# :: Happy_GHC_Exts.Int#)) then EQ(indexShortOffAddr happyCheck off_i, i) else Prelude.False action | check = indexShortOffAddr happyTable off_i | Prelude.otherwise = indexShortOffAddr happyDefActions st indexShortOffAddr (HappyA# arr) off = Happy_GHC_Exts.narrow16Int# i where i = Happy_GHC_Exts.word2Int# (Happy_GHC_Exts.or# (Happy_GHC_Exts.uncheckedShiftL# high 8#) low) high = Happy_GHC_Exts.int2Word# (Happy_GHC_Exts.ord# (Happy_GHC_Exts.indexCharOffAddr# arr (off' Happy_GHC_Exts.+# 1#))) low = Happy_GHC_Exts.int2Word# (Happy_GHC_Exts.ord# (Happy_GHC_Exts.indexCharOffAddr# arr off')) off' = off Happy_GHC_Exts.*# 2# {-# INLINE happyLt #-} happyLt x y = LT(x,y) readArrayBit arr bit = Bits.testBit (Happy_GHC_Exts.I# (indexShortOffAddr arr ((unbox_int bit) `Happy_GHC_Exts.iShiftRA#` 4#))) (bit `Prelude.mod` 16) where unbox_int (Happy_GHC_Exts.I# x) = x data HappyAddr = HappyA# Happy_GHC_Exts.Addr# ----------------------------------------------------------------------------- -- HappyState data type (not arrays) ----------------------------------------------------------------------------- -- Shifting a token happyShift new_state 0# tk st sts stk@(x `HappyStk` _) = let i = (case Happy_GHC_Exts.unsafeCoerce# x of { (Happy_GHC_Exts.I# (i)) -> i }) in -- trace "shifting the error token" $ happyDoAction i tk new_state (HappyCons (st) (sts)) (stk) happyShift new_state i tk st sts stk = happyNewToken new_state (HappyCons (st) (sts)) ((happyInTok (tk))`HappyStk`stk) -- happyReduce is specialised for the common cases. happySpecReduce_0 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_0 nt fn j tk st@((action)) sts stk = happyGoto nt j tk st (HappyCons (st) (sts)) (fn `HappyStk` stk) happySpecReduce_1 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_1 nt fn j tk _ sts@((HappyCons (st@(action)) (_))) (v1`HappyStk`stk') = let r = fn v1 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happySpecReduce_2 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_2 nt fn j tk _ (HappyCons (_) (sts@((HappyCons (st@(action)) (_))))) (v1`HappyStk`v2`HappyStk`stk') = let r = fn v1 v2 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happySpecReduce_3 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_3 nt fn j tk _ (HappyCons (_) ((HappyCons (_) (sts@((HappyCons (st@(action)) (_))))))) (v1`HappyStk`v2`HappyStk`v3`HappyStk`stk') = let r = fn v1 v2 v3 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happyReduce k i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happyReduce k nt fn j tk st sts stk = case happyDrop (k Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) sts of sts1@((HappyCons (st1@(action)) (_))) -> let r = fn stk in -- it doesn't hurt to always seq here... 
happyDoSeq r (happyGoto nt j tk st1 sts1 r) happyMonadReduce k nt fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happyMonadReduce k nt fn j tk st sts stk = case happyDrop k (HappyCons (st) (sts)) of sts1@((HappyCons (st1@(action)) (_))) -> let drop_stk = happyDropStk k stk in happyThen1 (fn stk tk) (\r -> happyGoto nt j tk st1 sts1 (r `HappyStk` drop_stk)) happyMonad2Reduce k nt fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happyMonad2Reduce k nt fn j tk st sts stk = case happyDrop k (HappyCons (st) (sts)) of sts1@((HappyCons (st1@(action)) (_))) -> let drop_stk = happyDropStk k stk off = happyAdjustOffset (indexShortOffAddr happyGotoOffsets st1) off_i = (off Happy_GHC_Exts.+# nt) new_state = indexShortOffAddr happyTable off_i in happyThen1 (fn stk tk) (\r -> happyNewToken new_state sts1 (r `HappyStk` drop_stk)) happyDrop 0# l = l happyDrop n (HappyCons (_) (t)) = happyDrop (n Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) t happyDropStk 0# l = l happyDropStk n (x `HappyStk` xs) = happyDropStk (n Happy_GHC_Exts.-# (1#::Happy_GHC_Exts.Int#)) xs ----------------------------------------------------------------------------- -- Moving to a new state after a reduction happyGoto nt j tk st = (happyTrace (", goto state " ++ show (Happy_GHC_Exts.I# (new_state)) ++ "\n")) $ happyDoAction j tk new_state where off = happyAdjustOffset (indexShortOffAddr happyGotoOffsets st) off_i = (off Happy_GHC_Exts.+# nt) new_state = indexShortOffAddr happyTable off_i ----------------------------------------------------------------------------- -- Error recovery (ERROR_TOK is the error token) -- parse error if we are in recovery and we fail again happyFail explist 0# tk old_st _ stk@(x `HappyStk` _) = let i = (case Happy_GHC_Exts.unsafeCoerce# x of { (Happy_GHC_Exts.I# (i)) -> i }) in -- trace "failing" $ happyError_ explist i tk {- We don't need state discarding for our restricted implementation of "error". In fact, it can cause some bogus parses, so I've disabled it for now --SDM -- discard a state happyFail ERROR_TOK tk old_st CONS(HAPPYSTATE(action),sts) (saved_tok `HappyStk` _ `HappyStk` stk) = -- trace ("discarding state, depth " ++ show (length stk)) $ DO_ACTION(action,ERROR_TOK,tk,sts,(saved_tok`HappyStk`stk)) -} -- Enter error recovery: generate an error token, -- save the old token and carry on. happyFail explist i tk (action) sts stk = -- trace "entering error recovery" $ happyDoAction 0# tk action sts ((Happy_GHC_Exts.unsafeCoerce# (Happy_GHC_Exts.I# (i))) `HappyStk` stk) -- Internal happy errors: notHappyAtAll :: a notHappyAtAll = Prelude.error "Internal Happy error\n" ----------------------------------------------------------------------------- -- Hack to get the typechecker to accept our action functions happyTcHack :: Happy_GHC_Exts.Int# -> a -> a happyTcHack x y = y {-# INLINE happyTcHack #-} ----------------------------------------------------------------------------- -- Seq-ing. If the --strict flag is given, then Happy emits -- happySeq = happyDoSeq -- otherwise it emits -- happySeq = happyDontSeq happyDoSeq, happyDontSeq :: a -> b -> b happyDoSeq a b = a `Prelude.seq` b happyDontSeq a b = b ----------------------------------------------------------------------------- -- Don't inline any functions from the template. GHC has a nasty habit -- of deciding to inline happyGoto everywhere, which increases the size of -- the generated parser quite a bit. 
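-- (A final note for this debug variant: the happyTrace calls above write a
-- step-by-step log to stderr. An illustrative sample -- the state, token
-- and rule numbers here are invented for the example:
--
--   state: 5,   token: 3,   action: shift, enter state 12
--   state: 12,  token: 6,   action: reduce (rule 4), goto state 7
--   state: 7,   token: 9,   action: accept.
-- )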
{-# NOINLINE happyDoAction #-} {-# NOINLINE happyTable #-} {-# NOINLINE happyCheck #-} {-# NOINLINE happyActOffsets #-} {-# NOINLINE happyGotoOffsets #-} {-# NOINLINE happyDefActions #-} {-# NOINLINE happyShift #-} {-# NOINLINE happySpecReduce_0 #-} {-# NOINLINE happySpecReduce_1 #-} {-# NOINLINE happySpecReduce_2 #-} {-# NOINLINE happySpecReduce_3 #-} {-# NOINLINE happyReduce #-} {-# NOINLINE happyMonadReduce #-} {-# NOINLINE happyGoto #-} {-# NOINLINE happyFail #-} -- end of Happy Template. happy-1.20.1.1/data/HappyTemplate-arrays-debug0000644000000000000000000002164307346545000017246 0ustar0000000000000000{-# LINE 1 "templates/GenericTemplate.hs" #-} -- $Id: GenericTemplate.hs,v 1.26 2005/01/14 14:47:22 simonmar Exp $ data Happy_IntList = HappyCons Prelude.Int Happy_IntList happyTrace string expr = Happy_System_IO_Unsafe.unsafePerformIO $ do Happy_System_IO.hPutStr Happy_System_IO.stderr string return expr infixr 9 `HappyStk` data HappyStk a = HappyStk a (HappyStk a) ----------------------------------------------------------------------------- -- starting the parse happyParse start_state = happyNewToken start_state notHappyAtAll notHappyAtAll ----------------------------------------------------------------------------- -- Accepting the parse -- If the current token is ERROR_TOK, it means we've just accepted a partial -- parse (a %partial parser). We must ignore the saved token on the top of -- the stack in this case. happyAccept (0) tk st sts (_ `HappyStk` ans `HappyStk` _) = happyReturn1 ans happyAccept j tk st sts (HappyStk ans _) = (happyReturn1 ans) ----------------------------------------------------------------------------- -- Arrays only: do the next action happyDoAction i tk st = (happyTrace ("state: " ++ show (st) ++ ",\ttoken: " ++ show (i) ++ ",\taction: ")) $ case action of (0) -> (happyTrace ("fail.\n")) $ happyFail (happyExpListPerState ((st) :: Prelude.Int)) i tk st (-1) -> (happyTrace ("accept.\n")) $ happyAccept i tk st n | (n Prelude.< ((0) :: Prelude.Int)) -> (happyTrace ("reduce (rule " ++ show rule ++ ")")) $ (happyReduceArr Happy_Data_Array.! rule) i tk st where rule = ((Prelude.negate ((n Prelude.+ ((1) :: Prelude.Int))))) n -> (happyTrace ("shift, enter state " ++ show (new_state) ++ "\n")) $ happyShift new_state i tk st where new_state = (n Prelude.- ((1) :: Prelude.Int)) where off = happyAdjustOffset (indexShortOffAddr happyActOffsets st) off_i = (off Prelude.+ i) check = if (off_i Prelude.>= ((0) :: Prelude.Int)) then (indexShortOffAddr happyCheck off_i Prelude.== i) else Prelude.False action | check = indexShortOffAddr happyTable off_i | Prelude.otherwise = indexShortOffAddr happyDefActions st indexShortOffAddr arr off = arr Happy_Data_Array.! off {-# INLINE happyLt #-} happyLt x y = (x Prelude.< y) readArrayBit arr bit = Bits.testBit (indexShortOffAddr arr (bit `Prelude.div` 16)) (bit `Prelude.mod` 16) ----------------------------------------------------------------------------- -- HappyState data type (not arrays) ----------------------------------------------------------------------------- -- Shifting a token happyShift new_state (0) tk st sts stk@(x `HappyStk` _) = let i = (case x of { HappyErrorToken (i) -> i }) in -- trace "shifting the error token" $ happyDoAction i tk new_state (HappyCons (st) (sts)) (stk) happyShift new_state i tk st sts stk = happyNewToken new_state (HappyCons (st) (sts)) ((HappyTerminal (tk))`HappyStk`stk) -- happyReduce is specialised for the common cases. 
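-- (For instance -- a sketch with invented rule and non-terminal numbers --
-- a three-symbol grammar rule such as
--
--   Exp : Exp '+' Term
--
-- is typically emitted as
--
--   happyReduce_7 = happySpecReduce_3 5 happyReduction_7
--
-- so rules of arity 0 to 3 take the fast paths below instead of the
-- generic happyDrop loop in happyReduce.)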
happySpecReduce_0 i fn (0) tk st sts stk = happyFail [] (0) tk st sts stk happySpecReduce_0 nt fn j tk st@((action)) sts stk = happyGoto nt j tk st (HappyCons (st) (sts)) (fn `HappyStk` stk) happySpecReduce_1 i fn (0) tk st sts stk = happyFail [] (0) tk st sts stk happySpecReduce_1 nt fn j tk _ sts@((HappyCons (st@(action)) (_))) (v1`HappyStk`stk') = let r = fn v1 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happySpecReduce_2 i fn (0) tk st sts stk = happyFail [] (0) tk st sts stk happySpecReduce_2 nt fn j tk _ (HappyCons (_) (sts@((HappyCons (st@(action)) (_))))) (v1`HappyStk`v2`HappyStk`stk') = let r = fn v1 v2 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happySpecReduce_3 i fn (0) tk st sts stk = happyFail [] (0) tk st sts stk happySpecReduce_3 nt fn j tk _ (HappyCons (_) ((HappyCons (_) (sts@((HappyCons (st@(action)) (_))))))) (v1`HappyStk`v2`HappyStk`v3`HappyStk`stk') = let r = fn v1 v2 v3 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happyReduce k i fn (0) tk st sts stk = happyFail [] (0) tk st sts stk happyReduce k nt fn j tk st sts stk = case happyDrop (k Prelude.- ((1) :: Prelude.Int)) sts of sts1@((HappyCons (st1@(action)) (_))) -> let r = fn stk in -- it doesn't hurt to always seq here... happyDoSeq r (happyGoto nt j tk st1 sts1 r) happyMonadReduce k nt fn (0) tk st sts stk = happyFail [] (0) tk st sts stk happyMonadReduce k nt fn j tk st sts stk = case happyDrop k (HappyCons (st) (sts)) of sts1@((HappyCons (st1@(action)) (_))) -> let drop_stk = happyDropStk k stk in happyThen1 (fn stk tk) (\r -> happyGoto nt j tk st1 sts1 (r `HappyStk` drop_stk)) happyMonad2Reduce k nt fn (0) tk st sts stk = happyFail [] (0) tk st sts stk happyMonad2Reduce k nt fn j tk st sts stk = case happyDrop k (HappyCons (st) (sts)) of sts1@((HappyCons (st1@(action)) (_))) -> let drop_stk = happyDropStk k stk off = happyAdjustOffset (indexShortOffAddr happyGotoOffsets st1) off_i = (off Prelude.+ nt) new_state = indexShortOffAddr happyTable off_i in happyThen1 (fn stk tk) (\r -> happyNewToken new_state sts1 (r `HappyStk` drop_stk)) happyDrop (0) l = l happyDrop n (HappyCons (_) (t)) = happyDrop (n Prelude.- ((1) :: Prelude.Int)) t happyDropStk (0) l = l happyDropStk n (x `HappyStk` xs) = happyDropStk (n Prelude.- ((1)::Prelude.Int)) xs ----------------------------------------------------------------------------- -- Moving to a new state after a reduction happyGoto nt j tk st = (happyTrace (", goto state " ++ show (new_state) ++ "\n")) $ happyDoAction j tk new_state where off = happyAdjustOffset (indexShortOffAddr happyGotoOffsets st) off_i = (off Prelude.+ nt) new_state = indexShortOffAddr happyTable off_i ----------------------------------------------------------------------------- -- Error recovery (ERROR_TOK is the error token) -- parse error if we are in recovery and we fail again happyFail explist (0) tk old_st _ stk@(x `HappyStk` _) = let i = (case x of { HappyErrorToken (i) -> i }) in -- trace "failing" $ happyError_ explist i tk {- We don't need state discarding for our restricted implementation of "error". In fact, it can cause some bogus parses, so I've disabled it for now --SDM -- discard a state happyFail ERROR_TOK tk old_st CONS(HAPPYSTATE(action),sts) (saved_tok `HappyStk` _ `HappyStk` stk) = -- trace ("discarding state, depth " ++ show (length stk)) $ DO_ACTION(action,ERROR_TOK,tk,sts,(saved_tok`HappyStk`stk)) -} -- Enter error recovery: generate an error token, -- save the old token and carry on. 
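-- (How recovery is reached from a grammar, as a sketch: a rule mentioning
-- Happy's special 'error' token, e.g.
--
--   stmt : exp ';'    { ... }
--        | error ';'  { ... }
--
-- gives the automaton actions on the error token, so the happyDoAction
-- call below can shift the freshly generated HappyErrorToken and
-- resynchronise. The rule shapes above are illustrative only.)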
happyFail explist i tk (action) sts stk = -- trace "entering error recovery" $ happyDoAction (0) tk action sts ((HappyErrorToken (i)) `HappyStk` stk) -- Internal happy errors: notHappyAtAll :: a notHappyAtAll = Prelude.error "Internal Happy error\n" ----------------------------------------------------------------------------- -- Hack to get the typechecker to accept our action functions ----------------------------------------------------------------------------- -- Seq-ing. If the --strict flag is given, then Happy emits -- happySeq = happyDoSeq -- otherwise it emits -- happySeq = happyDontSeq happyDoSeq, happyDontSeq :: a -> b -> b happyDoSeq a b = a `Prelude.seq` b happyDontSeq a b = b ----------------------------------------------------------------------------- -- Don't inline any functions from the template. GHC has a nasty habit -- of deciding to inline happyGoto everywhere, which increases the size of -- the generated parser quite a bit. {-# NOINLINE happyDoAction #-} {-# NOINLINE happyTable #-} {-# NOINLINE happyCheck #-} {-# NOINLINE happyActOffsets #-} {-# NOINLINE happyGotoOffsets #-} {-# NOINLINE happyDefActions #-} {-# NOINLINE happyShift #-} {-# NOINLINE happySpecReduce_0 #-} {-# NOINLINE happySpecReduce_1 #-} {-# NOINLINE happySpecReduce_2 #-} {-# NOINLINE happySpecReduce_3 #-} {-# NOINLINE happyReduce #-} {-# NOINLINE happyMonadReduce #-} {-# NOINLINE happyGoto #-} {-# NOINLINE happyFail #-} -- end of Happy Template. happy-1.20.1.1/data/HappyTemplate-arrays-ghc0000644000000000000000000002324307346545000016717 0ustar0000000000000000{-# LINE 1 "templates/GenericTemplate.hs" #-} -- $Id: GenericTemplate.hs,v 1.26 2005/01/14 14:47:22 simonmar Exp $ -- Do not remove this comment. Required to fix CPP parsing when using GCC and a clang-compiled alex. #if __GLASGOW_HASKELL__ > 706 #define LT(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.<# m)) :: Prelude.Bool) #define GTE(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.>=# m)) :: Prelude.Bool) #define EQ(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.==# m)) :: Prelude.Bool) #else #define LT(n,m) (n Happy_GHC_Exts.<# m) #define GTE(n,m) (n Happy_GHC_Exts.>=# m) #define EQ(n,m) (n Happy_GHC_Exts.==# m) #endif data Happy_IntList = HappyCons Happy_GHC_Exts.Int# Happy_IntList infixr 9 `HappyStk` data HappyStk a = HappyStk a (HappyStk a) ----------------------------------------------------------------------------- -- starting the parse happyParse start_state = happyNewToken start_state notHappyAtAll notHappyAtAll ----------------------------------------------------------------------------- -- Accepting the parse -- If the current token is ERROR_TOK, it means we've just accepted a partial -- parse (a %partial parser). We must ignore the saved token on the top of -- the stack in this case. happyAccept 0# tk st sts (_ `HappyStk` ans `HappyStk` _) = happyReturn1 ans happyAccept j tk st sts (HappyStk ans _) = (happyTcHack j (happyTcHack st)) (happyReturn1 ans) ----------------------------------------------------------------------------- -- Arrays only: do the next action happyDoAction i tk st = {- nothing -} case action of 0# -> {- nothing -} happyFail (happyExpListPerState ((Happy_GHC_Exts.I# (st)) :: Prelude.Int)) i tk st -1# -> {- nothing -} happyAccept i tk st n | LT(n,(0# :: Happy_GHC_Exts.Int#)) -> {- nothing -} (happyReduceArr Happy_Data_Array.! 
rule) i tk st where rule = (Happy_GHC_Exts.I# ((Happy_GHC_Exts.negateInt# ((n Happy_GHC_Exts.+# (1# :: Happy_GHC_Exts.Int#)))))) n -> {- nothing -} happyShift new_state i tk st where new_state = (n Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) where off = happyAdjustOffset (indexShortOffAddr happyActOffsets st) off_i = (off Happy_GHC_Exts.+# i) check = if GTE(off_i,(0# :: Happy_GHC_Exts.Int#)) then EQ(indexShortOffAddr happyCheck off_i, i) else Prelude.False action | check = indexShortOffAddr happyTable off_i | Prelude.otherwise = indexShortOffAddr happyDefActions st indexShortOffAddr (HappyA# arr) off = Happy_GHC_Exts.narrow16Int# i where i = Happy_GHC_Exts.word2Int# (Happy_GHC_Exts.or# (Happy_GHC_Exts.uncheckedShiftL# high 8#) low) high = Happy_GHC_Exts.int2Word# (Happy_GHC_Exts.ord# (Happy_GHC_Exts.indexCharOffAddr# arr (off' Happy_GHC_Exts.+# 1#))) low = Happy_GHC_Exts.int2Word# (Happy_GHC_Exts.ord# (Happy_GHC_Exts.indexCharOffAddr# arr off')) off' = off Happy_GHC_Exts.*# 2# {-# INLINE happyLt #-} happyLt x y = LT(x,y) readArrayBit arr bit = Bits.testBit (Happy_GHC_Exts.I# (indexShortOffAddr arr ((unbox_int bit) `Happy_GHC_Exts.iShiftRA#` 4#))) (bit `Prelude.mod` 16) where unbox_int (Happy_GHC_Exts.I# x) = x data HappyAddr = HappyA# Happy_GHC_Exts.Addr# ----------------------------------------------------------------------------- -- HappyState data type (not arrays) ----------------------------------------------------------------------------- -- Shifting a token happyShift new_state 0# tk st sts stk@(x `HappyStk` _) = let i = (case x of { HappyErrorToken (Happy_GHC_Exts.I# (i)) -> i }) in -- trace "shifting the error token" $ happyDoAction i tk new_state (HappyCons (st) (sts)) (stk) happyShift new_state i tk st sts stk = happyNewToken new_state (HappyCons (st) (sts)) ((HappyTerminal (tk))`HappyStk`stk) -- happyReduce is specialised for the common cases. happySpecReduce_0 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_0 nt fn j tk st@((action)) sts stk = happyGoto nt j tk st (HappyCons (st) (sts)) (fn `HappyStk` stk) happySpecReduce_1 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_1 nt fn j tk _ sts@((HappyCons (st@(action)) (_))) (v1`HappyStk`stk') = let r = fn v1 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happySpecReduce_2 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_2 nt fn j tk _ (HappyCons (_) (sts@((HappyCons (st@(action)) (_))))) (v1`HappyStk`v2`HappyStk`stk') = let r = fn v1 v2 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happySpecReduce_3 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_3 nt fn j tk _ (HappyCons (_) ((HappyCons (_) (sts@((HappyCons (st@(action)) (_))))))) (v1`HappyStk`v2`HappyStk`v3`HappyStk`stk') = let r = fn v1 v2 v3 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happyReduce k i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happyReduce k nt fn j tk st sts stk = case happyDrop (k Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) sts of sts1@((HappyCons (st1@(action)) (_))) -> let r = fn stk in -- it doesn't hurt to always seq here... 
happyDoSeq r (happyGoto nt j tk st1 sts1 r) happyMonadReduce k nt fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happyMonadReduce k nt fn j tk st sts stk = case happyDrop k (HappyCons (st) (sts)) of sts1@((HappyCons (st1@(action)) (_))) -> let drop_stk = happyDropStk k stk in happyThen1 (fn stk tk) (\r -> happyGoto nt j tk st1 sts1 (r `HappyStk` drop_stk)) happyMonad2Reduce k nt fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happyMonad2Reduce k nt fn j tk st sts stk = case happyDrop k (HappyCons (st) (sts)) of sts1@((HappyCons (st1@(action)) (_))) -> let drop_stk = happyDropStk k stk off = happyAdjustOffset (indexShortOffAddr happyGotoOffsets st1) off_i = (off Happy_GHC_Exts.+# nt) new_state = indexShortOffAddr happyTable off_i in happyThen1 (fn stk tk) (\r -> happyNewToken new_state sts1 (r `HappyStk` drop_stk)) happyDrop 0# l = l happyDrop n (HappyCons (_) (t)) = happyDrop (n Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) t happyDropStk 0# l = l happyDropStk n (x `HappyStk` xs) = happyDropStk (n Happy_GHC_Exts.-# (1#::Happy_GHC_Exts.Int#)) xs ----------------------------------------------------------------------------- -- Moving to a new state after a reduction happyGoto nt j tk st = {- nothing -} happyDoAction j tk new_state where off = happyAdjustOffset (indexShortOffAddr happyGotoOffsets st) off_i = (off Happy_GHC_Exts.+# nt) new_state = indexShortOffAddr happyTable off_i ----------------------------------------------------------------------------- -- Error recovery (ERROR_TOK is the error token) -- parse error if we are in recovery and we fail again happyFail explist 0# tk old_st _ stk@(x `HappyStk` _) = let i = (case x of { HappyErrorToken (Happy_GHC_Exts.I# (i)) -> i }) in -- trace "failing" $ happyError_ explist i tk {- We don't need state discarding for our restricted implementation of "error". In fact, it can cause some bogus parses, so I've disabled it for now --SDM -- discard a state happyFail ERROR_TOK tk old_st CONS(HAPPYSTATE(action),sts) (saved_tok `HappyStk` _ `HappyStk` stk) = -- trace ("discarding state, depth " ++ show (length stk)) $ DO_ACTION(action,ERROR_TOK,tk,sts,(saved_tok`HappyStk`stk)) -} -- Enter error recovery: generate an error token, -- save the old token and carry on. happyFail explist i tk (action) sts stk = -- trace "entering error recovery" $ happyDoAction 0# tk action sts ((HappyErrorToken (Happy_GHC_Exts.I# (i))) `HappyStk` stk) -- Internal happy errors: notHappyAtAll :: a notHappyAtAll = Prelude.error "Internal Happy error\n" ----------------------------------------------------------------------------- -- Hack to get the typechecker to accept our action functions happyTcHack :: Happy_GHC_Exts.Int# -> a -> a happyTcHack x y = y {-# INLINE happyTcHack #-} ----------------------------------------------------------------------------- -- Seq-ing. If the --strict flag is given, then Happy emits -- happySeq = happyDoSeq -- otherwise it emits -- happySeq = happyDontSeq happyDoSeq, happyDontSeq :: a -> b -> b happyDoSeq a b = a `Prelude.seq` b happyDontSeq a b = b ----------------------------------------------------------------------------- -- Don't inline any functions from the template. GHC has a nasty habit -- of deciding to inline happyGoto everywhere, which increases the size of -- the generated parser quite a bit. 
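-- A boxed reference model of indexShortOffAddr above, for readers of the
-- primop version. _happyIndexShortSketch is a hypothetical name used only
-- for exposition and is not referenced by generated parsers; it assumes
-- the table is given as a list of byte values. Each entry is 16 bits,
-- little-endian, sign-extended to Int exactly as narrow16Int# does.
_happyIndexShortSketch :: [Prelude.Int] -> Prelude.Int -> Prelude.Int
_happyIndexShortSketch bytes off
  | v Prelude.>= 32768  = v Prelude.- 65536   -- sign-extend bit 15
  | Prelude.otherwise   = v
  where
    v    = low Prelude.+ 256 Prelude.* high   -- combine the two bytes
    low  = bytes Prelude.!! (2 Prelude.* off)
    high = bytes Prelude.!! (2 Prelude.* off Prelude.+ 1)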
{-# NOINLINE happyDoAction #-} {-# NOINLINE happyTable #-} {-# NOINLINE happyCheck #-} {-# NOINLINE happyActOffsets #-} {-# NOINLINE happyGotoOffsets #-} {-# NOINLINE happyDefActions #-} {-# NOINLINE happyShift #-} {-# NOINLINE happySpecReduce_0 #-} {-# NOINLINE happySpecReduce_1 #-} {-# NOINLINE happySpecReduce_2 #-} {-# NOINLINE happySpecReduce_3 #-} {-# NOINLINE happyReduce #-} {-# NOINLINE happyMonadReduce #-} {-# NOINLINE happyGoto #-} {-# NOINLINE happyFail #-} -- end of Happy Template. happy-1.20.1.1/data/HappyTemplate-arrays-ghc-debug0000644000000000000000000002447307346545000020011 0ustar0000000000000000{-# LINE 1 "templates/GenericTemplate.hs" #-} -- $Id: GenericTemplate.hs,v 1.26 2005/01/14 14:47:22 simonmar Exp $ -- Do not remove this comment. Required to fix CPP parsing when using GCC and a clang-compiled alex. #if __GLASGOW_HASKELL__ > 706 #define LT(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.<# m)) :: Prelude.Bool) #define GTE(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.>=# m)) :: Prelude.Bool) #define EQ(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.==# m)) :: Prelude.Bool) #else #define LT(n,m) (n Happy_GHC_Exts.<# m) #define GTE(n,m) (n Happy_GHC_Exts.>=# m) #define EQ(n,m) (n Happy_GHC_Exts.==# m) #endif data Happy_IntList = HappyCons Happy_GHC_Exts.Int# Happy_IntList happyTrace string expr = Happy_System_IO_Unsafe.unsafePerformIO $ do Happy_System_IO.hPutStr Happy_System_IO.stderr string return expr infixr 9 `HappyStk` data HappyStk a = HappyStk a (HappyStk a) ----------------------------------------------------------------------------- -- starting the parse happyParse start_state = happyNewToken start_state notHappyAtAll notHappyAtAll ----------------------------------------------------------------------------- -- Accepting the parse -- If the current token is ERROR_TOK, it means we've just accepted a partial -- parse (a %partial parser). We must ignore the saved token on the top of -- the stack in this case. happyAccept 0# tk st sts (_ `HappyStk` ans `HappyStk` _) = happyReturn1 ans happyAccept j tk st sts (HappyStk ans _) = (happyTcHack j (happyTcHack st)) (happyReturn1 ans) ----------------------------------------------------------------------------- -- Arrays only: do the next action happyDoAction i tk st = (happyTrace ("state: " ++ show (Happy_GHC_Exts.I# (st)) ++ ",\ttoken: " ++ show (Happy_GHC_Exts.I# (i)) ++ ",\taction: ")) $ case action of 0# -> (happyTrace ("fail.\n")) $ happyFail (happyExpListPerState ((Happy_GHC_Exts.I# (st)) :: Prelude.Int)) i tk st -1# -> (happyTrace ("accept.\n")) $ happyAccept i tk st n | LT(n,(0# :: Happy_GHC_Exts.Int#)) -> (happyTrace ("reduce (rule " ++ show rule ++ ")")) $ (happyReduceArr Happy_Data_Array.! 
rule) i tk st where rule = (Happy_GHC_Exts.I# ((Happy_GHC_Exts.negateInt# ((n Happy_GHC_Exts.+# (1# :: Happy_GHC_Exts.Int#)))))) n -> (happyTrace ("shift, enter state " ++ show (Happy_GHC_Exts.I# (new_state)) ++ "\n")) $ happyShift new_state i tk st where new_state = (n Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) where off = happyAdjustOffset (indexShortOffAddr happyActOffsets st) off_i = (off Happy_GHC_Exts.+# i) check = if GTE(off_i,(0# :: Happy_GHC_Exts.Int#)) then EQ(indexShortOffAddr happyCheck off_i, i) else Prelude.False action | check = indexShortOffAddr happyTable off_i | Prelude.otherwise = indexShortOffAddr happyDefActions st indexShortOffAddr (HappyA# arr) off = Happy_GHC_Exts.narrow16Int# i where i = Happy_GHC_Exts.word2Int# (Happy_GHC_Exts.or# (Happy_GHC_Exts.uncheckedShiftL# high 8#) low) high = Happy_GHC_Exts.int2Word# (Happy_GHC_Exts.ord# (Happy_GHC_Exts.indexCharOffAddr# arr (off' Happy_GHC_Exts.+# 1#))) low = Happy_GHC_Exts.int2Word# (Happy_GHC_Exts.ord# (Happy_GHC_Exts.indexCharOffAddr# arr off')) off' = off Happy_GHC_Exts.*# 2# {-# INLINE happyLt #-} happyLt x y = LT(x,y) readArrayBit arr bit = Bits.testBit (Happy_GHC_Exts.I# (indexShortOffAddr arr ((unbox_int bit) `Happy_GHC_Exts.iShiftRA#` 4#))) (bit `Prelude.mod` 16) where unbox_int (Happy_GHC_Exts.I# x) = x data HappyAddr = HappyA# Happy_GHC_Exts.Addr# ----------------------------------------------------------------------------- -- HappyState data type (not arrays) ----------------------------------------------------------------------------- -- Shifting a token happyShift new_state 0# tk st sts stk@(x `HappyStk` _) = let i = (case x of { HappyErrorToken (Happy_GHC_Exts.I# (i)) -> i }) in -- trace "shifting the error token" $ happyDoAction i tk new_state (HappyCons (st) (sts)) (stk) happyShift new_state i tk st sts stk = happyNewToken new_state (HappyCons (st) (sts)) ((HappyTerminal (tk))`HappyStk`stk) -- happyReduce is specialised for the common cases. happySpecReduce_0 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_0 nt fn j tk st@((action)) sts stk = happyGoto nt j tk st (HappyCons (st) (sts)) (fn `HappyStk` stk) happySpecReduce_1 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_1 nt fn j tk _ sts@((HappyCons (st@(action)) (_))) (v1`HappyStk`stk') = let r = fn v1 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happySpecReduce_2 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_2 nt fn j tk _ (HappyCons (_) (sts@((HappyCons (st@(action)) (_))))) (v1`HappyStk`v2`HappyStk`stk') = let r = fn v1 v2 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happySpecReduce_3 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_3 nt fn j tk _ (HappyCons (_) ((HappyCons (_) (sts@((HappyCons (st@(action)) (_))))))) (v1`HappyStk`v2`HappyStk`v3`HappyStk`stk') = let r = fn v1 v2 v3 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happyReduce k i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happyReduce k nt fn j tk st sts stk = case happyDrop (k Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) sts of sts1@((HappyCons (st1@(action)) (_))) -> let r = fn stk in -- it doesn't hurt to always seq here... 
happyDoSeq r (happyGoto nt j tk st1 sts1 r) happyMonadReduce k nt fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happyMonadReduce k nt fn j tk st sts stk = case happyDrop k (HappyCons (st) (sts)) of sts1@((HappyCons (st1@(action)) (_))) -> let drop_stk = happyDropStk k stk in happyThen1 (fn stk tk) (\r -> happyGoto nt j tk st1 sts1 (r `HappyStk` drop_stk)) happyMonad2Reduce k nt fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happyMonad2Reduce k nt fn j tk st sts stk = case happyDrop k (HappyCons (st) (sts)) of sts1@((HappyCons (st1@(action)) (_))) -> let drop_stk = happyDropStk k stk off = happyAdjustOffset (indexShortOffAddr happyGotoOffsets st1) off_i = (off Happy_GHC_Exts.+# nt) new_state = indexShortOffAddr happyTable off_i in happyThen1 (fn stk tk) (\r -> happyNewToken new_state sts1 (r `HappyStk` drop_stk)) happyDrop 0# l = l happyDrop n (HappyCons (_) (t)) = happyDrop (n Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) t happyDropStk 0# l = l happyDropStk n (x `HappyStk` xs) = happyDropStk (n Happy_GHC_Exts.-# (1#::Happy_GHC_Exts.Int#)) xs ----------------------------------------------------------------------------- -- Moving to a new state after a reduction happyGoto nt j tk st = (happyTrace (", goto state " ++ show (Happy_GHC_Exts.I# (new_state)) ++ "\n")) $ happyDoAction j tk new_state where off = happyAdjustOffset (indexShortOffAddr happyGotoOffsets st) off_i = (off Happy_GHC_Exts.+# nt) new_state = indexShortOffAddr happyTable off_i ----------------------------------------------------------------------------- -- Error recovery (ERROR_TOK is the error token) -- parse error if we are in recovery and we fail again happyFail explist 0# tk old_st _ stk@(x `HappyStk` _) = let i = (case x of { HappyErrorToken (Happy_GHC_Exts.I# (i)) -> i }) in -- trace "failing" $ happyError_ explist i tk {- We don't need state discarding for our restricted implementation of "error". In fact, it can cause some bogus parses, so I've disabled it for now --SDM -- discard a state happyFail ERROR_TOK tk old_st CONS(HAPPYSTATE(action),sts) (saved_tok `HappyStk` _ `HappyStk` stk) = -- trace ("discarding state, depth " ++ show (length stk)) $ DO_ACTION(action,ERROR_TOK,tk,sts,(saved_tok`HappyStk`stk)) -} -- Enter error recovery: generate an error token, -- save the old token and carry on. happyFail explist i tk (action) sts stk = -- trace "entering error recovery" $ happyDoAction 0# tk action sts ((HappyErrorToken (Happy_GHC_Exts.I# (i))) `HappyStk` stk) -- Internal happy errors: notHappyAtAll :: a notHappyAtAll = Prelude.error "Internal Happy error\n" ----------------------------------------------------------------------------- -- Hack to get the typechecker to accept our action functions happyTcHack :: Happy_GHC_Exts.Int# -> a -> a happyTcHack x y = y {-# INLINE happyTcHack #-} ----------------------------------------------------------------------------- -- Seq-ing. If the --strict flag is given, then Happy emits -- happySeq = happyDoSeq -- otherwise it emits -- happySeq = happyDontSeq happyDoSeq, happyDontSeq :: a -> b -> b happyDoSeq a b = a `Prelude.seq` b happyDontSeq a b = b ----------------------------------------------------------------------------- -- Don't inline any functions from the template. GHC has a nasty habit -- of deciding to inline happyGoto everywhere, which increases the size of -- the generated parser quite a bit. 
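-- A boxed sketch of the action encoding that happyDoAction above
-- interprets; these names are hypothetical and exist only for exposition,
-- and generated parsers do not reference them. Entry 0 means fail, -1
-- means accept, any other negative entry encodes a reduction, and a
-- positive entry a shift.
data HappyActionSketch
  = HappySkFail
  | HappySkAccept
  | HappySkReduce Prelude.Int   -- rule number
  | HappySkShift  Prelude.Int   -- target state
_happyDecodeActionSketch :: Prelude.Int -> HappyActionSketch
_happyDecodeActionSketch 0    = HappySkFail
_happyDecodeActionSketch (-1) = HappySkAccept
_happyDecodeActionSketch n
  | n Prelude.< 0     = HappySkReduce (Prelude.negate (n Prelude.+ 1))
  | Prelude.otherwise = HappySkShift  (n Prelude.- 1)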
{-# NOINLINE happyDoAction #-} {-# NOINLINE happyTable #-} {-# NOINLINE happyCheck #-} {-# NOINLINE happyActOffsets #-} {-# NOINLINE happyGotoOffsets #-} {-# NOINLINE happyDefActions #-} {-# NOINLINE happyShift #-} {-# NOINLINE happySpecReduce_0 #-} {-# NOINLINE happySpecReduce_1 #-} {-# NOINLINE happySpecReduce_2 #-} {-# NOINLINE happySpecReduce_3 #-} {-# NOINLINE happyReduce #-} {-# NOINLINE happyMonadReduce #-} {-# NOINLINE happyGoto #-} {-# NOINLINE happyFail #-} -- end of Happy Template. happy-1.20.1.1/data/HappyTemplate-coerce0000644000000000000000000002042207346545000016113 0ustar0000000000000000{-# LINE 1 "templates/GenericTemplate.hs" #-} -- $Id: GenericTemplate.hs,v 1.26 2005/01/14 14:47:22 simonmar Exp $ -- Do not remove this comment. Required to fix CPP parsing when using GCC and a clang-compiled alex. #if __GLASGOW_HASKELL__ > 706 #define LT(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.<# m)) :: Prelude.Bool) #define GTE(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.>=# m)) :: Prelude.Bool) #define EQ(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.==# m)) :: Prelude.Bool) #else #define LT(n,m) (n Happy_GHC_Exts.<# m) #define GTE(n,m) (n Happy_GHC_Exts.>=# m) #define EQ(n,m) (n Happy_GHC_Exts.==# m) #endif data Happy_IntList = HappyCons Happy_GHC_Exts.Int# Happy_IntList infixr 9 `HappyStk` data HappyStk a = HappyStk a (HappyStk a) ----------------------------------------------------------------------------- -- starting the parse happyParse start_state = happyNewToken start_state notHappyAtAll notHappyAtAll ----------------------------------------------------------------------------- -- Accepting the parse -- If the current token is ERROR_TOK, it means we've just accepted a partial -- parse (a %partial parser). We must ignore the saved token on the top of -- the stack in this case. 
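-- (Background sketch: a grammar declares such an entry point with, e.g.,
--
--   %partial parseHeader
--
-- and the generated parseHeader returns its result as soon as the start
-- production is complete, without demanding end-of-input; the first
-- happyAccept equation below is the one that discards the saved lookahead
-- in that case.)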
happyAccept 1# tk st sts (_ `HappyStk` ans `HappyStk` _) = happyReturn1 ans happyAccept j tk st sts (HappyStk ans _) = (happyTcHack j ) (happyReturn1 ans) ----------------------------------------------------------------------------- -- Arrays only: do the next action indexShortOffAddr (HappyA# arr) off = Happy_GHC_Exts.narrow16Int# i where i = Happy_GHC_Exts.word2Int# (Happy_GHC_Exts.or# (Happy_GHC_Exts.uncheckedShiftL# high 8#) low) high = Happy_GHC_Exts.int2Word# (Happy_GHC_Exts.ord# (Happy_GHC_Exts.indexCharOffAddr# arr (off' Happy_GHC_Exts.+# 1#))) low = Happy_GHC_Exts.int2Word# (Happy_GHC_Exts.ord# (Happy_GHC_Exts.indexCharOffAddr# arr off')) off' = off Happy_GHC_Exts.*# 2# {-# INLINE happyLt #-} happyLt x y = LT(x,y) readArrayBit arr bit = Bits.testBit (Happy_GHC_Exts.I# (indexShortOffAddr arr ((unbox_int bit) `Happy_GHC_Exts.iShiftRA#` 4#))) (bit `Prelude.mod` 16) where unbox_int (Happy_GHC_Exts.I# x) = x data HappyAddr = HappyA# Happy_GHC_Exts.Addr# ----------------------------------------------------------------------------- -- HappyState data type (not arrays) newtype HappyState b c = HappyState (Happy_GHC_Exts.Int# -> -- token number Happy_GHC_Exts.Int# -> -- token number (yes, again) b -> -- token semantic value HappyState b c -> -- current state [HappyState b c] -> -- state stack c) ----------------------------------------------------------------------------- -- Shifting a token happyShift new_state 1# tk st sts stk@(x `HappyStk` _) = let i = (case Happy_GHC_Exts.unsafeCoerce# x of { (Happy_GHC_Exts.I# (i)) -> i }) in -- trace "shifting the error token" $ new_state i i tk (HappyState (new_state)) ((st):(sts)) (stk) happyShift new_state i tk st sts stk = happyNewToken new_state ((st):(sts)) ((happyInTok (tk))`HappyStk`stk) -- happyReduce is specialised for the common cases. happySpecReduce_0 i fn 1# tk st sts stk = happyFail [] 1# tk st sts stk happySpecReduce_0 nt fn j tk st@((HappyState (action))) sts stk = action nt j tk st ((st):(sts)) (fn `HappyStk` stk) happySpecReduce_1 i fn 1# tk st sts stk = happyFail [] 1# tk st sts stk happySpecReduce_1 nt fn j tk _ sts@(((st@(HappyState (action))):(_))) (v1`HappyStk`stk') = let r = fn v1 in happySeq r (action nt j tk st sts (r `HappyStk` stk')) happySpecReduce_2 i fn 1# tk st sts stk = happyFail [] 1# tk st sts stk happySpecReduce_2 nt fn j tk _ ((_):(sts@(((st@(HappyState (action))):(_))))) (v1`HappyStk`v2`HappyStk`stk') = let r = fn v1 v2 in happySeq r (action nt j tk st sts (r `HappyStk` stk')) happySpecReduce_3 i fn 1# tk st sts stk = happyFail [] 1# tk st sts stk happySpecReduce_3 nt fn j tk _ ((_):(((_):(sts@(((st@(HappyState (action))):(_))))))) (v1`HappyStk`v2`HappyStk`v3`HappyStk`stk') = let r = fn v1 v2 v3 in happySeq r (action nt j tk st sts (r `HappyStk` stk')) happyReduce k i fn 1# tk st sts stk = happyFail [] 1# tk st sts stk happyReduce k nt fn j tk st sts stk = case happyDrop (k Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) sts of sts1@(((st1@(HappyState (action))):(_))) -> let r = fn stk in -- it doesn't hurt to always seq here... 
happyDoSeq r (action nt j tk st1 sts1 r) happyMonadReduce k nt fn 1# tk st sts stk = happyFail [] 1# tk st sts stk happyMonadReduce k nt fn j tk st sts stk = case happyDrop k ((st):(sts)) of sts1@(((st1@(HappyState (action))):(_))) -> let drop_stk = happyDropStk k stk in happyThen1 (fn stk tk) (\r -> action nt j tk st1 sts1 (r `HappyStk` drop_stk)) happyMonad2Reduce k nt fn 1# tk st sts stk = happyFail [] 1# tk st sts stk happyMonad2Reduce k nt fn j tk st sts stk = case happyDrop k ((st):(sts)) of sts1@(((st1@(HappyState (action))):(_))) -> let drop_stk = happyDropStk k stk _ = nt :: Happy_GHC_Exts.Int# new_state = action in happyThen1 (fn stk tk) (\r -> happyNewToken new_state sts1 (r `HappyStk` drop_stk)) happyDrop 0# l = l happyDrop n ((_):(t)) = happyDrop (n Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) t happyDropStk 0# l = l happyDropStk n (x `HappyStk` xs) = happyDropStk (n Happy_GHC_Exts.-# (1#::Happy_GHC_Exts.Int#)) xs ----------------------------------------------------------------------------- -- Moving to a new state after a reduction happyGoto action j tk st = action j j tk (HappyState action) ----------------------------------------------------------------------------- -- Error recovery (ERROR_TOK is the error token) -- parse error if we are in recovery and we fail again happyFail explist 1# tk old_st _ stk@(x `HappyStk` _) = let i = (case Happy_GHC_Exts.unsafeCoerce# x of { (Happy_GHC_Exts.I# (i)) -> i }) in -- trace "failing" $ happyError_ explist i tk {- We don't need state discarding for our restricted implementation of "error". In fact, it can cause some bogus parses, so I've disabled it for now --SDM -- discard a state happyFail ERROR_TOK tk old_st CONS(HAPPYSTATE(action),sts) (saved_tok `HappyStk` _ `HappyStk` stk) = -- trace ("discarding state, depth " ++ show (length stk)) $ DO_ACTION(action,ERROR_TOK,tk,sts,(saved_tok`HappyStk`stk)) -} -- Enter error recovery: generate an error token, -- save the old token and carry on. happyFail explist i tk (HappyState (action)) sts stk = -- trace "entering error recovery" $ action 1# 1# tk (HappyState (action)) sts ((Happy_GHC_Exts.unsafeCoerce# (Happy_GHC_Exts.I# (i))) `HappyStk` stk) -- Internal happy errors: notHappyAtAll :: a notHappyAtAll = Prelude.error "Internal Happy error\n" ----------------------------------------------------------------------------- -- Hack to get the typechecker to accept our action functions happyTcHack :: Happy_GHC_Exts.Int# -> a -> a happyTcHack x y = y {-# INLINE happyTcHack #-} ----------------------------------------------------------------------------- -- Seq-ing. If the --strict flag is given, then Happy emits -- happySeq = happyDoSeq -- otherwise it emits -- happySeq = happyDontSeq happyDoSeq, happyDontSeq :: a -> b -> b happyDoSeq a b = a `Prelude.seq` b happyDontSeq a b = b ----------------------------------------------------------------------------- -- Don't inline any functions from the template. GHC has a nasty habit -- of deciding to inline happyGoto everywhere, which increases the size of -- the generated parser quite a bit. {-# NOINLINE happyShift #-} {-# NOINLINE happySpecReduce_0 #-} {-# NOINLINE happySpecReduce_1 #-} {-# NOINLINE happySpecReduce_2 #-} {-# NOINLINE happySpecReduce_3 #-} {-# NOINLINE happyReduce #-} {-# NOINLINE happyMonadReduce #-} {-# NOINLINE happyGoto #-} {-# NOINLINE happyFail #-} -- end of Happy Template. 
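-- Postscript for this non-array template (an editorial sketch): here a
-- parser state *is* its own action function, wrapped in the HappyState
-- newtype above, so happyGoto needs no goto-table lookup at all -- it
-- simply applies the target state's function, as in
--
--   happyGoto action j tk st = action j j tk (HappyState action)
--
-- which is exactly the definition given earlier in this file.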
happy-1.20.1.1/data/HappyTemplate-ghc0000644000000000000000000002035607346545000015422 0ustar0000000000000000{-# LINE 1 "templates/GenericTemplate.hs" #-} -- $Id: GenericTemplate.hs,v 1.26 2005/01/14 14:47:22 simonmar Exp $ -- Do not remove this comment. Required to fix CPP parsing when using GCC and a clang-compiled alex. #if __GLASGOW_HASKELL__ > 706 #define LT(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.<# m)) :: Prelude.Bool) #define GTE(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.>=# m)) :: Prelude.Bool) #define EQ(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.==# m)) :: Prelude.Bool) #else #define LT(n,m) (n Happy_GHC_Exts.<# m) #define GTE(n,m) (n Happy_GHC_Exts.>=# m) #define EQ(n,m) (n Happy_GHC_Exts.==# m) #endif data Happy_IntList = HappyCons Happy_GHC_Exts.Int# Happy_IntList infixr 9 `HappyStk` data HappyStk a = HappyStk a (HappyStk a) ----------------------------------------------------------------------------- -- starting the parse happyParse start_state = happyNewToken start_state notHappyAtAll notHappyAtAll ----------------------------------------------------------------------------- -- Accepting the parse -- If the current token is ERROR_TOK, it means we've just accepted a partial -- parse (a %partial parser). We must ignore the saved token on the top of -- the stack in this case. happyAccept 1# tk st sts (_ `HappyStk` ans `HappyStk` _) = happyReturn1 ans happyAccept j tk st sts (HappyStk ans _) = (happyTcHack j ) (happyReturn1 ans) ----------------------------------------------------------------------------- -- Arrays only: do the next action indexShortOffAddr (HappyA# arr) off = Happy_GHC_Exts.narrow16Int# i where i = Happy_GHC_Exts.word2Int# (Happy_GHC_Exts.or# (Happy_GHC_Exts.uncheckedShiftL# high 8#) low) high = Happy_GHC_Exts.int2Word# (Happy_GHC_Exts.ord# (Happy_GHC_Exts.indexCharOffAddr# arr (off' Happy_GHC_Exts.+# 1#))) low = Happy_GHC_Exts.int2Word# (Happy_GHC_Exts.ord# (Happy_GHC_Exts.indexCharOffAddr# arr off')) off' = off Happy_GHC_Exts.*# 2# {-# INLINE happyLt #-} happyLt x y = LT(x,y) readArrayBit arr bit = Bits.testBit (Happy_GHC_Exts.I# (indexShortOffAddr arr ((unbox_int bit) `Happy_GHC_Exts.iShiftRA#` 4#))) (bit `Prelude.mod` 16) where unbox_int (Happy_GHC_Exts.I# x) = x data HappyAddr = HappyA# Happy_GHC_Exts.Addr# ----------------------------------------------------------------------------- -- HappyState data type (not arrays) newtype HappyState b c = HappyState (Happy_GHC_Exts.Int# -> -- token number Happy_GHC_Exts.Int# -> -- token number (yes, again) b -> -- token semantic value HappyState b c -> -- current state [HappyState b c] -> -- state stack c) ----------------------------------------------------------------------------- -- Shifting a token happyShift new_state 1# tk st sts stk@(x `HappyStk` _) = let i = (case x of { HappyErrorToken (Happy_GHC_Exts.I# (i)) -> i }) in -- trace "shifting the error token" $ new_state i i tk (HappyState (new_state)) ((st):(sts)) (stk) happyShift new_state i tk st sts stk = happyNewToken new_state ((st):(sts)) ((HappyTerminal (tk))`HappyStk`stk) -- happyReduce is specialised for the common cases. 
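-- (Note for this non-array template: the specialisations below finish a
-- reduction by applying the state's own action function directly, as in
--
--   action nt j tk st ((st):(sts)) (fn `HappyStk` stk)
--
-- where the array templates instead call happyGoto to consult the goto
-- table.)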
happySpecReduce_0 i fn 1# tk st sts stk = happyFail [] 1# tk st sts stk happySpecReduce_0 nt fn j tk st@((HappyState (action))) sts stk = action nt j tk st ((st):(sts)) (fn `HappyStk` stk) happySpecReduce_1 i fn 1# tk st sts stk = happyFail [] 1# tk st sts stk happySpecReduce_1 nt fn j tk _ sts@(((st@(HappyState (action))):(_))) (v1`HappyStk`stk') = let r = fn v1 in happySeq r (action nt j tk st sts (r `HappyStk` stk')) happySpecReduce_2 i fn 1# tk st sts stk = happyFail [] 1# tk st sts stk happySpecReduce_2 nt fn j tk _ ((_):(sts@(((st@(HappyState (action))):(_))))) (v1`HappyStk`v2`HappyStk`stk') = let r = fn v1 v2 in happySeq r (action nt j tk st sts (r `HappyStk` stk')) happySpecReduce_3 i fn 1# tk st sts stk = happyFail [] 1# tk st sts stk happySpecReduce_3 nt fn j tk _ ((_):(((_):(sts@(((st@(HappyState (action))):(_))))))) (v1`HappyStk`v2`HappyStk`v3`HappyStk`stk') = let r = fn v1 v2 v3 in happySeq r (action nt j tk st sts (r `HappyStk` stk')) happyReduce k i fn 1# tk st sts stk = happyFail [] 1# tk st sts stk happyReduce k nt fn j tk st sts stk = case happyDrop (k Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) sts of sts1@(((st1@(HappyState (action))):(_))) -> let r = fn stk in -- it doesn't hurt to always seq here... happyDoSeq r (action nt j tk st1 sts1 r) happyMonadReduce k nt fn 1# tk st sts stk = happyFail [] 1# tk st sts stk happyMonadReduce k nt fn j tk st sts stk = case happyDrop k ((st):(sts)) of sts1@(((st1@(HappyState (action))):(_))) -> let drop_stk = happyDropStk k stk in happyThen1 (fn stk tk) (\r -> action nt j tk st1 sts1 (r `HappyStk` drop_stk)) happyMonad2Reduce k nt fn 1# tk st sts stk = happyFail [] 1# tk st sts stk happyMonad2Reduce k nt fn j tk st sts stk = case happyDrop k ((st):(sts)) of sts1@(((st1@(HappyState (action))):(_))) -> let drop_stk = happyDropStk k stk _ = nt :: Happy_GHC_Exts.Int# new_state = action in happyThen1 (fn stk tk) (\r -> happyNewToken new_state sts1 (r `HappyStk` drop_stk)) happyDrop 0# l = l happyDrop n ((_):(t)) = happyDrop (n Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) t happyDropStk 0# l = l happyDropStk n (x `HappyStk` xs) = happyDropStk (n Happy_GHC_Exts.-# (1#::Happy_GHC_Exts.Int#)) xs ----------------------------------------------------------------------------- -- Moving to a new state after a reduction happyGoto action j tk st = action j j tk (HappyState action) ----------------------------------------------------------------------------- -- Error recovery (ERROR_TOK is the error token) -- parse error if we are in recovery and we fail again happyFail explist 1# tk old_st _ stk@(x `HappyStk` _) = let i = (case x of { HappyErrorToken (Happy_GHC_Exts.I# (i)) -> i }) in -- trace "failing" $ happyError_ explist i tk {- We don't need state discarding for our restricted implementation of "error". In fact, it can cause some bogus parses, so I've disabled it for now --SDM -- discard a state happyFail ERROR_TOK tk old_st CONS(HAPPYSTATE(action),sts) (saved_tok `HappyStk` _ `HappyStk` stk) = -- trace ("discarding state, depth " ++ show (length stk)) $ DO_ACTION(action,ERROR_TOK,tk,sts,(saved_tok`HappyStk`stk)) -} -- Enter error recovery: generate an error token, -- save the old token and carry on. 
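-- (The explist argument threaded through happyFail is the list of tokens
-- the failing state could have accepted; happyError_ hands it on so that
-- a grammar requesting expected-token information in its error handler --
-- via the %errorhandlertype directive -- can report "expected one of ..."
-- messages. This is a sketch-level description; the precise handler
-- signature depends on the grammar's %error and %monad declarations.)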
happyFail explist i tk (HappyState (action)) sts stk = -- trace "entering error recovery" $ action 1# 1# tk (HappyState (action)) sts ((HappyErrorToken (Happy_GHC_Exts.I# (i))) `HappyStk` stk) -- Internal happy errors: notHappyAtAll :: a notHappyAtAll = Prelude.error "Internal Happy error\n" ----------------------------------------------------------------------------- -- Hack to get the typechecker to accept our action functions happyTcHack :: Happy_GHC_Exts.Int# -> a -> a happyTcHack x y = y {-# INLINE happyTcHack #-} ----------------------------------------------------------------------------- -- Seq-ing. If the --strict flag is given, then Happy emits -- happySeq = happyDoSeq -- otherwise it emits -- happySeq = happyDontSeq happyDoSeq, happyDontSeq :: a -> b -> b happyDoSeq a b = a `Prelude.seq` b happyDontSeq a b = b ----------------------------------------------------------------------------- -- Don't inline any functions from the template. GHC has a nasty habit -- of deciding to inline happyGoto everywhere, which increases the size of -- the generated parser quite a bit. {-# NOINLINE happyShift #-} {-# NOINLINE happySpecReduce_0 #-} {-# NOINLINE happySpecReduce_1 #-} {-# NOINLINE happySpecReduce_2 #-} {-# NOINLINE happySpecReduce_3 #-} {-# NOINLINE happyReduce #-} {-# NOINLINE happyMonadReduce #-} {-# NOINLINE happyGoto #-} {-# NOINLINE happyFail #-} -- end of Happy Template. happy-1.20.1.1/doc/0000755000000000000000000000000007346545000012011 5ustar0000000000000000happy-1.20.1.1/doc/Makefile0000644000000000000000000000012307346545000013445 0ustar0000000000000000include config.mk XML_DOC = happy INSTALL_XML_DOC = happy include docbook-xml.mk happy-1.20.1.1/doc/aclocal.m40000644000000000000000000001261407346545000013655 0ustar0000000000000000# FP_GEN_DOCBOOK_XML # ------------------ # Generates a DocBook XML V4.2 document in conftest.xml. AC_DEFUN([FP_GEN_DOCBOOK_XML], [rm -f conftest.xml cat > conftest.xml << EOF A DocBook Test Document A Chapter Title This is a paragraph, referencing . Another Chapter Title This is another paragraph, referencing . EOF ]) # FP_GEN_DOCBOOK_XML # FP_PROG_XSLTPROC # ---------------- # Sets the output variable XsltprocCmd to the full path of the XSLT processor # xsltproc. XsltprocCmd is empty if xsltproc could not be found. AC_DEFUN([FP_PROG_XSLTPROC], [AC_PATH_PROG([XsltprocCmd], [xsltproc]) if test -z "$XsltprocCmd"; then AC_MSG_WARN([cannot find xsltproc in your PATH, you will not be able to build the documentation]) fi ])# FP_PROG_XSLTPROC # FP_DIR_DOCBOOK_XSL(XSL-DIRS) # ---------------------------- # Check which of the directories XSL-DIRS contains DocBook XSL stylesheets. The # output variable DIR_DOCBOOK_XSL will contain the first usable directory or # will be empty if none could be found. 
AC_DEFUN([FP_DIR_DOCBOOK_XSL], [AC_REQUIRE([FP_PROG_XSLTPROC])dnl if test -n "$XsltprocCmd"; then AC_CACHE_CHECK([for DocBook XSL stylesheet directory], fp_cv_dir_docbook_xsl, [FP_GEN_DOCBOOK_XML fp_cv_dir_docbook_xsl=no for fp_var in $1; do if $XsltprocCmd ${fp_var}/html/docbook.xsl conftest.xml > /dev/null 2>&1; then fp_cv_dir_docbook_xsl=$fp_var break fi done rm -rf conftest*]) fi if test x"$fp_cv_dir_docbook_xsl" = xno; then AC_MSG_WARN([cannot find DocBook XSL stylesheets, you will not be able to build the documentation]) DIR_DOCBOOK_XSL= else DIR_DOCBOOK_XSL=$fp_cv_dir_docbook_xsl fi AC_SUBST([DIR_DOCBOOK_XSL]) ])# FP_DIR_DOCBOOK_XSL # FP_PROG_XMLLINT # ---------------- # Sets the output variable XmllintCmd to the full path of the XML tool # xmllint. XmllintCmd is empty if xmllint could not be found. AC_DEFUN([FP_PROG_XMLLINT], [AC_PATH_PROG([XmllintCmd], [xmllint]) if test -z "$XmllintCmd"; then AC_MSG_WARN([cannot find xmllint in your PATH, you will not be able to validate your documentation]) fi ])# FP_PROG_XMLLINT # FP_CHECK_DOCBOOK_DTD # -------------------- AC_DEFUN([FP_CHECK_DOCBOOK_DTD], [AC_REQUIRE([FP_PROG_XMLLINT])dnl if test -n "$XmllintCmd"; then AC_MSG_CHECKING([for DocBook DTD]) FP_GEN_DOCBOOK_XML if $XmllintCmd --valid --noout conftest.xml > /dev/null 2>&1; then AC_MSG_RESULT([ok]) else AC_MSG_RESULT([failed]) AC_MSG_WARN([cannot find a DTD for DocBook XML V4.2, you will not be able to validate your documentation]) AC_MSG_WARN([check your XML_CATALOG_FILES environment variable and/or /etc/xml/catalog]) fi rm -rf conftest* fi ])# FP_CHECK_DOCBOOK_DTD # FP_GEN_FO # ------------------ # Generates a formatting objects document in conftest.fo. AC_DEFUN([FP_GEN_FO], [rm -f conftest.fo cat > conftest.fo << EOF Test! EOF ]) # FP_GEN_FO # FP_PROG_FOP # ----------- # Set the output variable 'FopCmd' to the first working 'fop' in the current # 'PATH'. Note that /usr/bin/fop is broken in SuSE 9.1 (unpatched), so try # /usr/share/fop/fop.sh in that case (or no 'fop'), too. AC_DEFUN([FP_PROG_FOP], [AC_PATH_PROGS([FopCmd1], [fop]) if test -n "$FopCmd1"; then AC_CACHE_CHECK([for $FopCmd1 usability], [fp_cv_fop_usability], [FP_GEN_FO if "$FopCmd1" -fo conftest.fo -ps conftest.ps > /dev/null 2>&1; then fp_cv_fop_usability=yes else fp_cv_fop_usability=no fi rm -rf conftest*]) if test x"$fp_cv_fop_usability" = xyes; then FopCmd=$FopCmd1 fi fi if test -z "$FopCmd"; then AC_PATH_PROGS([FopCmd2], [fop.sh], , [/usr/share/fop]) FopCmd=$FopCmd2 fi AC_SUBST([FopCmd]) ])# FP_PROG_FOP # FP_PROG_FO_PROCESSOR # -------------------- # Try to find an FO processor. PassiveTeX output is sometimes a bit strange, so # try FOP first. Sets the output variables FopCmd, XmltexCmd, DvipsCmd, and # PdfxmltexCmd. 
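# (Usage sketch: this package's doc/configure.ac invokes
# FP_CHECK_DOCBOOK_DTD and FP_DIR_DOCBOOK_XSL([...]) -- the latter
# AC_REQUIREs FP_PROG_XSLTPROC -- and AC_CONFIG_FILES then substitutes the
# discovered tool paths into doc/config.mk.)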
AC_DEFUN([FP_PROG_FO_PROCESSOR], [AC_REQUIRE([FP_PROG_FOP]) AC_PATH_PROG([XmltexCmd], [xmltex]) AC_PATH_PROG([DvipsCmd], [dvips]) if test -z "$FopCmd"; then if test -z "$XmltexCmd"; then AC_MSG_WARN([cannot find an FO => DVI converter, you will not be able to build DVI or PostScript documentation]) else if test -z "$DvipsCmd"; then AC_MSG_WARN([cannot find a DVI => PS converter, you will not be able to build PostScript documentation]) fi fi AC_PATH_PROG([PdfxmltexCmd], [pdfxmltex]) if test -z "$PdfxmltexCmd"; then AC_MSG_WARN([cannot find an FO => PDF converter, you will not be able to build PDF documentation]) fi elif test -z "$XmltexCmd"; then AC_MSG_WARN([cannot find an FO => DVI converter, you will not be able to build DVI documentation]) fi ])# FP_PROG_FO_PROCESSOR happy-1.20.1.1/doc/config.mk.in0000644000000000000000000000065407346545000014221 0ustar0000000000000000#----------------------------------------------------------------------------- # DocBook XML stuff XSLTPROC = @XsltprocCmd@ XMLLINT = @XmllintCmd@ FOP = @FopCmd@ XMLTEX = @XmltexCmd@ DBLATEX = @DbLatexCmd@ DIR_DOCBOOK_XSL = @DIR_DOCBOOK_XSL@ XSLTPROC_LABEL_OPTS = --stringparam toc.section.depth 3 \ --stringparam section.autolabel 1 \ --stringparam section.label.includes.component.label 1 happy-1.20.1.1/doc/configure.ac0000644000000000000000000000116107346545000014276 0ustar0000000000000000 AC_INIT([Haddock docs], [1.0], [simonmar@microsoft.com], []) AC_CONFIG_SRCDIR([Makefile]) dnl ** check for DocBook toolchain FP_CHECK_DOCBOOK_DTD FP_DIR_DOCBOOK_XSL([/usr/share/xml/docbook/stylesheet/nwalsh/current /usr/share/xml/docbook/stylesheet/nwalsh /usr/share/sgml/docbook/docbook-xsl-stylesheets* /usr/share/sgml/docbook/xsl-stylesheets* /opt/kde?/share/apps/ksgmltools2/docbook/xsl /usr/share/docbook-xsl /usr/share/sgml/docbkxsl /usr/local/share/xsl/docbook /sw/share/xml/xsl/docbook-xsl /usr/share/xml/docbook/xsl-stylesheets*]) AC_PATH_PROG(DbLatexCmd,dblatex) AC_CONFIG_FILES([config.mk happy.1]) AC_OUTPUT happy-1.20.1.1/doc/docbook-xml.mk0000644000000000000000000000633207346545000014564 0ustar0000000000000000#----------------------------------------------------------------------------- # DocBook XML .PHONY: html html-no-chunks chm HxS fo dvi ps pdf ifneq "$(XML_DOC)" "" all :: html # multi-file XML document: main document name is specified in $(XML_DOC), # sub-documents (.xml files) listed in $(XML_SRCS). 
ifeq "$(XML_SRCS)" "" XML_SRCS = $(wildcard *.xml) endif XML_HTML = $(addsuffix /index.html,$(basename $(XML_DOC))) XML_HTML_NO_CHUNKS = $(addsuffix .html,$(XML_DOC)) XML_CHM = $(addsuffix .chm,$(XML_DOC)) XML_HxS = $(addsuffix .HxS,$(XML_DOC)) XML_DVI = $(addsuffix .dvi,$(XML_DOC)) XML_PS = $(addsuffix .ps,$(XML_DOC)) XML_PDF = $(addsuffix .pdf,$(XML_DOC)) $(XML_HTML) $(XML_NO_CHUNKS_HTML) $(XML_FO) $(XML_DVI) $(XML_PS) $(XML_PDF) :: $(XML_SRCS) html :: $(XML_HTML) html-no-chunks :: $(XML_HTML_NO_CHUNKS) chm :: $(XML_CHM) HxS :: $(XML_HxS) dvi :: $(XML_DVI) ps :: $(XML_PS) pdf :: $(XML_PDF) CLEAN_FILES += $(XML_HTML_NO_CHUNKS) $(XML_DVI) $(XML_PS) $(XML_PDF) FPTOOLS_CSS = fptools.css clean :: $(RM) -rf $(XML_DOC).out $(basename $(XML_DOC)) $(basename $(XML_DOC))-htmlhelp $(XML_DOC).pdf $(XML_DOC).dvi $(XML_DOC).ps validate :: $(XMLLINT) --valid --noout $(XMLLINT_OPTS) $(XML_DOC).xml endif #----------------------------------------------------------------------------- # DocBook XML suffix rules # %.html : %.xml $(XSLTPROC) --output $@ \ --stringparam html.stylesheet $(FPTOOLS_CSS) \ $(XSLTPROC_LABEL_OPTS) $(XSLTPROC_OPTS) \ $(DIR_DOCBOOK_XSL)/html/docbook.xsl $< %/index.html : %.xml $(RM) -rf $(dir $@) $(XSLTPROC) --stringparam base.dir $(dir $@) \ --stringparam use.id.as.filename 1 \ --stringparam html.stylesheet $(FPTOOLS_CSS) \ $(XSLTPROC_LABEL_OPTS) $(XSLTPROC_OPTS) \ $(DIR_DOCBOOK_XSL)/html/chunk.xsl $< cp $(FPTOOLS_CSS) $(dir $@) # Note: Numeric labeling seems to be uncommon for HTML Help %-htmlhelp/index.html : %.xml $(RM) -rf $(dir $@) $(XSLTPROC) --stringparam base.dir $(dir $@) \ --stringparam manifest.in.base.dir 1 \ --stringparam htmlhelp.chm "..\\"$(basename $<).chm \ $(XSLTPROC_OPTS) \ $(DIR_DOCBOOK_XSL)/htmlhelp/htmlhelp.xsl $< %-htmlhelp2/collection.HxC : %.xml $(RM) -rf $(dir $@) $(XSLTPROC) --stringparam base.dir $(dir $@) \ --stringparam use.id.as.filename 1 \ --stringparam manifest.in.base.dir 1 \ $(XSLTPROC_OPTS) \ $(DIR_DOCBOOK_XSL)/htmlhelp2/htmlhelp2.xsl $< # TODO: Detect hhc & Hxcomp via autoconf # # Two obstacles here: # # * The reason for the strange "if" below is that hhc returns 0 on error and 1 # on success, the opposite of what shells and make expect. # # * There seems to be some trouble with DocBook indices, but the *.chm looks OK, # anyway, therefore we pacify make by "|| true". Ugly... 
# %.chm : %-htmlhelp/index.html ( cd $(dir $<) && if hhc htmlhelp.hhp ; then false ; else true ; fi ) || true %.HxS : %-htmlhelp2/collection.HxC ( cd $(dir $<) && if Hxcomp -p collection.HxC -o ../$@ ; then false ; else true ; fi ) ifneq "$(DBLATEX)" "" %.pdf : %.xml $(DBLATEX) -tpdf $< %.dvi : %.xml $(DBLATEX) -tdvi $< %.ps : %.xml $(DBLATEX) -tps $< endif happy-1.20.1.1/doc/fptools.css0000644000000000000000000000142407346545000014212 0ustar0000000000000000div { font-family: sans-serif; color: black; background: white } h1, h2, h3, h4, h5, h6, p.title { color: #005A9C } h1 { font: 170% sans-serif } h2 { font: 140% sans-serif } h3 { font: 120% sans-serif } h4 { font: bold 100% sans-serif } h5 { font: italic 100% sans-serif } h6 { font: small-caps 100% sans-serif } pre { font-family: monospace; border-width: 1px; border-style: solid; padding: 0.3em } pre.screen { color: #006400 } pre.programlisting { color: maroon } div.example { background-color: #fffcf5; margin: 1ex 0em; border: solid #412e25 1px; padding: 0ex 0.4em } a:link { color: #0000C8 } a:hover { background: #FFFFA8 } a:active { color: #D00000 } a:visited { color: #680098 } happy-1.20.1.1/doc/happy.1.in0000644000000000000000000001175107346545000013626 0ustar0000000000000000.TH HAPPY 1 "2000-12-23" "Glasgow FP Suite" "Happy Parser Generator" .SH NAME happy \- the parser generator for Haskell .SH SYNOPSIS .B happy [\fIOPTION\fR]... \fIfile\fR [\fIOPTION\fR]... .SH DESCRIPTION This manual page documents briefly the .BR happy command. .PP This manual page was written for the Debian GNU/Linux distribution because the original program does not have a manual page. Instead, it has documentation in various other formats, including DVI, Info and HTML; see below. .PP .B Happy is a parser generator system for Haskell. `HAPPY' is a dyslexic acronym for `A Yacc-like Haskell Parser generator'. .PP There are two types of grammar files, .IR file.y " and " file.ly , with the latter observing the reverse comment bird track convention (i.e. each code line must begin with `>'). The examples distributed with .B Happy are all of the .I .ly form. .PP Caveat: When using .I hbc (Chalmers Haskell) the command argument structure is slightly different. This is because the hbc run time system takes some flags as its own (for setting things like the heap size, etc). This problem can be circumvented by adding a single dash (`-') to your command line. So when using a hbc generated version of Happy, the argument structure is: .B happy \- [\fIOPTION\fR]... \fIfile\fR [\fIOPTION\fR]... .SH OPTIONS The programs follow the usual GNU command line syntax, with long options starting with two dashes (`--'). A summary of options is included below. For a complete description, see the other documentation. .TP .BR \-h ", " \-\-help Show summary of options. .TP .BR \-v ", " \-\-version Print version information on standard output then exit successfully. .TP .BR \-a ", " \-\-array Instructs Happy to generate a parser using an array-based shift reduce parser. When used in conjunction with \fB\-g\fR, the arrays will be encoded as strings, resulting in faster parsers. Without \fB\-g\fR, standard Haskell arrays will be used. .TP .BR \-g ", " \-\-ghc Instructs Happy to generate a parser that uses GHC-specific extensions to obtain faster code. .TP .BR \-c ", " \-\-coerce Use GHC's .B unsafeCoerce# extension to generate smaller faster parsers. One drawback is that some type safety is lost, which means that a parser generated with .B \-c may compile fine but crash at run-time. 
Be sure to compile your grammar without .B \-c first to ensure it is type-correct. This option has quite a significant effect on the performance of the resulting parser, but remember that parsers generated this way can only be compiled by GHC\ 3.02 and above. This option may only be used in conjunction with .BR \-g . .TP .BR \-d ", " \-\-debug Generate a parser that will print debugging information to .I stderr at run-time, including all the shifts, reductions, state transitions and token inputs performed by the parser. This option may only be used in conjunction with .BR \-a . .TP \fB\-i\fR [\fIFILE\fR], \fB\-\-info\fR[=\fIFILE\fR] Directs Happy to produce an info file containing detailed information about the grammar, parser states, parser actions, and conflicts. Info files are vital during the debugging of grammars. The filename argument is optional, and if omitted the info file will be written to .I FILE.info (where .I FILE is the input file name with any extension removed). .TP \fB\-o\fR \fIFILE\fR, \fB\-\-outfile=\fIFILE Specifies the destination of the generated parser module. If omitted, the parser will be placed in .IR FILE.hs ", where " FILE is the name of the input file with any extension removed. If .I FILE is .B - the generated parser is sent to the standard output. .TP \fB\-m\fR \fINAME\fR, \fB\-\-magic-name=\fINAME Happy prefixes all the symbols it uses internally with either .BR happy " or " Happy . To use a different string, for example if the use of .B happy is conflicting with one of your own functions, specify the prefix using the .B \-m option. .TP \fB\-t\fR \fIDIR\fR, \fB\-\-template=\fIDIR Instructs Happy to use this directory when looking for template files: these files contain the static code that Happy includes in every generated parser. You shouldn't need to use this option if Happy is properly configured for your computer. .TP \fB\-l\fR, \fB\-\-glr\fI Instructs Happy to output a GLR parser instead of an LALR(1) parser. .TP \fB\-k\fR, \fB\-\-decode\fI Causes the GLR parser to generate code for decoding the parse forest to a list of semantic results (requires \fB\--ghc\fR). .TP \fB\-f\fR, \fB\-\-filter\fI Causes the GLR parser to filter out nodes which aren't required for the semantic results (an experimental optimisation, requires \fB\--ghc\fR). .SH FILES .I @LIBDIR@ .SH "SEE ALSO" .BR @DOCDIR@ , the Happy homepage .UR http://haskell.org/happy/ (http://haskell.org/happy/) .UE .SH COPYRIGHT Happy Version @VERSION@ Copyright (c) 1993-1996 Andy Gill, Simon Marlow; (c) 1997-2001 Simon Marlow .SH AUTHOR This manual page was written by Michael Weber , for the Debian GNU/Linux system (but may be used by others). .\" Local variables: .\" mode: nroff .\" End: happy-1.20.1.1/doc/happy.xml0000644000000000000000000050363007346545000013663 0ustar0000000000000000 2001-4-27 Happy User Guide Simon Marlow Andy Gill
simonmar@microsoft.com
Copyright 1997-2009, Simon Marlow. This document describes Happy, the Haskell Parser Generator, version 1.18.
Introduction Happy is a parser generator system for Haskell, similar to the tool yacc for C. Like yacc, it takes a file containing an annotated BNF specification of a grammar and produces a Haskell module containing a parser for the grammar. yacc Happy is flexible: you can have several Happy parsers in the same program, and each parser may have multiple entry points. Happy can work in conjunction with a lexical analyser supplied by the user (either hand-written or generated by another program), or it can parse a stream of characters directly (but this isn't practical in most cases). In a future version we hope to include a lexical analyser generator with Happy as a single package. Parsers generated by Happy are fast; generally faster than an equivalent parser written using parsing combinators or similar tools. Furthermore, any future improvements made to Happy will benefit an existing grammar, without need for a rewrite. Happy is sufficiently powerful to parse full Haskell - GHC itself uses a Happy parser. hsparser Haskell parser hsparser Happy can currently generate four types of parser from a given grammar, the intention being that we can experiment with different kinds of functional code to see which is the best, and compiler writers can use the different types of parser to tune their compilers. The types of parser supported are: standard Haskell 98 (should work with any compiler that compiles Haskell 98). standard Haskell using arrays arrays back-endsarrays (this is not the default because we have found this generates slower parsers than ). Haskell with GHC GHC back-endsGHC (Glasgow Haskell) extensions. This is a slightly faster option than for Glasgow Haskell users. GHC Haskell with string-encoded arrays. This is the fastest/smallest option for GHC users. If you're using GHC, the optimum flag settings are -agc (see ). Happy can also generate parsers which will dump debugging information at run time, showing state transitions and the input tokens to the parser. Compatibility Happy is written in Glasgow Haskell. This means that (for the time being), you need GHC to compile it. Any version of GHC >= 6.2 should work. Remember: parsers produced using Happy should compile without difficulty under any Haskell 98 compiler or interpreter.With one exception: if you have a production with a polymorphic type signature, then a compiler that supports local universal quantification is required. See . Reporting Bugs bugs, reporting Any bugs found in Happy should be reported to me: Simon Marlow marlowsd@gmail.com including all the relevant information: the compiler used to compile Happy, the command-line options used, your grammar file or preferably a cut-down example showing the problem, and a description of what goes wrong. A patch to fix the problem would also be greatly appreciated. Requests for new features should also be sent to the above address, especially if accompanied by patches :-). License License Previous versions of Happy were covered by the GNU general public license. We're now distributing Happy with a less restrictive BSD-style license. If this license doesn't work for you, please get in touch.
Copyright 2009, Simon Marlow and Andy Gill. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Obtaining Happy Happy's web page can be found at http://www.haskell.org/happy/. Happy source and binaries can be downloaded from there.
Using <application>Happy</application> Users of Yacc will find Happy quite familiar. The basic idea is as follows: Define the grammar you want to parse in a Happy grammar file. Run the grammar through Happy, to generate a compilable Haskell module. Use this module as part of your Haskell program, usually in conjunction with a lexical analyser (a function that splits the input into tokens, the basic unit of parsing). Let's run through an example. We'll implement a parser for a simple expression syntax, consisting of integers, variables, the operators +, -, *, /, and the form let var = exp in exp. The grammar file starts off like this: { module Main where } At the top of the file is an optional module header, module header which is just a Haskell module header enclosed in braces. This code is emitted verbatim into the generated module, so you can put any Haskell code here at all. In a grammar file, Haskell code is always contained between curly braces to distinguish it from the grammar. In this case, the parser will be a standalone program so we'll call the module Main. Next comes a couple of declarations: %name calc %tokentype { Token } %error { parseError } %name %tokentype %error The first line declares the name of the parsing function that Happy will generate, in this case calc. In many cases, this is the only symbol you need to export from the module. The second line declares the type of tokens that the parser will accept. The parser (i.e. the function calc) will be of type [Token] -> T, where T is the return type of the parser, determined by the production rules below. The %error directive tells Happy the name of a function it should call in the event of a parse error. More about this later. Now we declare all the possible tokens: %token let { TokenLet } in { TokenIn } int { TokenInt $$ } var { TokenVar $$ } '=' { TokenEq } '+' { TokenPlus } '-' { TokenMinus } '*' { TokenTimes } '/' { TokenDiv } '(' { TokenOB } ')' { TokenCB } %token The symbols on the left are the tokens as they will be referred to in the rest of the grammar, and to the right of each token enclosed in braces is a Haskell pattern that matches the token. The parser will expect to receive a stream of tokens, each of which will match one of the given patterns (the definition of the Token datatype is given later). The $$ symbol is a placeholder that represents the value of this token. Normally the value of a token is the token itself, but by using the $$ symbol you can specify some component of the token object to be the value. $$ Like yacc, we include %% here, for no real reason. %% Now we have the production rules for the grammar. Exp : let var '=' Exp in Exp { Let $2 $4 $6 } | Exp1 { Exp1 $1 } Exp1 : Exp1 '+' Term { Plus $1 $3 } | Exp1 '-' Term { Minus $1 $3 } | Term { Term $1 } Term : Term '*' Factor { Times $1 $3 } | Term '/' Factor { Div $1 $3 } | Factor { Factor $1 } Factor : int { Int $1 } | var { Var $1 } | '(' Exp ')' { Brack $2 } non-terminal Each production consists of a non-terminal symbol on the left, followed by a colon, followed by one or more expansions on the right, separated by |. Each expansion has some Haskell code associated with it, enclosed in braces as usual. The way to think about a parser is with each symbol having a value: we defined the values of the tokens above, and the grammar defines the values of non-terminal symbols in terms of sequences of other symbols (either tokens or non-terminals). In a production like this: n : t_1 ... 
t_n { E } whenever the parser finds the symbols t_1...t_n in the token stream, it constructs the symbol n and gives it the value E, which may refer to the values of t_1...t_n using the symbols $1...$n. The parser reduces the input using the rules in the grammar until just one symbol remains: the first symbol defined in the grammar (namely Exp in our example). The value of this symbol is the return value from the parser. To complete the program, we need some extra code. The grammar file may optionally contain a final code section, enclosed in curly braces. { All parsers must include a function to be called in the event of a parse error. In the %error directive earlier, we specified that the function to be called on a parse error is parseError: parseError :: [Token] -> a parseError _ = error "Parse error" Note that parseError must be polymorphic in its return type a, which usually means it must be a call to error. We'll see in how to wrap the parser in a monad so that we can do something more sensible with errors. It's also possible to keep track of line numbers in the parser for use in error messages, this is described in . Next we can declare the data type that represents the parsed expression: data Exp = Let String Exp Exp | Exp1 Exp1 deriving Show data Exp1 = Plus Exp1 Term | Minus Exp1 Term | Term Term deriving Show data Term = Times Term Factor | Div Term Factor | Factor Factor deriving Show data Factor = Int Int | Var String | Brack Exp deriving Show And the data structure for the tokens... data Token = TokenLet | TokenIn | TokenInt Int | TokenVar String | TokenEq | TokenPlus | TokenMinus | TokenTimes | TokenDiv | TokenOB | TokenCB deriving Show ... and a simple lexer that returns this data structure. lexer :: String -> [Token] lexer [] = [] lexer (c:cs) | isSpace c = lexer cs | isAlpha c = lexVar (c:cs) | isDigit c = lexNum (c:cs) lexer ('=':cs) = TokenEq : lexer cs lexer ('+':cs) = TokenPlus : lexer cs lexer ('-':cs) = TokenMinus : lexer cs lexer ('*':cs) = TokenTimes : lexer cs lexer ('/':cs) = TokenDiv : lexer cs lexer ('(':cs) = TokenOB : lexer cs lexer (')':cs) = TokenCB : lexer cs lexNum cs = TokenInt (read num) : lexer rest where (num,rest) = span isDigit cs lexVar cs = case span isAlpha cs of ("let",rest) -> TokenLet : lexer rest ("in",rest) -> TokenIn : lexer rest (var,rest) -> TokenVar var : lexer rest And finally a top-level function to take some input, parse it, and print out the result. main = getContents >>= print . calc . lexer } And that's it! A whole lexer, parser and grammar in a few dozen lines. Another good example is Happy's own parser. Several features in Happy were developed using this as an example. info file To generate the Haskell module for this parser, type the command happy example.y (where example.y is the name of the grammar file). The Haskell module will be placed in a file named example.hs. Additionally, invoking the command happy example.y -i will produce the file example.info which contains detailed information about the parser, including states and reduction rules (see ). This can be invaluable for debugging parsers, but requires some knowledge of the operation of a shift-reduce parser. Returning other datatypes In the above example, we used a data type to represent the syntax being parsed. 
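For concreteness, loading the generated module into GHCi and parsing a small expression might look something like this (an illustrative session; the result is laid out by hand for readability):

ghci> calc (lexer "let x = 1 in x + 2")
Let "x" (Exp1 (Term (Factor (Int 1))))
        (Exp1 (Plus (Term (Factor (Var "x"))) (Factor (Int 2))))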
However, there's no reason why it has to be this way: you could calculate the value of the expression on the fly, using productions like this: Term : Term '*' Factor { $1 * $3 } | Term '/' Factor { $1 / $3 } | Factor { $1 } The value of a Term would be the value of the expression itself, and the parser could return an integer. This works for simple expression types, but our grammar includes variables and the let syntax. How do we know the value of a variable while we're parsing it? We don't, but since the Haskell code for a production can be anything at all, we could make it a function that takes an environment of variable values, and returns the computed value of the expression: Exp : let var '=' Exp in Exp { \p -> $6 (($2,$4 p):p) } | Exp1 { $1 } Exp1 : Exp1 '+' Term { \p -> $1 p + $3 p } | Exp1 '-' Term { \p -> $1 p - $3 p } | Term { $1 } Term : Term '*' Factor { \p -> $1 p * $3 p } | Term '/' Factor { \p -> $1 p `div` $3 p } | Factor { $1 } Factor : int { \p -> $1 } | var { \p -> case lookup $1 p of Nothing -> error "no var" Just i -> i } | '(' Exp ')' { $2 } The value of each production is a function from an environment p to a value. When parsing a let construct, we extend the environment with the new binding to find the value of the body, and the rule for var looks up its value in the environment. There's something you can't do in yacc :-) Parsing sequences A common feature in grammars is a sequence of a particular syntactic element. In EBNF, we'd write something like n+ to represent a sequence of one or more ns, and n* for zero or more. Happy doesn't support this syntax explicitly, but you can define the equivalent sequences using simple productions. For example, the grammar for Happy itself contains a rule like this: prods : prod { [$1] } | prods prod { $2 : $1 } In other words, a sequence of productions is either a single production, or a sequence of productions followed by a single production. This recursive rule defines a sequence of one or more productions. One thing to note about this rule is that we used left recursion to define it - we could have written it like this: recursion, left vs. right prods : prod { [$1] } | prod prods { $1 : $2 } The only reason we used left recursion is that Happy is more efficient at parsing left-recursive rules; they result in a constant stack-space parser, whereas right-recursive rules require stack space proportional to the length of the list being parsed. This can be extremely important where long sequences are involved, for instance in automatically generated output. For example, the parser in GHC used to use right-recursion to parse lists, and as a result it failed to parse some Happy-generated modules due to running out of stack space! One implication of using left recursion is that the resulting list comes out reversed, and you have to reverse it again to get it in the original order. Take a look at the Happy grammar for Haskell for many examples of this. Parsing sequences of zero or more elements requires a trivial change to the above pattern: prods : {- empty -} { [] } | prods prod { $2 : $1 } Yes - empty productions are allowed. The normal convention is to include the comment {- empty -} to make it more obvious to a reader of the code what's going on. Sequences with separators A common type of sequence is one with a separator: for instance function bodies in C consist of statements separated by semicolons. 
To parse this kind of sequence we use a production like this: stmts : stmt { [$1] } | stmts ';' stmt { $3 : $1 } If the ; is to be a terminator rather than a separator (i.e. there should be one following each statement), we can remove the semicolon from the above rule and redefine stmt as stmt : stmt1 ';' { $1 } where stmt1 is the real definition of statements. We might like to allow extra semicolons between statements, to be a bit more liberal in what we allow as legal syntax. We probably just want the parser to ignore these extra semicolons, and not generate a ``null statement'' value or something. The following rule parses a sequence of zero or more statements separated by semicolons, in which the statements may be empty: stmts : stmts ';' stmt { $3 : $1 } | stmts ';' { $1 } | stmt { [$1] } | {- empty -} { [] } Parsing sequences of one or more possibly null statements is left as an exercise for the reader... Using Precedences precedences associativity Going back to our earlier expression-parsing example, wouldn't it be nicer if we didn't have to explicitly separate the expressions into terms and factors, merely to make it clear that '*' and '/' operators bind more tightly than '+' and '-'? We could just change the grammar as follows (making the appropriate changes to the expression datatype too): Exp : let var '=' Exp in Exp { Let $2 $4 $6 } | Exp '+' Exp { Plus $1 $3 } | Exp '-' Exp { Minus $1 $3 } | Exp '*' Exp { Times $1 $3 } | Exp '/' Exp { Div $1 $3 } | '(' Exp ')' { Brack $2 } | int { Int $1 } | var { Var $1 } but now Happy will complain that there are shift/reduce conflicts because the grammar is ambiguous - we haven't specified whether e.g. 1 + 2 * 3 is to be parsed as 1 + (2 * 3) or (1 + 2) * 3. Happy allows these ambiguities to be resolved by specifying the precedences of the operators involved using directives in the headerUsers of yacc will find this familiar, Happy's precedence scheme works in exactly the same way.: ... %right in %left '+' '-' %left '*' '/' %% ... %left directive %right directive %nonassoc directive The %left or %right directive is followed by a list of terminals, and declares all these tokens to be left or right-associative respectively. The precedence of these tokens with respect to other tokens is established by the order of the %left and %right directives: earlier means lower precedence. A higher precedence causes an operator to bind more tightly; in our example above, because '*' has a higher precedence than '+', the expression 1 + 2 * 3 will parse as 1 + (2 * 3). What happens when two operators have the same precedence? This is when the associativity comes into play. Operators specified as left associative will cause expressions like 1 + 2 - 3 to parse as (1 + 2) - 3, whereas right-associative operators would parse as 1 + (2 - 3). There is also a %nonassoc directive which indicates that the specified operators may not be used together. For example, if we add the comparison operators '>' and '<' to our grammar, then we would probably give their precedence as: ... %right in %nonassoc '>' '<' %left '+' '-' %left '*' '/' %% ... which indicates that '>' and '<' bind less tightly than the other operators, and the non-associativity causes expressions such as 1 > 2 > 3 to be disallowed. How precedence works The precedence directives, %left, %right and %nonassoc, assign precedence levels to the tokens in the declaration. 
A rule in the grammar may also have a precedence: if the last terminal in the right hand side of the rule has a precedence, then this is the precedence of the whole rule. The precedences are used to resolve ambiguities in the grammar. If there is a shift/reduce conflict, then the precedence of the rule and the lookahead token are examined in order to resolve the conflict: If the precedence of the rule is higher, then the conflict is resolved as a reduce. If the precedence of the lookahead token is higher, then the conflict is resolved as a shift. If the precedences are equal, then If the token is left-associative, then reduce If the token is right-associative, then shift If the token is non-associative, then fail If either the rule or the token has no precedence, then the default is to shift (these conflicts are reported by Happy, whereas ones that are automatically resolved by the precedence rules are not). Context-dependent Precedence The precedence of an individual rule can be overriden, using context precedence. This is useful when, for example, a particular token has a different precedence depending on the context. A common example is the minus sign: it has high precedence when used as prefix negation, but a lower precedence when used as binary subtraction. We can implement this in Happy as follows: %right in %nonassoc '>' '<' %left '+' '-' %left '*' '/' %left NEG %% Exp : let var '=' Exp in Exp { Let $2 $4 $6 } | Exp '+' Exp { Plus $1 $3 } | Exp '-' Exp { Minus $1 $3 } | Exp '*' Exp { Times $1 $3 } | Exp '/' Exp { Div $1 $3 } | '(' Exp ')' { Brack $2 } | '-' Exp %prec NEG { Negate $2 } | int { Int $1 } | var { Var $1 } %prec directive We invent a new token NEG as a placeholder for the precedence of our prefix negation rule. The NEG token doesn't need to appear in a %token directive. The prefix negation rule has a %prec NEG directive attached, which overrides the default precedence for the rule (which would normally be the precedence of '-') with the precedence of NEG. The %shift directive for lowest precedence rules Rules annotated with the %shift directive have the lowest possible precedence and are non-associative. A shift/reduce conflict that involves such a rule is resolved as a shift. One can think of %shift as %prec SHIFT such that SHIFT has lower precedence than any other token. This is useful in conjunction with %expect 0 to explicitly point out all rules in the grammar that result in conflicts, and thereby resolve such conflicts. Type Signatures type signatures in grammar Happy allows you to include type signatures in the grammar file itself, to indicate the type of each production. This has several benefits: Documentation: including types in the grammar helps to document the grammar for someone else (and indeed yourself) reading the code. Fixing type errors in the generated module can become slightly easier if Happy has inserted type signatures for you. This is a slightly dubious benefit, since type errors in the generated module are still somewhat difficult to find. Type signatures generally help the Haskell compiler to compile the parser faster. This is important when really large grammar files are being used. 
The syntax for type signatures in the grammar file is as follows: stmts :: { [ Stmt ] } stmts : stmts stmt { $2 : $1 } | stmt { [$1] } In fact, you can leave out the superfluous occurrence of stmts: stmts :: { [ Stmt ] } : stmts stmt { $2 : $1 } | stmt { [$1] } Note that currently, you have to include type signatures for all the productions in the grammar to benefit from the second and third points above. This is due to boring technical reasons, but it is hoped that this restriction can be removed in the future. It is possible to have productions with polymorphic or overloaded types. However, because the type of each production becomes the argument type of a constructor in an algebraic datatype in the generated source file, compiling the generated file requires a compiler that supports local universal quantification. GHC (with the option) and Hugs are known to support this. Monadic Parsers monadic parsers Happy has support for threading a monad through the generated parser. This might be useful for several reasons: Handling parse errors parse errors handling by using an exception monad (see ). Keeping track of line numbers line numbers in the input file, for example for use in error messages (see ). Performing IO operations during parsing. Parsing languages with context-dependencies (such as C) require some state in the parser. Adding monadic support to your parser couldn't be simpler. Just add the following directive to the declaration section of the grammar file: %monad { <type> } [ { <then> } { <return> } ] %monad where <type> is the type constructor for the monad, <then> is the bind operation of the monad, and <return> is the return operation. If you leave out the names for the bind and return operations, Happy assumes that <type> is an instance of the standard Haskell type class Monad and uses the overloaded names for the bind and return operations. When this declaration is included in the grammar, Happy makes a couple of changes to the generated parser: the types of the main parser function and parseError (the function named in %error) become [Token] -> P a where P is the monad type constructor, and the function must be polymorphic in a. In other words, Happy adds an application of the <return> operation defined in the declaration above, around the result of the parser (parseError is affected because it must have the same return type as the parser). And that's all it does. This still isn't very useful: all you can do is return something of monadic type from parseError. How do you specify that the productions can also have type P a? Most of the time, you don't want a production to have this type: you'd have to write explicit returnPs everywhere. However, there may be a few rules in a grammar that need to get at the monad, so Happy has a special syntax for monadic actions: n : t_1 ... t_n {% <expr> } monadic actions The % in the action indicates that this is a monadic action, with type P a, where a is the real return type of the production. When Happy reduces one of these rules, it evaluates the expression <expr> `then` \result -> <continue parsing> Happy uses result as the real semantic value of the production. During parsing, several monadic actions might be reduced, resulting in a sequence like <expr1> `then` \r1 -> <expr2> `then` \r2 -> ... return <expr3> The monadic actions are performed in the order that they are reduced. If we consider the parse as a tree, then reductions happen in a depth-first left-to-right manner. 
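As a small illustration of this ordering (a sketch using invented rule and constructor names, and assuming the grammar declares %monad { IO }), a production can log each of its reductions as a side effect:

%monad { IO }

Exp : Exp '+' Term {% do putStrLn "reducing Exp '+' Term"
                         return (Plus $1 $3) }
    | Term         { Term $1 }

Parsing 1 + 2 + 3 with such a grammar prints the message twice, once per reduction of the '+' production, with the reduction for 1 + 2 logged first.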
The great thing about adding a monad to your parser is that it doesn't impose any performance overhead for normal reductions - only the monadic ones are translated like this. Take a look at the Haskell parser for a good illustration of how to use a monad in your parser: it contains examples of all the principles discussed in this section, namely parse errors, a threaded lexer, line/column numbers, and state communication between the parser and lexer. The following sections consider a couple of uses for monadic parsers, and describe how to also thread the monad through the lexical analyser. Handling Parse Errors parse errors handling It's not very convenient to just call error when a parse error is detected: in a robust setting, you'd like the program to recover gracefully and report a useful error message to the user. Exceptions (of which errors are a special case) are normally implemented in Haskell by using an exception monad, something like: data E a = Ok a | Failed String thenE :: E a -> (a -> E b) -> E b m `thenE` k = case m of Ok a -> k a Failed e -> Failed e returnE :: a -> E a returnE a = Ok a failE :: String -> E a failE err = Failed err catchE :: E a -> (String -> E a) -> E a catchE m k = case m of Ok a -> Ok a Failed e -> k e This monad just uses a string as the error type. The functions thenE and returnE are the usual bind and return operations of the monad, failE raises an error, and catchE is a combinator for handling exceptions. We can add this monad to the parser with the declaration %monad { E } { thenE } { returnE } Now, without changing the grammar, we can change the definition of parseError and have something sensible happen for a parse error: parseError tokens = failE "Parse error" The parser now raises an exception in the monad instead of bombing out on a parse error. We can also generate errors during parsing. There are times when it is more convenient to parse a more general language than that which is actually intended, and check it later. An example comes from Haskell, where the precedence values in infix declarations must be between 0 and 9: prec :: { Int } : int {% if $1 < 0 || $1 > 9 then failE "Precedence out of range" else returnE $1 } The monadic action allows the check to be placed in the parser itself, where it belongs. Threaded Lexers lexer, threaded monadic lexer Happy allows the monad concept to be extended to the lexical analyser, too. This has several useful consequences: Lexical errors can be treated in the same way as parse errors, using an exception monad. parse errors lexical Information such as the current file and line number can be communicated between the lexer and parser. General state communication between the parser and lexer - for example, implementation of the Haskell layout rule requires this kind of interaction. IO operations can be performed in the lexer - this could be useful for following import/include declarations for instance. A monadic lexer is requested by adding the following declaration to the grammar file: %lexer { <lexer> } { <eof> } %lexer where <lexer> is the name of the lexical analyser function, and <eof> is a token that is to be treated as the end of file. When using a monadic lexer, the parser no longer reads a list of tokens. Instead, it calls the lexical analysis function for each new token to be read. This has the side effect of eliminating the intermediate list of tokens, which is a slight performance win. The type of the main parser function is now just P a - the input is being handled completely within the monad. 
The type of parseError becomes Token -> P a; that is it takes Happy's current lookahead token as input. This can be useful, because the error function probably wants to report the token at which the parse error occurred, and otherwise the lexer would have to store this token in the monad. The lexical analysis function must have the following type: lexer :: (Token -> P a) -> P a where P is the monad type constructor declared with %monad, and a can be replaced by the parser return type if desired. You can see from this type that the lexer takes a continuation as an argument. The lexer is to find the next token, and pass it to this continuation to carry on with the parse. Obviously, we need to keep track of the input in the monad somehow, so that the lexer can do something different each time it's called! Let's take the exception monad above, and extend it to add the input string so that we can use it with a threaded lexer. data ParseResult a = Ok a | Failed String type P a = String -> ParseResult a thenP :: P a -> (a -> P b) -> P b m `thenP` k = \s -> case m s of Ok a -> k a s Failed e -> Failed e returnP :: a -> P a returnP a = \s -> Ok a failP :: String -> P a failP err = \s -> Failed err catchP :: P a -> (String -> P a) -> P a catchP m k = \s -> case m s of Ok a -> Ok a Failed e -> k e s Notice that this isn't a real state monad - the input string just gets passed around, not returned. Our lexer will now look something like this: lexer :: (Token -> P a) -> P a lexer cont s = ... lexical analysis code ... cont token s' the lexer grabs the continuation and the input string, finds the next token token, and passes it together with the remaining input string s' to the continuation. We can now indicate lexical errors by ignoring the continuation and calling failP "error message" s within the lexer (don't forget to pass the input string to make the types work out). This may all seem a bit weird. Why, you ask, doesn't the lexer just have type P Token? It was done this way for performance reasons - this formulation sometimes means that you can use a reader monad instead of a state monad for P, and the reader monad might be faster. It's not at all clear that this reasoning still holds (or indeed ever held), and it's entirely possible that the use of a continuation here is just a misfeature. If you want a lexer of type P Token, then just define a wrapper to deal with the continuation: lexwrap :: (Token -> P a) -> P a lexwrap cont = real_lexer `thenP` \token -> cont token Monadic productions with %lexer The {% ... } actions work fine with %lexer, but additionally there are two more forms which are useful in certain cases. Firstly: n : t_1 ... t_n {%^ <expr> } In this case, <expr> has type Token -> P a. That is, Happy passes the current lookahead token to the monadic action <expr>. This is a useful way to get hold of Happy's current lookahead token without having to store it in the monad. n : t_1 ... t_n {%% <expr> } This is a slight variant on the previous form. The type of <expr> is the same, but in this case the lookahead token is actually discarded and a new token is read from the input. This can be useful when you want to change the next token and continue parsing. Line Numbers line numbers %newline Previous versions of Happy had a %newline directive that enabled simple line numbers to be counted by the parser and referenced in the actions. We warned you that this facility may go away and be replaced by something more general, well guess what? 
:-) Line numbers can now be dealt with quite straightforwardly using a monadic parser/lexer combination. Ok, we have to extend the monad a bit more: type LineNumber = Int type P a = String -> LineNumber -> ParseResult a getLineNo :: P LineNumber getLineNo = \s l -> Ok l (the rest of the functions in the monad follow by just adding the extra line number argument in the same way as the input string). Again, the line number is just passed down, not returned: this is OK because of the continuation-based lexer that can change the line number and pass the new one to the continuation. The lexer can now update the line number as follows: lexer cont s = case s of '\n':s -> \line -> lexer cont s (line + 1) ... rest of lexical analysis ... It's as simple as that. Take a look at Happy's own parser if you have the sources lying around, it uses a monad just like the one above. Reporting the line number of a parse error is achieved by changing parseError to look something like this: parseError :: Token -> P a parseError = getLineNo `thenP` \line -> failP (show line ++ ": parse error") We can also get hold of the line number during parsing, to put it in the parsed data structure for future reference. A good way to do this is to have a production in the grammar that returns the current line number: lineno :: { LineNumber } : {- empty -} {% getLineNo } The semantic value of lineno is the line number of the last token read - this will always be the token directly following the lineno symbol in the grammar, since Happy always keeps one lookahead token in reserve. Summary The types of various functions related to the parser are dependent on what combination of %monad and %lexer directives are present in the grammar. For reference, we list those types here. In the following types, t is the return type of the parser. A type containing a type variable indicates that the specified function must be polymorphic. type of parseError type of parser type of lexer No <literal>%monad</literal> or <literal>%lexer</literal> parse :: [Token] -> t parseError :: [Token] -> a with <literal>%monad</literal> parse :: [Token] -> P t parseError :: [Token] -> P a with <literal>%lexer</literal> parse :: T t parseError :: Token -> T a lexer :: (Token -> T a) -> T a where the type constructor T is whatever you want (usually T a = String -> a). I'm not sure if this is useful, or even if it works properly. with <literal>%monad</literal> and <literal>%lexer</literal> parse :: P t parseError :: Token -> P a lexer :: (Token -> P a) -> P a The Error Token error token Happy supports a limited form of error recovery, using the special symbol error in a grammar file. When Happy finds a parse error during parsing, it automatically inserts the error symbol; if your grammar deals with error explicitly, then it can detect the error and carry on. For example, the Happy grammar for Haskell uses error recovery to implement Haskell layout. The grammar has a rule that looks like this: close : '}' { () } | error { () } This says that a close brace in a layout-indented context may be either a curly brace (inserted by the lexical analyser), or a parse error. This rule is used to parse expressions like let x = e in e': the layout system inserts an open brace before x, and the occurrence of the in symbol generates a parse error, which is interpreted as a close brace by the above rule. yacc Note for yacc users: this form of error recovery is strictly more limited than that provided by yacc. 
During a parse error condition, yacc attempts to discard states and tokens in order to get back into a state where parsing may continue; Happy doesn't do this. The reason is that normal yacc error recovery is notoriously hard to describe, and the semantics depend heavily on the workings of a shift-reduce parser. Furthermore, different implementations of yacc appear to implement error recovery differently. Happy's limited error recovery on the other hand is well-defined, and is just sufficient to implement the Haskell layout rule (which is why it was added in the first place). Generating Multiple Parsers From a Single Grammar It is often useful to use a single grammar to describe multiple parsers, where each parser has a different top-level non-terminal, but parts of the grammar are shared between parsers. A classic example of this is an interpreter, which needs to be able to parse both entire files and single expressions: the expression grammar is likely to be identical for the two parsers, so we would like to use a single grammar but have two entry points. Happy lets you do this by allowing multiple %name directives in the grammar file. The %name directive takes an optional second parameter specifying the top-level non-terminal for this parser, so we may specify multiple parsers like so:

%name parse1 non-terminal1
%name parse2 non-terminal2

Happy will generate from this a module which defines two functions parse1 and parse2, which parse the grammars given by non-terminal1 and non-terminal2 respectively. Each parsing function will of course have a different type, depending on the type of the appropriate non-terminal. Copyright 2004, University of Durham, Paul Callaghan, Ben Medlock. Generalized LR Parsing This chapter explains how to use the GLR parsing extension, which allows Happy to parse ambiguous grammars and produce useful results. This extension is triggered with the --glr flag, which causes Happy to use a different driver for the LALR(1) parsing tables. The result of parsing is a structure which encodes compactly all of the possible parses. There are two options for how semantic information is combined with the structural information. This extension was developed by Paul Callaghan and Ben Medlock (University of Durham). It is based on the structural parser implemented in Medlock's undergraduate project, but significantly extended and improved by Callaghan. Bug reports, comments, questions etc should be sent to P.C.Callaghan@durham.ac.uk. Further information can be found on Callaghan's GLR parser page. Introduction Here's an ambiguous grammar. It has no information about the associativity of +, so for example, 1+2+3 can be parsed as (1+(2+3)) or ((1+2)+3). In conventional mode, Happy would complain about a shift/reduce conflict, although it would generate a parser which always shifts in such a conflict, and hence would produce only the first alternative above.

E -> E + E
E -> i    -- any integer

GLR parsing will accept this grammar without complaint, and produce a result which encodes both alternatives simultaneously. Now consider the more interesting example of 1+2+3+4, which has five distinct parses -- try to list them! You will see that some of the subtrees are identical. A further property of the GLR output is that such sub-results are shared, hence efficiently represented: there is no combinatorial explosion. Below is the simplified output of the GLR parser for this example.
Root (0,7,G_E)
(0,1,G_E) => [[(0,1,Tok '1')]]
(0,3,G_E) => [[(0,1,G_E),(1,2,Tok '+'),(2,3,G_E)]]
(0,5,G_E) => [[(0,1,G_E),(1,2,Tok '+'),(2,5,G_E)]
             ,[(0,3,G_E),(3,4,Tok '+'),(4,5,G_E)]]
(0,7,G_E) => [[(0,3,G_E),(3,4,Tok '+'),(4,7,G_E)]
             ,[(0,1,G_E),(1,2,Tok '+'),(2,7,G_E)]
             ,[(0,5,G_E),(5,6,Tok '+'),(6,7,G_E)]]
(2,3,G_E) => [[(2,3,Tok '2')]]
(2,5,G_E) => [[(2,3,G_E),(3,4,Tok '+'),(4,5,G_E)]]
(2,7,G_E) => [[(2,3,G_E),(3,4,Tok '+'),(4,7,G_E)]
             ,[(2,5,G_E),(5,6,Tok '+'),(6,7,G_E)]]
(4,5,G_E) => [[(4,5,Tok '3')]]
(4,7,G_E) => [[(4,5,G_E),(5,6,Tok '+'),(6,7,G_E)]]
(6,7,G_E) => [[(6,7,Tok '4')]]

This is a directed, acyclic and-or graph. The node "names" are of the form (a,b,c), where a and b are the start and end points (as positions in the input string) and c is a category (or name of grammar rule). For example, (2,7,G_E) spans positions 2 to 7 and contains analyses which match the E grammar rule. Such analyses are given as a list of alternatives (disjunctions), each corresponding to some use of a production of that category; each alternative is in turn a conjunction of sub-analyses, each represented as a node in the graph or an instance of a token. Hence (2,7,G_E) contains two alternatives, one which has (2,3,G_E) as its first child and the other with (2,5,G_E) as its first child, respectively corresponding to the sub-analyses (2+(3+4)) and ((2+3)+4). Both alternatives have the token + as their second child, but note that they are different occurrences of + in the input! We strongly recommend looking at such results in graphical form to understand these points. If you build the expr-eval example in the directory examples/glr (NB you need to use GHC for this, unless you know how to use the equivalent flag for Hugs), running the example will produce a file which can be viewed with the daVinci graph visualization tool. (See the daVinci web site for more information. Educational use licenses are currently available without charge.) The GLR extension also allows semantic information to be attached to productions, as in conventional Happy, although there are further issues to consider. Two modes are provided, one for simple applications and one for more complex use. See the section on including semantic results below. The extension is also integrated with Happy's token handling, e.g. extraction of information from tokens. One key feature of this implementation in Haskell is that its main result is a graph. Other implementations effectively produce a list of trees, but this limits practical use to small examples. For large and interesting applications, a graph is essential due to the large number of possibilities and the need to analyse the structure of the ambiguity. Converting the graph to trees could produce huge numbers of results and would lose information about sharing etc. One final comment. You may have learnt through using yacc-style tools that ambiguous grammars are to be avoided, and that ambiguity is something that appears only in Natural Language processing. This is definitely not true. Many interesting grammars are ambiguous, and with GLR tools they can be used effectively. We hope you enjoy exploring this fascinating area! Basic use of a Happy-generated GLR parser This section explains how to generate and use a GLR parser to produce structural results. Please check the examples for further information. Discussion of semantic issues comes later; see the section on including semantic results. Overview The process of generating a GLR parser is broadly the same as for standard Happy.
You write a grammar specification, run Happy on this to generate some Haskell code, then compile and link this into your program. An alternative to using Happy directly is to use the BNF Converter tool by Markus Forsberg, Peter Gammie, Michael Pellauer and Aarne Ranta. This tool creates an abstract syntax, grammar, pretty-printer and other useful items from a single grammar formalism, thus it saves a lot of work and improves maintainability. The current output of BNFC can be used with GLR mode now with just a few small changes, but from January 2005 we expect to have a fully-compatible version of BNFC. Most of the features of Happy still work, but note the important points below. module header The GLR parser is generated in TWO files, one for data and one for the driver. This is because the driver code needs to be optimized, but for large parsers with lots of data, optimizing the data tables too causes compilation to be too slow. Given a file Foo.y, the file FooData.hs, containing the data module, is generated with basic type information, the parser tables, and the header and tail code that was included in the parser specification. Note that Happy can automatically generate the necessary module declaration statements, if you do not choose to provide one in the grammar file. But, if you do choose to provide the module declaration statement, then the name of the module will be parsed and used as the name of the driver module. The parsed name will also be used to form the name of the data module, but with the string Data appended to it. The driver module, which is to be found in the file Foo.hs, will not contain any other user-supplied text besides the module name. Do not bother to supply any export declarations in your module declaration statement: they will be ignored and dropped, in favor of the standard export declaration. export of lexer You can declare a lexer (and error token) with the %lexer directive as normal, but the generated parser does NOT call this lexer automatically. The action of the directive is only to export the lexer function to the top level. This is because some applications need finer control of the lexing process. precedence information This still works, but note the reasons. The precedence and associativity declarations are used in Happy's LR table creation to resolve certain conflicts. It does this by retaining the actions implied by the declarations and removing the ones which clash with these. The GLR parser back-end then produces code from these filtered tables, hence the rejected actions are never considered by the GLR parser. Hence, declaring precedence and associativity is still a good thing, since it avoids a certain amount of ambiguity that the user knows how to remove. monad directive There is some support for monadic parsers. The "tree decoding" mode (see ) can use the information given in the %monad declaration to monadify the decoding process. This is explained in more detail in . Note: the generated parsers don't include Ashley Yakeley's monad context information yet. It is currently just ignored. If this is a problem, email and I'll make the changes required. parser name directive This has no effect at present. It will probably remain this way: if you want to control names, you could use qualified import. type information on non-terminals The generation of semantic code relies on type information given in the grammar specification. If you don't give an explicit signature, the type () is assumed. 
If you get type clashes mentioning () you may need to add type annotations. Similarly, if you don't supply code for the semantic rule portion, then the value () is used.

error symbol in grammars, and recovery

No attempt to implement this yet. Any use of error in grammars is thus ignored, and parse errors will eventually mean a parse will fail.

the token type

The type used for tokens must be in the Ord type class (and hence in Eq), plus it is recommended that they are in the Show class too. The ordering is required for the implementation of ambiguity packing. It may be possible to relax this requirement, but it is probably simpler just to require instances of the type classes. Please tell us if this is a problem.

The main function

The driver file exports a function doParse :: [[UserDefTok]] -> GLRResult. If you are using several parsers, use qualified naming to distinguish them. UserDefTok is a synonym for the type declared with the %tokentype directive.

The input

The input to doParse is a list of lists of token values. The outer level represents the sequence of input symbols, and the inner list represents ambiguity in the tokenisation of each input symbol. For example, the word "run" can be at least a noun or a verb, hence the inner list will contain at least two values. If your tokens are not ambiguous, you will need to convert each token to a singleton list before parsing.

The Parse Result

The parse result is expressed with the following types. A successful parse yields a forest (explained below) and a single root node for the forest. A parse may fail for one of two reasons: running out of input or a (global) parse error. A global parse error means that it was not possible to continue parsing any of the live alternatives; this is different from a local error, which simply means that the current alternative dies and we try some other alternative. In both error cases, the forest at failure point is returned, since it may contain useful information. Unconsumed tokens are returned when there is a global parse error.

type ForestId = (Int,Int,GSymbol)
data GSymbol  = <... automatically generated ...>
type Forest   = FiniteMap ForestId [Branch]
type RootNode = ForestId
type Tokens   = [[(Int, GSymbol)]]
data Branch   = Branch {b_sem :: GSem, b_nodes :: [ForestId]}
data GSem     = <... automatically generated ...>

data GLRResult
  = ParseOK    RootNode Forest    -- forest with root
  | ParseError Tokens   Forest    -- partial forest with bad input
  | ParseEOF            Forest    -- partial forest (missing input)

Conceptually, the parse forest is a directed, acyclic and-or graph. It is represented by a mapping of ForestIds to lists of possible analyses. The FiniteMap type is used to provide efficient and convenient access. The ForestId type identifies nodes in the graph, named by the range of input they span and the category of analysis they license. GSymbol is generated automatically as a union of the names of grammar rules (prefixed by G_ to avoid name clashes) and of tokens and an EOF symbol. Tokens are wrapped in the constructor HappyTok :: UserDefTok -> GSymbol. The Branch type represents a match for some right-hand side of a production, containing semantic information (see below) and a list of sub-analyses. Each of these is a node in the graph. Note that tokens are represented as childless nodes that span one input position. Empty productions will appear as childless nodes that start and end at the same position.

Compiling the parser

Happy will generate two files, and these should be compiled as normal Haskell files. (A hedged sketch of driving the compiled parser is given below.)
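The following is a minimal sketch of calling the driver, assuming an unambiguous token stream (so each token is wrapped as a singleton list); the function name parseToks is invented for illustration:

parseToks :: [UserDefTok] -> Either String (RootNode, Forest)
parseToks toks =
  case doParse (map (:[]) toks) of
    ParseOK root forest -> Right (root, forest)   -- successful parse
    ParseError _ _      -> Left "global parse error"
    ParseEOF _          -> Left "unexpected end of input"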
If speed is an issue, then you should use the flags etc with the driver code, and if feasible, with the parser tables too. You can also use the flag to trigger certain GHC-specific optimizations. At present, this just causes use of unboxed types in the tables and in some key code. Using this flag causes relevant GHC option pragmas to be inserted into the generated code, so you shouldn't have to use any strange flags (unless you want to...).

Including semantic results

This section discusses the options for including semantic information in grammars.

Forms of semantics

Semantic information may be attached to productions in the conventional way, but when more than one analysis is possible, the use of the semantic information must change. Two schemes have been implemented, which we call tree decoding and label decoding. The former is for simple applications, where there is not much ambiguity and hence where the effective unpacking of the parse forest isn't a factor. This mode is quite similar to the standard mode in Happy. The latter is for serious applications, where sharing is important and where processing of the forest (eg filtering) is needed. Here, the emphasis is on providing rich labels in the nodes of the parse forest, to support such processing. The default mode is labelling. If you want the tree decode mode, use the flag.

Tree decoding

Tree decoding corresponds to unpacking the parse forest to individual trees and collecting the list of semantic results computed from each of these. It is a mode intended for simple applications, where there is limited ambiguity. You may access semantic results from components of a reduction using the dollar variables. As a working example, the following is taken from the expr-tree grammar in the examples. Note that the type signature is required, else the types in use can't be determined by the parser generator.

E :: {Int}            -- type signature needed
  : E '+' E   { $1 + $3 }
  | E '*' E   { $1 * $3 }
  | i         { $1 }

This mode works by converting each of the semantic rules into functions (abstracted over the dollar variables mentioned), and labelling each Branch created from a reduction of that rule with the function value. This amounts to delaying the action of the rule, since we must wait until we know the results of all of the sub-analyses before computing any of the results. (Certain cases of packing can add new analyses at a later stage.)

At the end of parsing, the functions are applied across relevant sub-analyses via a recursive descent. The main interface to this is via the class and entry function below. Typically, decode should be called on the root of the forest, also supplying a function which maps node names to their list of analyses (typically a partial application of lookup in the forest value). The result is a list of semantic values. Note that the context of the call to decode should (eventually) supply a concrete type to allow selection of the appropriate instance. I.e., you have to indicate in some way what type the semantic result should have. Decode_Result a is a synonym generated by Happy: for non-monadic semantics, it is equivalent to a; when monads are in use, it becomes the declared monad type. See the full expr-eval example for more information.

class TreeDecode a where
  decode_b :: (ForestId -> [Branch]) -> Branch -> [Decode_Result a]

decode :: TreeDecode a => (ForestId -> [Branch]) -> ForestId -> [Decode_Result a]
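To make the calling convention concrete, here is a hedged sketch of driving decode for the E :: {Int} grammar above. The name evalAll is invented; lookupFM is the lookup function of the old FiniteMap interface used by the generated code, and for a non-monadic grammar Decode_Result Int is just Int, so the [Int] signature fixes the TreeDecode instance:

evalAll :: Forest -> RootNode -> [Int]
evalAll forest root = decode look root
  where
    -- map a node name to its list of analyses; absent nodes give []
    look node = case lookupFM forest node of
                  Just bs -> bs
                  Nothing -> []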
The GLR parser generator identifies the types involved in each semantic rule, hence the types of the functions, then creates a union containing distinct types. Values of this union are stored in the branches. (The union is actually a bit more complex: it must also distinguish patterns of dollar-variable usage, eg a function \x y -> x + y could be applied to the first and second constituents, or to the first and third.) The parser generator also creates instances of the TreeDecode class, which unpacks the semantic function and applies it across the decodings of the possible combinations of children. Effectively, it does a cartesian product operation across the lists of semantic results from each of the children. Eg [1,2] "+" [3,4] produces [4,5,5,6]. Information is extracted from token values using the patterns supplied by the user when declaring tokens and their Haskell representation, so the dollar-dollar convention works also.

The decoding process could be made more efficient by using memoisation techniques, but this hasn't been implemented, since we believe the other (label) decoding mode is more useful. (If someone sends in a patch, we may include it in a future release -- but this might be tricky, eg require higher-order polymorphism? Plus, are there other ways of using this form of semantic function?)

Label decoding

The labelling mode aims to label branches in the forest with information that supports subsequent processing, for example the filtering and prioritisation of analyses prior to extraction of favoured solutions. As above, code fragments are given in braces and can contain dollar-variables. But these variables are expanded to node names in the graph, with the intention of easing navigation. The following grammar is from the expr-tree example.

E :: {Tree ForestId Int}
  : E '+' E   { Plus $1 $3 }
  | E '*' E   { Times $1 $3 }
  | i         { Const $1 }

Here, the semantic values provide more meaningful labels than the plain structural information. In particular, only the interesting parts of the branch are represented, and the programmer can clearly select or label the useful constituents if required. There is no need to remember that it is the first and third child in the branch which we need to extract, because the label only contains those values (the `noise' has been dropped). Consider also the difference between concrete and abstract syntax. The labels are oriented towards abstract syntax.

Tokens are handled slightly differently here: when they appear as children in a reduction, their informational content can be extracted directly, hence the Const value above will be built with the Int value from the token, not some ForestId.

Note the useful technique of making the label types polymorphic in the position used for forest indices. This allows replacement at a later stage with more appropriate values, eg. inserting lists of actual subtrees from the final decoding.

Use of these labels is supported by a type class LabelDecode, which unpacks values of the automatically-generated union type GSem to the original type(s). The parser generator will create appropriate instances of this class, based on the type information in the grammar file. (Note that omitting type information leads to a default of ().)
Observe that use of the labels is often like traversing an abstract syntax, and the structure of the abstract syntax type usually constrains the types of constituents; so once the overall type is fixed (eg. with a type cast or signature) then there are no problems with resolution of class instances.

class LabelDecode a where
  unpack :: GSem -> a

Internally, the semantic values are packed in a union type as before, but there is no direct abstraction step. Instead, the ForestId values (from the dollar-variables) are bound when the corresponding branch is created from the list of constituent nodes. At this stage, token information is also extracted, using the patterns supplied by the user when declaring the tokens.

Monadic tree decoding

You can use the %monad directive in the tree-decode mode. Essentially, the decoding process now creates a list of monadic values, using the monad type declared in the directive. The default handling of the semantic functions is to apply the relevant return function to the value being returned. You can override this using the {% ... } convention. The declared (>>=) function is used to assemble the computations.

Note that no attempt is made to share the results of monadic computations from sub-trees. (You could possibly do this by supplying a memoising lookup function for the decoding process.) Hence, the usual behaviour is that decoding produces whole monadic computations, each part of which is computed afresh (in depth-first order) when the whole is computed. Hence you should take care to initialise any relevant state before computing the results from multiple solutions.

This facility is experimental, and we welcome comments or observations on the approach taken! An example is provided (examples/glr/expr-monad). It is the standard example of arithmetic expressions, except that the IO monad is used, and a user exception is thrown when the second argument to addition is an odd number. Running this example will show a zero (from the exception handler) instead of the expected number amongst the results from the other parses.

Further information

Other useful information...

The GLR examples

The directory examples/glr contains several examples, from the small to the large. Please consult these or use them as a base for your experiments.

Viewing forests as graphs

If you run the examples with GHC, each run will produce a file out.daVinci. This is a graph in the format expected by the daVinci graph visualization tool. (See for more information. Educational use licenses are currently available without charge.)

We highly recommend looking at graphs of parse results - it really helps to understand the results. The graph files are created with Sven Panne's library for communicating with daVinci, supplemented with some extensions due to Callaghan. Copies of this code are included in the examples directory, for convenience. If you are trying to view large and complex graphs, contact Paul Callaghan (there are tools and techniques to make the graphs more manageable).

Some Applications of GLR parsing

GLR parsing (and related techniques) aren't just for badly written grammars or for things like natural language (NL) where ambiguity is inescapable. There are applications where ambiguity can represent possible alternatives in pattern-matching tasks, and the flexibility of these parsing techniques and the resulting graphs support deep analyses. Below, we briefly discuss some examples, a mixture from our recent work and from the literature.
Gene sequence analysis

Combinations of structures within gene sequences can be expressed as a grammar, for example a "start" combination followed by a "promoter" combination then the gene proper. A recent undergraduate project has used this GLR implementation to detect candidate matches in data, and then to filter these matches with a mixture of local and global information.

Rhythmic structure in poetry

Rhythmic patterns in (English) poetry obey certain rules, and in more modern poetry can break rules in particular ways to achieve certain effects. The standard rhythmic patterns (eg. iambic pentameter) can be encoded as a grammar, and deviations from the patterns also encoded as rules. The neutral reading can be parsed with this grammar, to give a forest of alternative matches. The forest can be analysed to give a preferred reading, and to highlight certain technical features of the poetry. An undergraduate project in Durham has used this implementation for this purpose, with promising results.

Compilers -- instruction selection

Recent work has phrased the translation problem in compilers from intermediate representation to an instruction set for a given processor as a matching problem. Different constructs at the intermediate level can map to several combinations of machine instructions. This knowledge can be expressed as a grammar, and instances of the problem solved by parsing. The parse forest represents competing solutions, and allows selection of optimum solutions according to various measures.

Robust parsing of ill-formed input

The extra flexibility of GLR parsing can simplify parsing of formal languages where a degree of `informality' is allowed -- for example, HTML parsing. Modern browsers contain complex parsers which are designed to try to extract useful information from HTML text which doesn't follow the rules precisely, eg missing start tags or missing end tags. HTML with missing tags can be described by an ambiguous grammar, and it should be a simple matter to extract a usable interpretation from a forest of parses. Notice the technique: we widen the scope of the grammar, parse with GLR, then extract a reasonable solution. This is arguably simpler than pushing an LR(1) or LL(1) parser past its limits, and also more maintainable.

Natural Language Processing

Ambiguity is inescapable in the syntax of most human languages. In realistic systems, parse forests are useful to encode competing analyses in an efficient way, and they also provide a framework for further analysis and disambiguation. Note that ambiguity can have many forms, from simple phrase attachment uncertainty to more subtle forms involving mixtures of word senses. If some degree of ungrammaticality is to be tolerated in a system, which can be done by extending the grammar with productions incorporating common forms of infelicity, the degree of ambiguity increases further. For systems used on arbitrary text, such as newspapers, it is not uncommon that many sentences permit several hundred or more analyses. With such grammars, parse forest techniques are essential. Many recent NLP systems use such techniques, including Durham's earlier LOLITA system -- which was mostly written in Haskell.

Technical details

The original implementation was developed by Ben Medlock, as his undergraduate final year project, using ideas from Peter Ljungloef's Licentiate thesis (see , and we recommend the thesis for its clear analysis of parsing algorithms).
Ljungloef's version produces lists of parse trees, but Medlock adapted this to produce an explicit graph containing parse structure information. He also incorporated the code into Happy. After Medlock's graduation, Callaghan extended the code to incorporate semantic information, and made several improvements to the original code, such as improved local packing and support for hidden left recursion. The performance of the code was significantly improved, after changes of representation (eg to a chart-style data structure) and technique. Medlock's code was also used in several student projects, including analysis of gene sequences (Fischer) and analysis of rhythmic patterns in poetry (Henderson).

The current code implements the standard GLR algorithm extended to handle hidden left recursion. Such recursion, as in the grammar below from Rekers [1992], causes the standard algorithm to loop, because the empty reduction A -> is always possible and the LR parser will not change state. Alternatively, there is a problem because an unknown (at the start of parsing) number of A items are required, to match the number of i tokens in the input.

S -> A Q i | +
A ->

The solution to this is not surprising. Problematic recursions are detected as zero-span reductions in a state which has a goto table entry looping to itself. A special symbol is pushed to the stack on the first such reduction, and such reductions are done at most once for any token alternative for any input position. When popping from the stack, if the last token being popped is such a special symbol, then two stack tails are returned: one corresponding to a conventional pop (which removes the symbol) and the other to a duplication of the special symbol (the stack is not changed, but a copy of the symbol is returned). This allows sufficient copies of the empty symbol to appear on some stack, hence allowing the parse to complete.

The forest is held in a chart-style data structure, and this supports local ambiguity packing (chart parsing is discussed in Ljungloef's thesis, among other places). A limited amount of packing of live stacks is also done, to avoid some repetition of work.

[Rekers 1992] Parser Generation for Interactive Environments, PhD thesis, University of Amsterdam, 1992.

The --filter option

You might have noticed this GLR-related option. It is an experimental feature intended to restrict the amount of structure retained in the forest by discarding everything not required for the semantic results. It may or may not work, and may be fixed in a future release.

Limitations and future work

The parser supports hidden left recursion, but makes no attempt to handle cyclic grammars that have rules which do not consume any input. If you have a grammar like this, for example with rules like S -> S or S -> A S | x; A -> empty, the implementation will loop until you run out of stack - but if it is going to happen, it often happens quite quickly!

The code has been used and tested frequently over the past few years, including being used in several undergraduate projects. It should be fairly stable, but as usual, can't be guaranteed bug-free. One day I will write it in Epigram!

If you have suggestions for improvements, or requests for features, please contact Paul Callaghan. There are some changes I am considering, and some views and/or encouragement from users will be much appreciated. Further information can be found on Callaghan's GLR parser page.
Thanks and acknowledgements

Many thanks to the people who have used and tested this software in its various forms, including Julia Fischer, James Henderson, and Aarne Ranta.

Attribute Grammars

Introduction

Attribute grammars are a formalism for expressing syntax directed translation of a context-free grammar. An introduction to attribute grammars may be found here. There is also an article in the Monad Reader about attribute grammars and a different approach to attribute grammars using Haskell here.

The main practical difficulty that has prevented attribute grammars from gaining widespread use involves evaluating the attributes. Attribute grammars generate non-trivial data dependency graphs that are difficult to evaluate using mainstream languages and techniques. The solutions generally involve restricting the form of the grammars or using big hammers like topological sorts. However, a language which supports lazy evaluation, such as Haskell, has no problem forming complex data dependency graphs and evaluating them. The primary intellectual barrier to attribute grammar adoption seems to stem from the fact that most programmers have difficulty with the declarative nature of the specification. Haskell programmers, on the other hand, have already embraced a purely functional language. In short, the Haskell language and community seem like a perfect place to experiment with attribute grammars.

Embedding attribute grammars in Happy is easy because Haskell supports three important features: higher order functions, labeled records, and lazy evaluation. Attributes are encoded as fields in a labeled record. The parse result of each non-terminal in the grammar is a function which takes a record of inherited attributes and returns a record of synthesized attributes. In each production, the attributes of various non-terminals are bound together using let. Finally, at the end of the parse, a distinguished attribute is evaluated to be the final result. Lazy evaluation takes care of evaluating each attribute in the correct order, resulting in an attribute grammar system that is capable of evaluating a fairly large class of attribute grammars. (A small hand-rolled sketch of this encoding is given below.)

Attribute grammars in Happy do not use any language extensions, so the parsers are Haskell 98 (assuming you don't use the GHC specific -g option). Currently, attribute grammars cannot be generated for GLR parsers (it's not exactly clear how these features should interact...).
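The following minimal sketch (all names invented, independent of Happy's actual generated code) illustrates the encoding just described: each node's parse result is a function from a record carrying its inherited attributes to the same record with the synthesized fields filled in, and lazy evaluation orders the work.

data Attrs = Attrs { value :: Int   -- synthesized attribute
                   , depth :: Int   -- inherited attribute
                   }

-- A leaf synthesizes its value from an inherited attribute.
leaf :: Attrs -> Attrs
leaf inh = inh { value = depth inh }

-- An inner node passes the inherited depth down to its children and
-- synthesizes the sum of their values back up.
node :: (Attrs -> Attrs) -> (Attrs -> Attrs) -> Attrs -> Attrs
node l r inh =
  let la = l inh { depth = depth inh + 1 }
      ra = r inh { depth = depth inh + 1 }
  in  inh { value = value la + value ra }

-- At the root, the distinguished attribute is evaluated for the result.
example :: Int   -- evaluates to 5
example = value (node leaf (node leaf leaf) Attrs { value = undefined, depth = 0 })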
Attribute Grammars in Happy

Declaring Attributes

The presence of one or more %attribute directives indicates that a grammar is an attribute grammar. Attributes are calculated properties that are associated with the non-terminals in a parse tree. Each %attribute directive generates a field in the attributes record with the given name and type.

The first %attribute directive in a grammar defines the default attribute. The default attribute is distinguished in two ways: 1) if no attribute specifier is given on an attribute reference, the default attribute is assumed (see ) and 2) the value for the default attribute of the starting non-terminal becomes the return value of the parse.

Optionally, one may specify a type declaration for the attribute record using the %attributetype declaration. This allows you to define the type given to the attribute record and, more importantly, allows you to introduce type variables that can be subsequently used in %attribute declarations. If the %attributetype directive is given without any %attribute declarations, then the %attributetype declaration has no effect.

For example, the following declarations:

%attributetype { MyAttributes a }
%attribute value { a }
%attribute num   { Int }
%attribute label { String }

would generate this attribute record declaration in the parser:

data MyAttributes a = HappyAttributes { value :: a, num :: Int, label :: String }

and value would be the default attribute.

Semantic Rules

In an ordinary Happy grammar, a production consists of a list of terminals and/or non-terminals followed by an uninterpreted code fragment enclosed in braces. With an attribute grammar, the format is very similar, but the braces enclose a set of semantic rules rather than uninterpreted Haskell code. Each semantic rule is either an attribute calculation or a conditional, and rules are separated by semicolons. (Note that semantic rules must not rely on layout, because whitespace alignment is not guaranteed to be preserved.)

Both attribute calculations and conditionals may contain attribute references and/or terminal references. Just like regular Happy grammars, the tokens $1 through $n, where n is the number of symbols in the production, refer to subtrees of the parse. If the referenced symbol is a terminal, then the value of the reference is just the value of the terminal, the same way as in a regular Happy grammar. If the referenced symbol is a non-terminal, then the reference may be followed by an attribute specifier, which is a dot followed by an attribute name. If the attribute specifier is omitted, then the default attribute is assumed (the default attribute is the first attribute appearing in an %attribute declaration). The special reference $$ references the attributes of the current node in the parse tree; it behaves exactly like the numbered references. Additionally, the reference $> always references the rightmost symbol in the production.

An attribute calculation rule is of the form:

<attribute reference> = <Haskell expression>

A rule of this form defines the value of an attribute, possibly as a function of the attributes of $$ (inherited attributes), the attributes of non-terminals in the production (synthesized attributes), or the values of terminals in the production. The value for an attribute can only be defined once for a particular production.

The following rule calculates the default attribute of the current production in terms of the first and second items of the production (a synthesized attribute):

$$ = $1 : $2

This rule calculates the length attribute of a non-terminal in terms of the length of the current non-terminal (an inherited attribute):

$1.length = $$.length + 1

Conditional rules allow the rejection of strings due to context-sensitive properties. All conditional rules have the form:

where <Haskell expression>

For non-monadic parsers, all conditional expressions must be of the same (monomorphic) type. At the end of the parse, the conditionals will be reduced using seq, which gives the grammar an opportunity to call error with an informative message. For monadic parsers, all conditional statements must have type Monad m => m () where m is the monad in which the parser operates. All conditionals will be sequenced at the end of the parse, which allows the conditionals to call fail with an informative message.

The following conditional rule will cause the (non-monadic) parser to fail if the inherited length attribute is not 0.
where if $$.length == 0 then () else error "length not equal to 0"

This conditional is the monadic equivalent:

where unless ($$.length == 0) (fail "length not equal to 0")

Limits of Happy Attribute Grammars

If you are not careful, you can write an attribute grammar which fails to terminate. This generally happens when semantic rules are written which cause a circular dependency on the value of an attribute. Even if the value of the attribute is well-defined (that is, if a fixpoint calculation over attribute values will eventually converge to a unique solution), this attribute grammar system will not evaluate such grammars.

One practical way to overcome this limitation is to ensure that each attribute is always used in either a top-down (inherited) fashion or in a bottom-up (synthesized) fashion. If the calculations are sufficiently lazy, one can "tie the knot" by synthesizing a value in one attribute, and then assigning that value to another, inherited attribute at some point in the parse tree. This technique can be useful for common tasks like building symbol tables for a syntactic scope and making that table available to sub-nodes of the parse.

Example Attribute Grammars

The following two toy attribute grammars may prove instructive. The first is an attribute grammar for the classic context-sensitive grammar { a^n b^n c^n | n >= 0 }. It demonstrates the use of conditionals, inherited and synthesized attributes.

{
module ABCParser (parse) where
}

%tokentype { Char }

%token a { 'a' }
%token b { 'b' }
%token c { 'c' }
%token newline { '\n' }

%attributetype { Attrs a }
%attribute value { a }
%attribute len   { Int }

%name parse abcstring

%%

abcstring
   : alist blist clist newline
        { $$ = $1 ++ $2 ++ $3
        ; $2.len = $1.len
        ; $3.len = $1.len
        }

alist
   : a alist
        { $$ = $1 : $2
        ; $$.len = $2.len + 1
        }
   |    { $$ = []; $$.len = 0 }

blist
   : b blist
        { $$ = $1 : $2
        ; $2.len = $$.len - 1
        }
   |    { $$ = []
        ; where failUnless ($$.len == 0) "blist wrong length"
        }

clist
   : c clist
        { $$ = $1 : $2
        ; $2.len = $$.len - 1
        }
   |    { $$ = []
        ; where failUnless ($$.len == 0) "clist wrong length"
        }

{
happyError = error "parse error"
failUnless b msg = if b then () else error msg
}

This grammar parses binary numbers and calculates their value. It demonstrates the use of inherited and synthesized attributes.

{
module BitsParser (parse) where
}

%tokentype { Char }

%token minus   { '-' }
%token plus    { '+' }
%token one     { '1' }
%token zero    { '0' }
%token newline { '\n' }

%attributetype { Attrs }
%attribute value { Integer }
%attribute pos   { Int }

%name parse start

%%

start
   : num newline { $$ = $1 }

num
   : bits        { $$ = $1        ; $1.pos = 0 }
   | plus bits   { $$ = $2        ; $2.pos = 0 }
   | minus bits  { $$ = negate $2 ; $2.pos = 0 }

bits
   : bit         { $$ = $1
                 ; $1.pos = $$.pos
                 }
   | bits bit    { $$ = $1 + $2
                 ; $1.pos = $$.pos + 1
                 ; $2.pos = $$.pos
                 }

bit
   : zero        { $$ = 0 }
   | one         { $$ = 2^($$.pos) }

{
happyError = error "parse error"
}

Invoking Happy

An invocation of Happy has the following syntax:

$ happy [ options ] filename [ options ]

All the command line options are optional (!) and may occur either before or after the input file name. Options that take arguments may be given multiple times, and the last occurrence will be the value used.

There are two types of grammar files, file.y and file.ly, with the latter observing the reverse comment (or literate) convention (i.e. each code line must begin with the character >, lines which don't begin with > are treated as comments). The examples distributed with Happy are all of the .ly form.
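For instance, using flags that appear elsewhere in this documentation (-i is described in the list below; -a, -g and -c are the back-end options mentioned in the performance tips), a typical invocation might look like:

$ happy -a -g -c -i Calc.ly

which, given the defaults described below, writes the parser to Calc.hs and the grammar diagnostics to Calc.info.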
The flags accepted by Happy are as follows:

-o file, --outfile=file
    Specifies the destination of the generated parser module. If omitted, the parser will be placed in file.hs, where file is the name of the input file with any extension removed.

-i [file], --info[=file]
    Directs Happy to produce an info file containing detailed information about the grammar, parser states, parser actions, and conflicts. Info files are vital during the debugging of grammars. The filename argument is optional (note that there's no space between -i and the filename in the short version), and if omitted the info file will be written to file.info (where file is the input file name with any extension removed).

-p [file], --pretty[=file]
    Directs Happy to produce a file containing a pretty-printed form of the grammar, containing only the productions, without any semantic actions or type signatures. If no file name is provided, then the file name will be computed by replacing the extension of the input file with .grammar.

--template=dir
    Instructs Happy to use this directory when looking for template files: these files contain the static code that Happy includes in every generated parser. You shouldn't need to use this option if Happy is properly configured for your computer.

=name
    Happy prefixes all the symbols it uses internally with either happy or Happy. To use a different string, for example if the use of happy is conflicting with one of your own functions, specify the prefix with this option.

--strict
    NOTE: this option is experimental and may cause unpredictable results. It causes the right hand side of each production (the semantic value) to be evaluated eagerly at the moment the production is reduced. If the lazy behaviour is not required, then using this option will improve performance and may reduce space leaks. Note that the parser as a whole is never lazy - the whole input will always be consumed before any output is produced, regardless of the setting of this flag.

-g, --ghc
    Instructs Happy to generate a parser that uses GHC-specific extensions to obtain faster code.

-c, --coerce
    Use GHC's unsafeCoerce# extension to generate smaller, faster parsers. Type-safety isn't compromised. This option may only be used in conjunction with -g.

-a, --arrays
    Instructs Happy to generate a parser using an array-based shift-reduce parser. When used in conjunction with -g, the arrays will be encoded as strings, resulting in faster parsers. Without -g, standard Haskell arrays will be used.

-d, --debug
    Generate a parser that will print debugging information to stderr at run-time, including all the shifts, reductions, state transitions and token inputs performed by the parser. This option can only be used in conjunction with -a.

--glr
    Generate a GLR parser for ambiguous grammars.

--decode
    Generate simple decoding code for the GLR result.

--filter
    Filter the GLR parse forest with respect to semantic usage.

--help
    Print usage information on standard output, then exit successfully.

--version
    Print version information on standard output, then exit successfully. Note that for legacy reasons -v is supported too, but its use is deprecated; -v will be used for verbose mode when it is actually implemented.

Syntax of Grammar Files

The input to Happy is a text file containing the grammar of the language you want to parse, together with some annotations that help the parser generator make a legal Haskell module that can be included in your program. This section gives the exact syntax of grammar files.
The overall format of the grammar file is given below:

<optional module header>
<directives>
%%
<grammar>
<optional module trailer>

If the name of the grammar file ends in .ly, then it is assumed to be a literate script. All lines except those beginning with a > will be ignored, and the > will be stripped from the beginning of all the code lines. There must be a blank line between each code section (lines beginning with >) and comment section. Grammars not using the literate notation must be in a file with the .y suffix.

Lexical Rules

Identifiers in Happy grammar files must take the following form (using the BNF syntax from the Haskell Report):

id     ::= alpha { idchar }
         | ' { any{^'} | \' } '
         | " { any{^"} | \" } "
alpha  ::= A | B | ... | Z | a | b | ... | z
idchar ::= alpha | 0 | 1 | ... | 9 | _

Module Header

This section is optional, but if included takes the following form:

{ <Haskell module header> }

The Haskell module header contains the module name, exports, and imports. No other code is allowed in the header -- this is because Happy may need to include its own import statements directly after the user defined header.

Directives

This section contains a number of lines of the form:

%<directive name> <argument> ...

The statements here are all annotations to help Happy generate the Haskell code for the grammar. Some of them are optional, and some of them are required.

Token Type

%tokentype { <valid Haskell type> }

(mandatory) The %tokentype directive gives the type of the tokens passed from the lexical analyser to the parser (in order that Happy can supply types for functions and data in the generated parser).

Tokens

%token <name> { <Haskell pattern> }
       <name> { <Haskell pattern> }
       ...

(mandatory) The %token directive is used to tell Happy about all the terminal symbols used in the grammar. Each terminal has a name, by which it is referred to in the grammar itself, and a Haskell representation enclosed in braces. Each of the patterns must be of the same type, given by the %tokentype directive.

The name of each terminal follows the lexical rules for Happy identifiers given above. There are no lexical differences between terminals and non-terminals in the grammar, so it is recommended that you stick to a convention; for example using upper case letters for terminals and lower case for non-terminals, or vice-versa.

Happy will give you a warning if you try to use the same identifier both as a non-terminal and a terminal, or introduce an identifier which is declared as neither.

To save writing lots of projection functions that map tokens to their components, you can include $$ in your Haskell pattern. For example:

%token INT { TokenInt $$ }
       ...

This makes the semantic value of INT refer to the first argument of TokenInt rather than the whole token, eliminating the need for any projection function.

Parser Name

%name <Haskell identifier> [ <non-terminal> ]
...

(optional) The %name directive is followed by a valid Haskell identifier, and gives the name of the top-level parsing function in the generated parser. This is the only function that needs to be exported from a parser module. If the %name directive is omitted, it defaults to happyParse.

The %name directive takes an optional second parameter which specifies the top-level non-terminal which is to be parsed. If this parameter is omitted, it defaults to the first non-terminal defined in the grammar. (A minimal skeleton combining the directives described so far is sketched below.)
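The skeleton below is a hedged, self-contained illustration of the file format and the directives above; the module, parser and token names are invented for the example:

{
module MiniParser (parseMini) where
}

%name parseMini exp
%tokentype { Token }
%token int { TokenInt $$ }

%%

exp : int { $1 }

{
-- Module trailer: auxiliary definitions live here.
data Token = TokenInt Int

-- Non-monadic parsers must supply happyError themselves.
happyError :: [Token] -> a
happyError _ = error "parse error"
}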
Multiple %name directives may be given, specifying multiple parser entry points for this grammar (see ). When multiple %name directives are given, they must all specify explicit non-terminals.

Partial Parsers

%partial <Haskell identifier> [ <non-terminal> ]
...

The %partial directive can be used instead of %name. It indicates that the generated parser should be able to parse an initial portion of the input. In contrast, a parser specified with %name will only parse the entire input.

A parser specified with %partial will stop parsing and return a result as soon as there exists a complete parse, and no more of the input can be parsed. It does this by accepting the parse if it is followed by the error token, rather than insisting that the parse is followed by the end of the token stream (or the eof token in the case of a %lexer parser).

Monad Directive

%monad { <type> } { <then> } { <return> }

(optional) The %monad directive takes three arguments: the type constructor of the monad, the then (or bind) operation, and the return (or unit) operation. The type constructor can be any type with kind * -> *. Monad declarations are described in more detail in .

Lexical Analyser

%lexer { <lexer> } { <eof> }

(optional) The %lexer directive takes two arguments: <lexer> is the name of the lexical analyser function, and <eof> is a token that is to be treated as the end of file. Lexer declarations are described in more detail in .

Precedence declarations

%left <name> ...
%right <name> ...
%nonassoc <name> ...

These declarations are used to specify the precedences and associativity of tokens. The precedence assigned by a %left, %right or %nonassoc declaration is defined to be higher than the precedence assigned by all declarations earlier in the file, and lower than the precedence assigned by all declarations later in the file. The associativity of a token relative to tokens in the same %left, %right, or %nonassoc declaration is to the left, to the right, or non-associative respectively. Precedence declarations are described in more detail in .

Expect declarations

%expect <number>

(optional) More often than not the grammar you write will have conflicts. These conflicts generate warnings. But when you have checked the warnings and made sure that Happy handles them correctly, these warnings are just annoying. The %expect directive gives a way of avoiding them. Declaring %expect n is a way of telling Happy "There are exactly n shift/reduce conflicts and zero reduce/reduce conflicts in this grammar. I promise I have checked them and they are resolved correctly". When processing the grammar, Happy will check the actual number of conflicts against the %expect declaration if any, and if there is a discrepancy then an error will be reported. Happy's %expect directive works exactly like that of yacc.

Error declaration

%error { <identifier> }

Specifies the function to be called in the event of a parse error. The type of <identifier> varies depending on the presence of %lexer (see ) and %errorhandlertype (see the following).

Additional error information

%errorhandlertype (explist | default)

(optional) The expected type of the user-supplied error handler can be augmented with additional information. By default, no information is added, for compatibility with previous versions.
However, if explist is provided with this directive, then the first application will be of type [String], providing a description of possible tokens that would not have failed the parser in place of the token that has caused the error.

Attribute Type Declaration

%attributetype { <valid Haskell type declaration> }

(optional) This directive allows you to declare the type of the attributes record when defining an attribute grammar. If this declaration is not given, Happy will choose a default. This declaration may only appear once in a grammar. Attribute grammars are explained in .

Attribute declaration

%attribute <Haskell identifier> { <valid Haskell type> }

The presence of one or more of these directives declares that the grammar is an attribute grammar. The first attribute listed becomes the default attribute. Each %attribute directive generates a field in the attributes record with the given label and type. If there is an %attributetype declaration in the grammar which introduces type variables, then the type of an attribute may mention any such type variables. Attribute grammars are explained in .

Grammar

The grammar section comes after the directives, separated from them by a double-percent (%%) symbol. This section contains a number of productions, each of which defines a single non-terminal. Each production has the following syntax:

<non-terminal> [ :: { <type> } ]
        : <id> ... {[%] <expression> }
      [ | <id> ... {[%] <expression> } ... ]

The first line gives the non-terminal to be defined by the production and optionally its type (type signatures for productions are discussed in ). Each production has at least one, and possibly many right-hand sides. Each right-hand side consists of zero or more symbols (terminals or non-terminals) and a Haskell expression enclosed in braces.

The expression represents the semantic value of the non-terminal, and may refer to the semantic values of the symbols in the right-hand side using the meta-variables $1 ... $n. It is an error to refer to $i when i is larger than the number of symbols on the right hand side of the current rule. The symbol $ may be inserted literally in the Haskell expression using the sequence \$ (this isn't necessary inside a string or character literal). Additionally, the sequence $> can be used to represent the value of the rightmost symbol.

A semantic value of the form {% ... } is a monadic action, and is only valid when the grammar file contains a %monad directive (). Monadic actions are discussed in .

Remember that all the expressions for a production must have the same type.

Parameterized Productions

Starting from version 1.17.1, Happy supports parameterized productions which provide a convenient notation for capturing recurring patterns in context free grammars. This gives the benefits of something similar to parsing combinators in the context of Happy grammars. This functionality is best illustrated with an example:

opt(p)       : p                  { Just $1 }
             |                    { Nothing }

rev_list1(p) : p                  { [$1] }
             | rev_list1(p) p     { $2 : $1 }

The first production, opt, is used for optional components of a grammar. It is just like p? in regular expressions or EBNF. The second production, rev_list1, is for parsing a list of 1 or more occurrences of p. Parameterized productions are just like ordinary productions, except that they have parameters in parentheses after the production name.
Multiple parameters should be separated by commas:

fst(p,q)  : p q   { $1 }
snd(p,q)  : p q   { $2 }
both(p,q) : p q   { ($1,$2) }

To use a parameterized production, we have to pass values for the parameters, as if we are calling a function. The parameters can be either terminals, non-terminals, or other instantiations of parameterized productions. Here are some examples:

list1(p) : rev_list1(p)   { reverse $1 }
list(p)  : list1(p)       { $1 }
         |                { [] }

The first production uses rev_list1 to define a production that behaves like p+, returning a list of elements in the same order as they occurred in the input. The second one, list, is like p*.

Parameterized productions are implemented as a preprocessing pass in Happy: each instantiation of a production turns into a separate non-terminal, but Happy is careful to avoid generating the same rule multiple times, as this would lead to an ambiguous grammar. Consider, for example, the following parameterized rule:

sep1(p,q) : p list(snd(q,p)) { $1 : $2 }

The rules that would be generated for sep1(EXPR,SEP) are:

sep1(EXPR,SEP)
  : EXPR list(snd(SEP,EXPR))                 { $1 : $2 }

list(snd(SEP,EXPR))
  : list1(snd(SEP,EXPR))                     { $1 }
  |                                          { [] }

list1(snd(SEP,EXPR))
  : rev_list1(snd(SEP,EXPR))                 { reverse $1 }

rev_list1(snd(SEP,EXPR))
  : snd(SEP,EXPR)                            { [$1] }
  | rev_list1(snd(SEP,EXPR)) snd(SEP,EXPR)   { $2 : $1 }

snd(SEP,EXPR)
  : SEP EXPR                                 { $2 }

Note that this is just a normal grammar, with slightly strange names for the non-terminals.

A drawback of the current implementation is that it does not support type signatures for the parameterized productions that depend on the types of the parameters. We plan to implement that in the future --- the current workaround is to omit the type signatures for such rules.

Module Trailer

The module trailer is optional, comes right at the end of the grammar file, and takes the same form as the module header:

{ <Haskell code> }

This section is used for placing auxiliary definitions that need to be in the same module as the parser. In small parsers, it often contains a hand-written lexical analyser too. There is no restriction on what can be placed in the module trailer, and any code in there is copied verbatim into the generated parser file.

Info Files

Happy info files, generated using the -i flag, are your most important tool for debugging errors in your grammar. Although they can be quite verbose, the general concept behind them is quite simple. An info file contains the following information:

- A summary of all shift/reduce and reduce/reduce conflicts in the grammar.

- Under section Grammar, a summary of all the rules in the grammar. These rules correspond directly to your input file, absent the actual Haskell code that is to be run for each rule. A rule is written in the form <non-terminal> -> <id> ...

- Under section Terminals, a summary of all the terminal tokens you may run against, as well as the Haskell pattern which matches against them. This corresponds directly to the contents of your %token directive ().

- Under section Non-terminals, a summary of which rules apply to which productions. This is generally redundant with the Grammar section.

- The primary section States, which describes the state-machine Happy built for your grammar, and all of the transitions for each state.

- Finally, some statistics Grammar Totals at the end of the file.

In general, you will be most interested in the States section, as it will give you information, in particular, about any conflicts your grammar may have.
States

Although Happy does its best to insulate you from the vagaries of parser generation, it's important to know a little about how shift-reduce parsers work in order to be able to interpret the entries in the States section. In general, a shift-reduce parser operates by maintaining a parse stack, which tokens and productions are shifted onto or reduced off of. The parser maintains a state machine, which accepts a token, performs some shift or reduce, and transitions to a new state for the next token. Importantly, these states represent multiple possible productions, because in general the parser does not know what the actual production for the tokens it's parsing is going to be. There's no direct correspondence between the state machine and the input grammar; this is something you have to reverse engineer.

With this knowledge in mind, we can look at two example states from the example grammar from :

State 5

        Exp1 -> Term .                    (rule 5)
        Term -> Term . '*' Factor         (rule 6)
        Term -> Term . '/' Factor         (rule 7)

        in              reduce using rule 5
        '+'             reduce using rule 5
        '-'             reduce using rule 5
        '*'             shift, and enter state 11
        '/'             shift, and enter state 12
        ')'             reduce using rule 5
        %eof            reduce using rule 5

State 9

        Factor -> '(' . Exp ')'           (rule 11)

        let             shift, and enter state 2
        int             shift, and enter state 7
        var             shift, and enter state 8
        '('             shift, and enter state 9

        Exp             goto state 10
        Exp1            goto state 4
        Term            goto state 5
        Factor          goto state 6

For each state, the first set of lines describes the rules which correspond to this state. A period . is inserted in the production to indicate where, if this is indeed the correct production, we would have parsed up to. In state 5, there are multiple rules, so we don't know if we are parsing an Exp1, a multiplication or a division (however, we do know there is a Term on the parse stack); in state 9, there is only one rule, so we know we are definitely parsing a Factor.

The next set of lines specifies the action and state transition that should occur given a token. For example, if in state 5 we process the '*' token, this token is shifted onto the parse stack and we transition to the state corresponding to the rule Term -> Term '*' . Factor (the token we matched determines which state we enter).

Finally, for states which shift on non-terminals, there will be a last set of lines saying what should be done after the non-terminal has been fully parsed; this information is effectively the goto table of the parser. When a reduce occurs, these goto entries are used to determine what the next state should be.

Interpreting conflicts

When you have a conflict, you will see an entry like this in your info file:

State 432

        atype -> SIMPLEQUOTE '[' . comma_types0 ']'   (rule 318)
        sysdcon -> '[' . ']'                          (rule 613)

        '_'             shift, and enter state 60
        'as'            shift, and enter state 16
        ...
        ']'             shift, and enter state 381
                        (reduce using rule 328)
        ...

On large, complex grammars, determining what the conflict is can be a bit of an art, since the state with the conflict may not have enough information to determine why the conflict is occurring. In some cases, the rules associated with the state with the conflict will immediately give you enough guidance to determine what the ambiguous syntax is. For example, in the miniature shift/reduce conflict described in , the conflict looks like this:

State 13

        exp -> exp . '+' exp0                 (rule 1)
        exp0 -> if exp then exp else exp .    (rule 3)
        then            reduce using rule 3
        else            reduce using rule 3
        '+'             shift, and enter state 7
                        (reduce using rule 3)
        %eof            reduce using rule 3

Here, rule 3 makes it easy to imagine that we had been parsing a statement like if 1 then 2 else 3 + 4; the conflict arises from whether or not we should shift (thus parsing as if 1 then 2 else (3 + 4)) or reduce (thus parsing as (if 1 then 2 else 3) + 4).

Sometimes, there's not as much helpful context in the error message; take this abridged example from GHC's parser:

State 49

        type -> btype .                 (rule 281)
        type -> btype . '->' ctype      (rule 284)

        '->'            shift, and enter state 472
                        (reduce using rule 281)

A pair of rules like this doesn't always result in a shift/reduce conflict: to reduce with rule 281 implies that, in some context when parsing the non-terminal type, it is possible for an '->' to occur immediately afterwards (indeed these source rules are factored such that there is no rule of the form ... -> type '->' ...). The best way this author knows how to sleuth this out is to look for instances of the token and check if any of the preceding non-terminals could terminate in a type:

        texp -> exp '->' texp            (500)
        exp -> infixexp '::' sigtype     (414)
        sigtype -> ctype                 (260)
        ctype -> type                    (274)

As it turns out, this shift/reduce conflict results from ambiguity for view patterns, as in the code sample case v of { x :: T -> T ... }.

Tips

This section contains a lot of accumulated lore about using Happy.

Performance Tips

How to make your parser go faster:

- If you are using GHC, generate parsers using the -a -g -c options, and compile them using GHC with the -fglasgow-exts option. This is worth a lot, in terms of compile-time, execution speed and binary size. (Omitting the -a may generate slightly faster parsers, but they will be much bigger.)

- The lexical analyser is usually the most performance-critical part of a parser, so it's worth spending some time optimising this. Profiling tools are essential here. In really dire circumstances, resort to some of the hacks that are used in the Glasgow Haskell Compiler's interface-file lexer.

- Simplify the grammar as much as possible, as this reduces the number of states and reduction rules that need to be applied.

- Use left recursion rather than right recursion wherever possible. While not strictly a performance issue, this affects the size of the parser stack, which is kept on the heap and thus needs to be garbage collected. (A short grammar sketch contrasting the two appears after these tips.)

Compilation-Time Tips

We have found that compiling parsers generated by Happy can take a large amount of time/memory, so here are some tips on making things more sensible:

- Include as little code as possible in the module trailer. This code is included verbatim in the generated parser, so if any of it can go in a separate module, do so.

- Give type signatures for everything (see ). This is reported to improve things by about 50%. If there is a type signature for every single non-terminal in the grammar, then Happy automatically generates type signatures for most functions in the parser.

- Simplify the grammar as much as possible (applies to everything, this one).

- Use a recent version of GHC. Versions from 4.04 onwards have lower memory requirements for compiling Happy-generated parsers.

- Using Happy's -g -a -c options when generating parsers to be compiled with GHC will help considerably.
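The left/right recursion advice above, as a hedged Happy-grammar sketch (non-terminal names invented):

itemsL : itemsL item   { $2 : $1 }   -- left recursive: constant stack space,
       |               { [] }        -- but produces the list reversed

itemsR : item itemsR   { $1 : $2 }   -- right recursive: list in input order,
       |               { [] }        -- but the stack grows with input length

With the left-recursive form, apply reverse where the list is used (or use a rev_list-style parameterized production as shown earlier).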
Finding Type Errors

Finding type errors in grammar files is inherently difficult because the code for reductions is moved around before being placed in the parser. We currently have no way of passing the original filename and line numbers to the Haskell compiler, so there is no alternative but to look at the parser and match the code to the grammar file. An info file (generated by the -i option) can be helpful here.

Type signatures sometimes help by pinning down the particular error to the place where the mistake is made, not half way down the file. For each production in the grammar, there's a bit of code in the generated file that looks like this:

HappyAbsSyn<n> ( E )

where E is the Haskell expression from the grammar file (with $n replaced by happy_var_n). If there is a type signature for this production, then Happy will have taken it into account when declaring the HappyAbsSyn datatype, and errors in E will be caught right here. Of course, the error may really be caused by incorrect use of one of the happy_var_n variables.

(This section will contain more info as we gain experience with creating grammar files. Please send us any helpful tips you find.)

Conflict Tips

Conflicts arise from ambiguities in the grammar. That is, some input sequences may possess more than one parse. Shift/reduce conflicts are benign in the sense that they are easily resolved (Happy automatically selects the shift action, as this is usually the intended one). Reduce/reduce conflicts are more serious. A reduce/reduce conflict implies that a certain sequence of tokens on the input can represent more than one non-terminal, and the parser is uncertain as to which reduction rule to use. It will select the reduction rule uppermost in the grammar file, so if you really must have a reduce/reduce conflict you can select which rule will be used by putting it first in your grammar file.

It is usually possible to remove conflicts from the grammar, but sometimes this is at the expense of clarity and simplicity. Here is a cut-down example from the grammar of Haskell (1.2):

exp  : exp op exp0
     | exp0

exp0 : if exp then exp else exp
     ...
     | atom

atom : var
     | integer
     | '(' exp ')'
     ...

This grammar has a shift/reduce conflict, due to the following ambiguity. In an input such as

if 1 then 2 else 3 + 4

the grammar doesn't specify whether the parse should be

if 1 then 2 else (3 + 4)

or

(if 1 then 2 else 3) + 4

and the ambiguity shows up as a shift/reduce conflict on reading the 'op' symbol. In this case, the first parse is the intended one (the 'longest parse' rule), which corresponds to the shift action.

Removing this conflict relies on noticing that the expression on the left-hand side of an infix operator can't be an exp0 (the grammar previously said otherwise, but since the conflict was resolved as shift, this parse was not allowed). We can reformulate the exp rule as:

exp : atom op exp
    | exp0

and this removes the conflict, but at the expense of some stack space while parsing (we turned a left-recursion into a right-recursion). There are alternatives using left-recursion, but they all involve adding extra states to the parser, so most programmers will prefer to keep the conflict in favour of a clearer and more efficient parser.

LALR(1) parsers

There are three basic ways to build a shift-reduce parser.
LALR(1) parsers

There are three basic ways to build a shift-reduce parser. Full LR(1) (the 'L' is the direction in which the input is scanned, the 'R' is the way in which the parse is built, and the '1' is the number of tokens of lookahead) generates a parser with many states, and is therefore large and slow. SLR(1) (simple LR(1)) is a cut-down version of LR(1) which generates parsers with roughly one-tenth as many states, but lacks the power to parse many grammars (it finds conflicts in grammars which have none under LR(1)).

LALR(1) (look-ahead LR(1)), the method used by Happy and yacc, is a trade-off between the two. An LALR(1) parser has the same number of states as an SLR(1) parser, but it uses a more complex method to calculate the lookahead tokens that are valid at each point, and resolves many of the conflicts that SLR(1) finds. However, there may still be conflicts in an LALR(1) parser that wouldn't be there with full LR(1).

Using Happy with GHCi

GHCi's compilation manager doesn't understand Happy grammars, but with some creative use of macros and makefiles we can give the impression that GHCi is invoking Happy automatically:

1. Create a simple makefile, called Makefile_happysrcs:

    HAPPY = happy
    HAPPY_OPTS =

    all: MyParser.hs

    %.hs: %.y
            $(HAPPY) $(HAPPY_OPTS) $< -o $@

2. Create a macro in GHCi to replace the :reload command, like so (type this all on one line):

    :def myreload (\_ -> System.system "make -f Makefile_happysrcs" >>= \rr -> case rr of { System.ExitSuccess -> return ":reload" ; _ -> return "" })

3. Use :myreload (:my will do) instead of :reload (:r).

Basic monadic Happy use with Alex

Alex lexers are often used by Happy parsers, for example in GHC. While many of these applications are quite sophisticated, it is still quite useful to combine the basic Happy %monad directive with the Alex monad wrapper. By using monads for both, the resulting parser and lexer can handle errors far more gracefully than by throwing an exception.

The most straightforward way to use a monadic Alex lexer is to simply use the Alex monad as the Happy monad:

Lexer.x:

    {
    module Lexer where
    }

    %wrapper "monad"

    tokens :-
      ...

    {
    data Token = ... | EOF
      deriving (Eq, Show)

    alexEOF = return EOF
    }

Parser.y:

    {
    module Parser where
    import Lexer
    }

    %name pFoo
    %tokentype { Token }
    %error { parseError }
    %monad { Alex } { >>= } { return }
    %lexer { lexer } { EOF }

    %token
      ...

    %%

    ...

    {
    parseError :: Token -> Alex a
    parseError _ = do
      ((AlexPn _ line column), _, _, _) <- alexGetInput
      alexError ("parse error at line " ++ (show line) ++
                 ", column " ++ (show column))

    lexer :: (Token -> Alex a) -> Alex a
    lexer = (alexMonadScan >>=)
    }

We can then run the finished parser in the Alex monad using runAlex, which returns an Either value rather than throwing an exception in case of a parse or lexical error:

    import qualified Lexer as Lexer
    import qualified Parser as Parser

    parseFoo :: String -> Either String Foo
    parseFoo s = Lexer.runAlex s Parser.pFoo
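To round this off, here is a minimal, hypothetical driver for the parseFoo wrapper above. Nothing in it is prescribed by Happy or Alex; it only assumes that the result type Foo has a Show instance:

    main :: IO ()
    main = do
      s <- getContents                 -- read the whole input
      case parseFoo s of
        -- the Left message comes from parseError/alexError above
        Left err  -> putStrLn ("parse failed: " ++ err)
        Right foo -> print foo         -- the successfully parsed value

Because runAlex returns an Either, a parse or lexical failure is an ordinary value here, and no exception handling is needed.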
happy-1.20.1.1/examples/0000755000000000000000000000000007346545000013062 5ustar0000000000000000happy-1.20.1.1/examples/Calc.ly0000644000000000000000000000666607346545000014310 0ustar0000000000000000> { > module Calc where > import Char > } First thing to declare is the name of your parser, and the type of the tokens the parser reads. > %name calc > %tokentype { Token } The parser will be of type [Token] -> ?, where ? is determined by the production rules. Now we declare all the possible tokens: > %token > let { TokenLet } > in { TokenIn } > int { TokenInt $$ } > var { TokenVar $$ } > '=' { TokenEq } > '+' { TokenPlus } > '-' { TokenMinus } > '*' { TokenTimes } > '/' { TokenDiv } > '(' { TokenOB } > ')' { TokenCB } The left hand side are the names of the terminals or tokens, and the right hand side is how to pattern match them. Like yacc, we include %% here, for no real reason. > %% Now we have the production rules. > Exp :: { Exp } > Exp : let var '=' Exp in Exp { Let $2 $4 $6 } > | Exp1 { Exp1 $1 } > > Exp1 : Exp1 '+' Term { Plus $1 $3 } > | Exp1 '-' Term { Minus $1 $3 } > | Term { Term $1 } > > Term : Term '*' Factor { Times $1 $3 } > | Term '/' Factor { Div $1 $3 } > | Factor { Factor $1 } > > Factor : int { Int $1 } > | var { Var $1 } > | '(' Exp ')' { Brack $2 } We are simply returning the parsed data structure ! Now we need some extra code, to support this parser, and make in complete: > { All parsers must declare this function, which is called when an error is detected. Note that currently we do no error recovery. > happyError :: [Token] -> a > happyError _ = error ("Parse error\n") Now we declare the datastructure that we are parsing. > data Exp = Let String Exp Exp | Exp1 Exp1 > data Exp1 = Plus Exp1 Term | Minus Exp1 Term | Term Term > data Term = Times Term Factor | Div Term Factor | Factor Factor > data Factor = Int Int | Var String | Brack Exp The datastructure for the tokens... > data Token > = TokenLet > | TokenIn > | TokenInt Int > | TokenVar String > | TokenEq > | TokenPlus > | TokenMinus > | TokenTimes > | TokenDiv > | TokenOB > | TokenCB .. and a simple lexer that returns this datastructure. > lexer :: String -> [Token] > lexer [] = [] > lexer (c:cs) > | isSpace c = lexer cs > | isAlpha c = lexVar (c:cs) > | isDigit c = lexNum (c:cs) > lexer ('=':cs) = TokenEq : lexer cs > lexer ('+':cs) = TokenPlus : lexer cs > lexer ('-':cs) = TokenMinus : lexer cs > lexer ('*':cs) = TokenTimes : lexer cs > lexer ('/':cs) = TokenDiv : lexer cs > lexer ('(':cs) = TokenOB : lexer cs > lexer (')':cs) = TokenCB : lexer cs > lexNum cs = TokenInt (read num) : lexer rest > where (num,rest) = span isDigit cs > lexVar cs = > case span isAlpha cs of > ("let",rest) -> TokenLet : lexer rest > ("in",rest) -> TokenIn : lexer rest > (var,rest) -> TokenVar var : lexer rest To run the program, call this in gofer, or use some code to print it. runCalc :: String -> Exp runCalc = calc . lexer Here we test our parser. 
main = case runCalc "1 + 2 + 3" of { (Exp1 (Plus (Plus (Term (Factor (Int 1))) (Factor (Int 2))) (Factor (Int 3)))) -> case runCalc "1 * 2 + 3" of { (Exp1 (Plus (Term (Times (Factor (Int 1)) (Int 2))) (Factor (Int 3)))) -> case runCalc "1 + 2 * 3" of { (Exp1 (Plus (Term (Factor (Int 1))) (Times (Factor (Int 2)) (Int 3)))) -> case runCalc "let x = 2 in x * (x - 2)" of { (Let "x" (Exp1 (Term (Factor (Int 2)))) (Exp1 (Term (Times (Factor (Var "x")) (Brack (Exp1 (Minus (Term (Factor (Var "x"))) (Factor (Int 2))))))))) -> print "AndysTest works\n" ; _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } quit = print "runCalc failed\n" > } happy-1.20.1.1/examples/DavesExample.ly0000644000000000000000000000410407346545000016005 0ustar0000000000000000Parses simple lambda expressions to combinators > { > module Parser where > import Lexer > import Convert > import PreludeGlaArray > } > %name parse > %tokentype { Token } > %token idT { Ident $$ } > numT { Number $$ } > boolT { Boolean $$ } > "(" { LeftBracket } > ")" { RightBracket } > "[" { LeftSquare } > "]" { RightSquare } > "[]" { EmptyList } > ";" { SemiColon } > ":" { Colon } > "+" { Infix "+" } > "-" { Infix "-" } > "/" { Infix "/" } > "*" { Infix "*" } > "==" { Infix "==" } > "/=" { Infix "/=" } > ">" { Infix ">" } > "<" { Infix "<" } > ">=" { Infix ">=" } > "<=" { Infix "<=" } > "=" { Builtin "=" } > "else" { Builtin "else" } > "if" { Builtin "if" } > "in" { Builtin "in" } > "let" { Builtin "let" } > "then" { Builtin "then" } > "end" { Builtin "end" } > %% > P : "let" Dec "in" B { mkLet $2 $4} > | "if" B "then" B "else" B { mkIf $2 $4 $6} > | B { $1 } > B :: { Seq (Ptr Exp) } > B : E "==" E { mkOp $1 Equ $3 } > | E "/=" E { mkOp $1 NEq $3 } > | E ">" E { mkOp $1 GT $3 } > | E "<" E { mkOp $1 LT $3 } > | E ">=" E { mkOp $1 GTE $3 } > | E "<=" E { mkOp $1 LTE $3 } > | E { $1 } > E :: { Seq (Ptr Exp) } > E : E "+" T { mkOp $1 Add $3} > | E "-" T { mkOp $1 Sub $3} > | T { $1 } > T :: { Seq (Ptr Exp) } > T : T "*" F { mkOp $1 Mul $3 } > | T "/" F { mkOp $1 Quo $3 } > | F { $1 } > F :: { Seq (Ptr Exp) } > F : "(" B ")" { $2 } > | numT { mkNum $1 } > | boolT { mkBool $1 } > | idT { newPtr (mkVar $1) } > | Apps { mkApps $1 } > Apps :: { Seq [Ptr Exp] } > Apps : F Apps { mkApp $1 $2 } > | F { mkAtom $1 } > Dec :: { (Token,Seq (Ptr Exp)) } > Dec : idT Args "=" B { ($1, mkFun $1 $2 $4) } > Args :: { [Exp] } > Args : idT Args { mkVar $1 : $2} > | { [] } > { > happyError :: Text a => a -> b > happyError x = error ("Parse error, line " ++ show x ++ "\n") > } happy-1.20.1.1/examples/ErlParser.ly0000644000000000000000000002313707346545000015335 0ustar0000000000000000----------------------------------------------------------------------------- $Id: ErlParser.ly,v 1.2 1997/09/24 10:11:23 simonm Exp $ Syntactic analyser for Erlang Copyright : (c) 1996 Ellemtel Telecommunications Systems Laborotories, Sweden Author : Simon Marlow ----------------------------------------------------------------------------- > { > module Parser (parse) where > import GenUtils > import Lexer > import AbsSyn > import Types > import ParseMonad > } > %token > atom { T_Atom $$ } > var { T_Var $$ } > int { T_Int $$ } > float { T_Float $$ } > string { T_String $$ } > 'bor' { T_Bor } > 'bxor' { T_Bxor } > 'bsl' { T_Bsl } > 'bsr' { T_Bsr } > 'div' { T_Div } > 'rem' { T_Rem } > 'band' { T_Band } > 'bnot' { T_Bnot } > 'begin' { T_Begin } > 'end' { T_End } > 'catch' { T_Catch } > 'case' { T_Case } > 'of' { T_Of } > 'if' { T_If } > 'receive' { T_Receive } > 'after' { T_After } > 'when' { 
T_When } > 'fun' { T_Fun } > 'true' { T_True } > 'deftype' { T_DefType } > 'type' { T_Type } > '+' { T_Plus } > '-' { T_Minus } > '*' { T_Mult } > '/' { T_Divide } > '==' { T_Eq } > '/=' { T_Neq } > '=<' { T_Leq } > '<' { T_Lt } > '>=' { T_Geq } > '>' { T_Gt } > '=:=' { T_ExactEq } > '=/=' { T_ExactNeq } > '!' { T_Pling } > '=' { T_Equals } > '[' { T_LSquare } > ']' { T_RSquare } > '(' { T_LParen } > ')' { T_RParen } > '{' { T_LCurly } > '}' { T_RCurly } > ',' { T_Comma } > ';' { T_SemiColon } > '|' { T_Bar } > ':' { T_Colon } > '->' { T_Arrow } > '.' { T_Dot } > '\\' { T_BackSlash } > header_prog { T_Prog } > header_iface { T_Interface } > %monad { P } { thenP } { returnP } > %lexer { lexer } { T_EOF } > %name parse > %tokentype { Token } > %% > parse :: { ProgOrInterface } > : header_prog program { It's_a_prog $2 } > | header_iface interface { It's_an_iface $2 } > program :: { [Form] } > : { [] } > | form program { $1 : $2 } > add_op :: { BinOp } > : '+' { O_Add } > | '-' { O_Subtract } > | 'bor' { O_Bor } > | 'bxor' { O_Bxor } > | 'bsl' { O_Bsl } > | 'bsr' { O_Bsr } > comp_op :: { CompOp } > : '==' { O_Eq } > | '/=' { O_Neq } > | '=<' { O_Leq } > | '<' { O_Lt } > | '>=' { O_Geq } > | '>' { O_Gt } > | '=:=' { O_ExactEq } > | '=/=' { O_ExactNeq } > mult_op :: { BinOp } > : '*' { O_Multiply } > | '/' { O_Divide } > | 'div' { O_Div } > | 'rem' { O_Rem } > | 'band' { O_Band } > prefix_op :: { UnOp } > : '+' { O_Plus } > | '-' { O_Negate } > | 'bnot' { O_Bnot } > basic_type :: { Expr } > : atm { E_Atom $1 } > | int { E_Int $1 } > | float { E_Float $1 } > | string { foldr E_Cons E_Nil (map (E_Int . ord) $1) } > | var { E_Var $1 } > pattern :: { Expr } > : basic_type { $1 } > | '[' ']' { E_Nil } > | '[' pattern pat_tail ']' { E_Cons $2 $3 } > | '{' '}' { E_Tuple [] } > | '{' patterns '}' { E_Tuple $2 } > | atm '{' patterns '}' { E_Struct $1 $3 } > pat_tail :: { Expr } > : '|' pattern { $2 } > | ',' pattern pat_tail { E_Cons $2 $3 } > | { E_Nil } > patterns :: { [ Expr ] } > : pattern { [ $1 ] } > | pattern ',' patterns { $1 : $3 } > expr :: { Expr } > : 'catch' expr { E_Catch $2 } > | 'fun' '(' formal_param_list ')' '->' expr 'end' { E_Fun $3 $6 } > | 'fun' var '/' int { E_FunName (LocFun $2 $4) } > | 'fun' var ':' var '/' int { E_FunName (ExtFun $2 $4 $6) } > | expr200 { $1 } > expr200 :: { Expr } > : expr300 '=' expr { E_Match $1 $3 } > | expr300 '!' 
expr { E_Send $1 $3 } > | expr300 { $1 } > expr300 :: { Expr } > : expr300 add_op expr400 { E_BinOp $2 $1 $3 } > | expr400 { $1 } > expr400 :: { Expr } > : expr400 mult_op expr500 { E_BinOp $2 $1 $3 } > | expr500 { $1 } > expr500 :: { Expr } > : prefix_op expr0 { E_UnOp $1 $2 } > | expr0 { $1 } > expr0 :: { Expr } > : basic_type { $1 } > | '[' ']' { E_Nil } > | '[' expr expr_tail ']' { E_Cons $2 $3 } > | '{' maybeexprs '}' { E_Tuple $2 } > | atm '{' maybeexprs '}' { E_Struct $1 $3 } > | atm '(' maybeexprs ')' { E_Call (LocFun $1 (length $3)) $3 } > | atm ':' atm '(' maybeexprs ')' > { E_Call (ExtFun $1 $3 (length $5)) $5 } > | '(' expr ')' { $2 } > | 'begin' exprs 'end' { E_Block $2 } > | 'case' expr 'of' cr_clauses 'end' { E_Case $2 $4 } > | 'if' if_clauses 'end' { E_If $2 } > | 'receive' 'after' expr '->' exprs 'end' > { E_Receive [] (Just ($3,$5)) } > | 'receive' cr_clauses 'end' { E_Receive $2 Nothing } > | 'receive' cr_clauses 'after' expr '->' exprs 'end' > { E_Receive $2 (Just ($4,$6)) } > expr_tail :: { Expr } > : '|' expr { $2 } > | ',' expr expr_tail { E_Cons $2 $3 } > | { E_Nil } > cr_clause :: { CaseClause } > : expr clause_guard '->' exprs { ($1,$2,$4) } > clause_guard :: { [ GuardTest ] } > : 'when' guard { $2 } > | { [] } > cr_clauses :: { [ CaseClause ] } > : cr_clause { [ $1 ] } > | cr_clause ';' cr_clauses { $1 : $3 } > if_clause :: { IfClause } > : guard '->' exprs { ($1,$3) } > if_clauses :: { [ IfClause ] } > : if_clause { [ $1 ] } > | if_clause ';' if_clauses { $1 : $3 } > maybeexprs :: { [ Expr ] } > : exprs { $1 } > | { [] } > exprs :: { [ Expr ] } > : expr { [ $1 ] } > | expr ',' exprs { $1 : $3 } > guard_test :: { GuardTest } > : atm '(' maybeexprs ')' { G_Bif $1 $3 } > | expr300 comp_op expr300 { G_Cmp $2 $1 $3 } > guard_tests :: { [ GuardTest ] } > : guard_test { [ $1 ] } > | guard_test ',' guard_tests { $1 : $3 } > guard :: { [ GuardTest ] } > : 'true' { [] } > | guard_tests { $1 } > function_clause :: { FunctionClause } > : atm '(' formal_param_list ')' clause_guard '->' exprs > { (LocFun $1 (length $3),$3,$5,$7) } > formal_param_list :: { [ Expr ] } > : { [] } > | patterns { $1 } > function :: { Function } > : function_clause { [ $1 ] } > | function_clause ';' function { $1 : $3 } > attribute :: { Attribute } > : pattern { A_Pat $1 } > | '[' farity_list ']' { A_Funs $2 } > | atm ',' '[' maybe_farity_list ']' { A_AtomAndFuns $1 $4 } > maybe_farity_list :: { [ Fun ] } > : farity_list { $1 } > | { [] } > farity_list :: { [ Fun ] } > : farity { [ $1 ] } > | farity ',' farity_list { $1 : $3 } > farity :: { Fun } > : atm '/' int { LocFun $1 $3 } > form :: { Form } > : '-' atm '(' attribute ')' '.' { F_Directive $2 $4 } > | '-' 'type' sigdef '.' { $3 } > | '-' 'deftype' deftype '.' { $3 } > | function '.' { F_Function $1 } > abstype :: { Form } > : atm '(' maybetyvars ')' maybeconstraints > { F_AbsTypeDef (Tycon $1 (length $3)) $3 (snd $5) } > deftype :: { Form } > : atm '(' maybetyvars ')' '=' utype maybeconstraints > { F_TypeDef (Tycon $1 (length $3)) $3 $6 (fst $7) (snd $7) } > sigdef :: { Form } > : atm '(' maybeutypes ')' '->' utype maybeconstraints > { F_TypeSig ($1,length $3) $3 $6 (fst $7) (snd $7) } > header :: { (String,Int,[UType]) } > : atm '(' maybeutypes ')' { ($1, length $3, $3) } > tycon_args :: { [ TyVar ] } > : tycon_args ',' var { STyVar $3 : $1 } > | var { [ STyVar $1 ] } ----------------------------------------------------------------------------- Interfaces & Types > interface :: { (Module, [ Form ]) } > : '-' atm '(' atm ')' '.' 
signatures > { ($4, $7) } > signatures :: { [ Form ] } > : signatures typedef '.' { $2 : $1 } > | { [] } > typedef :: { Form } > typedef > : '-' 'deftype' deftype { $3 } > | '-' 'deftype' abstype { $3 } > | sigdef { $1 } > maybeconstraints :: { ([Constraint], [VarConstraint]) } > : 'when' constraints { splitConstraints $2 } > | { ([],[]) } > constraints :: { [ VarOrTypeCon ] } > : constraints ';' constraint { $1 ++ $3 } > | constraint { $1 } > constraint :: { [ VarOrTypeCon ] } > : utype '<' '=' utype { [TypeCon ($1,$4)] } > | utype '=' utype { [TypeCon ($1,$3),TypeCon($3,$1)] } > | var '\\' tags { [VarCon (STyVar $1,(canonTags $3))] } > maybeutypes :: { [ UType ] } > : utypes { reverse $1 } > | { [] } > utypes :: { [ UType ] } > : utypes ',' utype { $3 : $1 } > | utype { [$1] } > maybetyvars :: { [ TyVar ] } > : tyvars { reverse $1 } > | { [] } > tyvars :: { [ TyVar ] } > : tyvars ',' var { STyVar $3 : $1 } > | var { [ STyVar $1 ] } > utype :: { UType } > : ptypes { U (reverse $1) [] } > | ptypes '|' tyvar { U (reverse $1) [$3] } > | tyvar { U [] [$1] } > | '(' utype ')' { $2 } > | '(' ')' { U [] [] } > tyvar :: { TaggedTyVar } > : var { TyVar [] (STyVar $1) } > | int { if $1 /= 1 then > error "Illegal type variable" > else universalTyVar } > | int '\\' tags { if $1 /= 1 then > error "Illegal type variable" > else partialUniversalTyVar $3 } > ptypes :: { [ PType ] } > : ptypes '|' ptype { $3 : $1 } > | ptype { [$1] } > ptype :: { PType } > : atm '(' ')' { conToType $1 [] } > | atm '(' utypes ')' { conToType $1 (reverse $3) } > | atm { TyAtom $1 } > | '{' utypes '}' { tytuple (reverse $2) } > | atm '{' maybeutypes '}' { TyStruct $1 $3 } > | '[' utype ']' { tylist $2 } > taglist :: { [ Tag ] } > : taglist ',' tag { $3 : $1 } > | tag { [ $1 ] } > tags :: { [ Tag ] } > : tag { [ $1 ] } > | '(' taglist ')' { $2 } > tag :: { Tag } > : atm '(' ')' { conToTag $1 } > | atm { TagAtom $1 } > | atm '/' int { TagStruct $1 $3 } > | '{' int '}' { tagtuple $2 } > | '[' ']' { taglist } Horrible - keywords that can be atoms too. > atm :: { String } > : atom { $1 } > | 'true' { "true" } > | 'deftype' { "deftype" } > | 'type' { "type" } > { > utypeToVar (U [] [TyVar [] x]) = x > utypeToVar _ = error "Type constructor arguments must be variables\n" > happyError :: P a > happyError s line = failP (show line ++ ": Parse error\n") s line > } happy-1.20.1.1/examples/ErrorTest.ly0000644000000000000000000001044207346545000015362 0ustar0000000000000000----------------------------------------------------------------------------- Test for monadic Happy Parsers, Simon Marlow 1996. 
> { > import Char > } > %name calc > %tokentype { Token } > %monad { P } { thenP } { returnP } > %lexer { lexer } { TokenEOF } > %token > let { TokenLet } > in { TokenIn } > int { TokenInt $$ } > var { TokenVar $$ } > '=' { TokenEq } > '+' { TokenPlus } > '-' { TokenMinus } > '*' { TokenTimes } > '/' { TokenDiv } > '(' { TokenOB } > ')' { TokenCB } > %% > Exp :: {Exp} > : let var '=' Exp in Exp {% \s l -> ParseOk (Let l $2 $4 $6) } > | Exp1 { Exp1 $1 } > > Exp1 :: {Exp1} > : Exp1 '+' Term { Plus $1 $3 } > | Exp1 '-' Term { Minus $1 $3 } > | Term { Term $1 } > | error { Term (Factor (Int 1)) } > > Term :: {Term} > : Term '*' Factor { Times $1 $3 } > | Term '/' Factor { Div $1 $3 } > | Factor { Factor $1 } > > Factor :: {Factor} > : int { Int $1 } > | var { Var $1 } > | '(' Exp ')' { Brack $2 } > { ----------------------------------------------------------------------------- The monad serves three purposes: * it passes the input string around * it passes the current line number around * it deals with success/failure. > data ParseResult a > = ParseOk a > | ParseFail String > type P a = String -> Int -> ParseResult a > thenP :: P a -> (a -> P b) -> P b > m `thenP` k = \s l -> > case m s l of > ParseFail s -> ParseFail s > ParseOk a -> k a s l > returnP :: a -> P a > returnP a = \s l -> ParseOk a ----------------------------------------------------------------------------- Now we declare the datastructure that we are parsing. > data Exp = Let Int String Exp Exp | Exp1 Exp1 > data Exp1 = Plus Exp1 Term | Minus Exp1 Term | Term Term > data Term = Times Term Factor | Div Term Factor | Factor Factor > data Factor = Int Int | Var String | Brack Exp The datastructure for the tokens... > data Token > = TokenLet > | TokenIn > | TokenInt Int > | TokenVar String > | TokenEq > | TokenPlus > | TokenMinus > | TokenTimes > | TokenDiv > | TokenOB > | TokenCB > | TokenEOF .. and a simple lexer that returns this datastructure. > lexer :: (Token -> P a) -> P a > lexer cont s = case s of > [] -> cont TokenEOF [] > ('\n':cs) -> \line -> lexer cont cs (line+1) > (c:cs) > | isSpace c -> lexer cont cs > | isAlpha c -> lexVar (c:cs) > | isDigit c -> lexNum (c:cs) > ('=':cs) -> cont TokenEq cs > ('+':cs) -> cont TokenPlus cs > ('-':cs) -> cont TokenMinus cs > ('*':cs) -> cont TokenTimes cs > ('/':cs) -> cont TokenDiv cs > ('(':cs) -> cont TokenOB cs > (')':cs) -> cont TokenCB cs > where > lexNum cs = cont (TokenInt (read num)) rest > where (num,rest) = span isDigit cs > lexVar cs = > case span isAlpha cs of > ("let",rest) -> cont TokenLet rest > ("in",rest) -> cont TokenIn rest > (var,rest) -> cont (TokenVar var) rest > runCalc :: String -> Exp > runCalc s = case calc s 1 of > ParseOk e -> e > ParseFail s -> error s ----------------------------------------------------------------------------- The following functions should be defined for all parsers. This is the overall type of the parser. > type Parse = P Exp > calc :: Parse The next function is called when a parse error is detected. It has the same type as the top-level parse function. > happyError :: P a > happyError = \s i -> error ( > "Parse error in line " ++ show (i::Int) ++ "\n") ----------------------------------------------------------------------------- Here we test our parser. 
> main = case runCalc "1 + 2 + 3" of { > (Exp1 (Plus (Plus (Term (Factor (Int 1))) (Factor (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 * 2 + 3" of { > (Exp1 (Plus (Term (Times (Factor (Int 1)) (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 + 2 * 3" of { > (Exp1 (Plus (Term (Factor (Int 1))) (Times (Factor (Int 2)) (Int 3)))) -> > case runCalc "+ 2 * 3" of { > (Exp1 (Plus (Term (Factor (Int 1))) (Times (Factor (Int 2)) (Int 3)))) -> > case runCalc "let x = 2 in x * (x - 2)" of { > (Let 1 "x" (Exp1 (Term (Factor (Int 2)))) (Exp1 (Term (Times (Factor (Var "x")) (Brack (Exp1 (Minus (Term (Factor (Var "x"))) (Factor (Int 2))))))))) -> print "Test works\n"; > _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } > quit = print "Test failed\n" > } happy-1.20.1.1/examples/LexerTest.ly0000644000000000000000000001014407346545000015347 0ustar0000000000000000----------------------------------------------------------------------------- Test for monadic Happy Parsers, Simon Marlow 1996. > { > import Char > } > %name calc > %tokentype { Token } > %monad { P } { thenP } { returnP } > %lexer { lexer } { TokenEOF } > %token > let { TokenLet } > in { TokenIn } > int { TokenInt $$ } > var { TokenVar $$ } > '=' { TokenEq } > '+' { TokenPlus } > '-' { TokenMinus } > '*' { TokenTimes } > '/' { TokenDiv } > '(' { TokenOB } > ')' { TokenCB } > %% > Exp :: {Exp} > : let var '=' Exp in Exp {% \s l -> ParseOk (Let l $2 $4 $6) } > | Exp1 { Exp1 $1 } > > Exp1 :: {Exp1} > : Exp1 '+' Term { Plus $1 $3 } > | Exp1 '-' Term { Minus $1 $3 } > | Term { Term $1 } > > Term :: {Term} > : Term '*' Factor { Times $1 $3 } > | Term '/' Factor { Div $1 $3 } > | Factor { Factor $1 } > > Factor :: {Factor} > : int { Int $1 } > | var { Var $1 } > | '(' Exp ')' { Brack $2 } > { ----------------------------------------------------------------------------- The monad serves three purposes: * it passes the input string around * it passes the current line number around * it deals with success/failure. > data ParseResult a > = ParseOk a > | ParseFail String > type P a = String -> Int -> ParseResult a > thenP :: P a -> (a -> P b) -> P b > m `thenP` k = \s l -> > case m s l of > ParseFail s -> ParseFail s > ParseOk a -> k a s l > returnP :: a -> P a > returnP a = \s l -> ParseOk a ----------------------------------------------------------------------------- Now we declare the datastructure that we are parsing. > data Exp = Let Int String Exp Exp | Exp1 Exp1 > data Exp1 = Plus Exp1 Term | Minus Exp1 Term | Term Term > data Term = Times Term Factor | Div Term Factor | Factor Factor > data Factor = Int Int | Var String | Brack Exp The datastructure for the tokens... > data Token > = TokenLet > | TokenIn > | TokenInt Int > | TokenVar String > | TokenEq > | TokenPlus > | TokenMinus > | TokenTimes > | TokenDiv > | TokenOB > | TokenCB > | TokenEOF .. and a simple lexer that returns this datastructure. 
> lexer :: (Token -> P a) -> P a > lexer cont s = case s of > [] -> cont TokenEOF [] > ('\n':cs) -> \line -> lexer cont cs (line+1) > (c:cs) > | isSpace c -> lexer cont cs > | isAlpha c -> lexVar (c:cs) > | isDigit c -> lexNum (c:cs) > ('=':cs) -> cont TokenEq cs > ('+':cs) -> cont TokenPlus cs > ('-':cs) -> cont TokenMinus cs > ('*':cs) -> cont TokenTimes cs > ('/':cs) -> cont TokenDiv cs > ('(':cs) -> cont TokenOB cs > (')':cs) -> cont TokenCB cs > where > lexNum cs = cont (TokenInt (read num)) rest > where (num,rest) = span isDigit cs > lexVar cs = > case span isAlpha cs of > ("let",rest) -> cont TokenLet rest > ("in",rest) -> cont TokenIn rest > (var,rest) -> cont (TokenVar var) rest > runCalc :: String -> Exp > runCalc s = case calc s 1 of > ParseOk e -> e > ParseFail s -> error s ----------------------------------------------------------------------------- The following functions should be defined for all parsers. This is the overall type of the parser. > calc :: P Exp The next function is called when a parse error is detected. It has the same type as the top-level parse function. > happyError :: P a > happyError = \s i -> error ( > "Parse error in line " ++ show (i::Int) ++ "\n") ----------------------------------------------------------------------------- Here we test our parser. > main = case runCalc "1 + 2 + 3" of { > (Exp1 (Plus (Plus (Term (Factor (Int 1))) (Factor (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 * 2 + 3" of { > (Exp1 (Plus (Term (Times (Factor (Int 1)) (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 + 2 * 3" of { > (Exp1 (Plus (Term (Factor (Int 1))) (Times (Factor (Int 2)) (Int 3)))) -> > case runCalc "let x = 2 in x * (x - 2)" of { > (Let 1 "x" (Exp1 (Term (Factor (Int 2)))) (Exp1 (Term (Times (Factor (Var "x")) (Brack (Exp1 (Minus (Term (Factor (Var "x"))) (Factor (Int 2))))))))) -> print "Test works\n"; > _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } > quit = print "Test failed\n" > } happy-1.20.1.1/examples/MonadTest.ly0000644000000000000000000000706507346545000015336 0ustar0000000000000000----------------------------------------------------------------------------- Tests %monad without %lexer. > { > import Char > } > %name calc > %tokentype { Token } > %monad { P } { thenP } { returnP } > %token > let { TokenLet } > in { TokenIn } > int { TokenInt $$ } > var { TokenVar $$ } > '=' { TokenEq } > '+' { TokenPlus } > '-' { TokenMinus } > '*' { TokenTimes } > '/' { TokenDiv } > '(' { TokenOB } > ')' { TokenCB } > %% > Exp :: {Exp} > : let var '=' Exp in Exp { Let $2 $4 $6 } > | Exp1 { Exp1 $1 } > > Exp1 :: {Exp1} > : Exp1 '+' Term { Plus $1 $3 } > | Exp1 '-' Term { Minus $1 $3 } > | Term { Term $1 } > > Term :: {Term} > : Term '*' Factor { Times $1 $3 } > | Term '/' Factor { Div $1 $3 } > | Factor { Factor $1 } > > Factor :: {Factor} > : int { Int $1 } > | var { Var $1 } > | '(' Exp ')' { Brack $2 } > { ----------------------------------------------------------------------------- The monad serves two purposes: * it passes the current line number around * it deals with success/failure. > data ParseResult a > = ParseOk a > | ParseFail String > type P a = Int -> ParseResult a > thenP :: P a -> (a -> P b) -> P b > m `thenP` k = \l -> > case m l of > ParseFail s -> ParseFail s > ParseOk a -> k a l > returnP :: a -> P a > returnP a = \l -> ParseOk a ----------------------------------------------------------------------------- Now we declare the datastructure that we are parsing. 
> data Exp = Let String Exp Exp | Exp1 Exp1 > data Exp1 = Plus Exp1 Term | Minus Exp1 Term | Term Term > data Term = Times Term Factor | Div Term Factor | Factor Factor > data Factor = Int Int | Var String | Brack Exp The datastructure for the tokens... > data Token > = TokenLet > | TokenIn > | TokenInt Int > | TokenVar String > | TokenEq > | TokenPlus > | TokenMinus > | TokenTimes > | TokenDiv > | TokenOB > | TokenCB > | TokenEOF .. and a simple lexer that returns this datastructure. > lexer :: String -> [Token] > lexer [] = [] > lexer (c:cs) > | isSpace c = lexer cs > | isAlpha c = lexVar (c:cs) > | isDigit c = lexNum (c:cs) > lexer ('=':cs) = TokenEq : lexer cs > lexer ('+':cs) = TokenPlus : lexer cs > lexer ('-':cs) = TokenMinus : lexer cs > lexer ('*':cs) = TokenTimes : lexer cs > lexer ('/':cs) = TokenDiv : lexer cs > lexer ('(':cs) = TokenOB : lexer cs > lexer (')':cs) = TokenCB : lexer cs > lexNum cs = TokenInt (read num) : lexer rest > where (num,rest) = span isDigit cs > lexVar cs = > case span isAlpha cs of > ("let",rest) -> TokenLet : lexer rest > ("in",rest) -> TokenIn : lexer rest > (var,rest) -> TokenVar var : lexer rest > runCalc :: String -> Exp > runCalc s = case calc (lexer s) 1 of > ParseOk e -> e > ParseFail s -> error s > happyError = \tks i -> error ( > "Parse error in line " ++ show (i::Int) ++ "\n") ----------------------------------------------------------------------------- Here we test our parser. > main = case runCalc "1 + 2 + 3" of { > (Exp1 (Plus (Plus (Term (Factor (Int 1))) (Factor (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 * 2 + 3" of { > (Exp1 (Plus (Term (Times (Factor (Int 1)) (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 + 2 * 3" of { > (Exp1 (Plus (Term (Factor (Int 1))) (Times (Factor (Int 2)) (Int 3)))) -> > case runCalc "let x = 2 in x * (x - 2)" of { > (Let "x" (Exp1 (Term (Factor (Int 2)))) (Exp1 (Term (Times (Factor (Var "x")) (Brack (Exp1 (Minus (Term (Factor (Var "x"))) (Factor (Int 2))))))))) -> print "Test works\n"; > _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } > quit = print "Test failed\n" > } happy-1.20.1.1/examples/PgnParser.ly0000644000000000000000000000367007346545000015337 0ustar0000000000000000> { > module PgnParser (pgnMoveParser,pgnParser) where > import GenUtils > import OrdFM > import Board > import PgnTypes > } > %name pgnMoveParser > %tokentype { Token } > %token > str { StringToken $$ } > result { ResultToken $$ } > nag { NAGToken $$ } > tag { TagToken $$ } > comment { CommentToken $$ } > ']' { RightSBToken } > '(' { LeftRBToken } > ')' { RightRBToken } > '<' { LeftABToken } > '>' { RightABToken } > num { IntToken $$ } > '.' { PeriodToken } > move { PlyToken $$ } > %newline { NewlineToken } > %% You either parse a set of PGN games, or just a set of moves. > moves :: { AbsMove } > moves : opt_mv_num line_no move nags opt_comment analmoves opt_comment > more_moves > { AbsMove $1 $2 $3 $4 ($5++$7) $6 Nothing $8 } > | opt_mv_num line_no move nags opt_comment more_moves > { AbsMove $1 $2 $3 $4 $5 [] Nothing $6 } > | opt_mv_num line_no move '<' raw_moves '>' more_moves > { AbsMove $1 $2 $3 [] [] [] (Just $5) $7 } > more_moves :: { AbsMove } > more_moves > : moves { $1 } > | result { AbsResult $1 } > | { AbsEnd } > nags :: { [Int] } > nags : nag nags { $1 : $2 } > | { [] } > opt_mv_num :: { Maybe MoveNumber } > opt_mv_num > : num '.' '.' '.' { Just (MoveNumber $1 Black) } > | num '.' { Just (MoveNumber $1 White) } > | { Nothing } > mv_num :: { MoveNumber } > mv_num > : num '.' '.' '.' 
{ (MoveNumber $1 Black) } > | num '.' { (MoveNumber $1 White) } > opt_comment :: { [String] } > opt_comment > : comment { $1 } > | { [] } > analmoves :: { [AbsMove] } > analmoves > : '(' moves ')' analmoves { $2 : $4 } > | '(' moves ')' { [$2] } > line_no :: { LineNo } > line_no > : { $# } > raw_moves :: { [AbsPly] } > raw_moves > : move raw_moves { $1 : $2 } > | { [] } > { > pgnParser = pgnGameMap pgnMoveParser > happyError :: Int -> [Token] -> a > happyError i xs = > error ("Parse error in line " ++ show i ++ "\n" > ++ show (take 10 xs)) > } happy-1.20.1.1/examples/README0000644000000000000000000000160507346545000013744 0ustar0000000000000000These are a few examples of Happy parsers, taken from various sources. The are intended as illustrations, not as working, complete examples, as some require functions and datatypes imported from other sources. Calc.ly : The calculator example from the Happy manual DavesExample.ly : Parses simple lambda expressions to combinators SimonsExample.ly : Another lambda expression parser ErlParser.ly : A parser for Erlang MonadTest.ly : Demonstrates use of %monad LexerTest.ly : Demonstrates use of %monad and %lexer ErrorTest.ly : Demonstrates use of the 'error' token There are also a few more examples under happy/tests. A full Haskell 98 parser written using Happy is available from the GHC CVS repository in fptools/hslibs/hssource. See http://www.haskell.org/ghc/ for instructions on how to access the GHC CVS repository. -- Simon Marlow happy-1.20.1.1/examples/SimonsExample.ly0000644000000000000000000000320307346545000016212 0ustar0000000000000000> { > module Parser (parse) where > import Type > import Lexer > } > %token > backslash { Builtin "\\" } > var { Ident $$ } > rightarrow { Builtin "->" } > caseT { Builtin "case" } > letT { Builtin "let" } > ofT { Builtin "of" } > inT { Builtin "in" } > letnT { Builtin "letn" } > leftcurly { LeftCurly } > rightcurly { RightCurly } > equals { Builtin "=" } > colon { Builtin ":" } > cons { Constructor $$ } > leftbracket { LeftBracket } > rightbracket { RightBracket } > semicolon { SemiColon } > percent { Percent } > %name parse > %tokentype { Token } > %% > expr > : backslash var binders rightarrow expr > { foldr Lambda $5 ($2: reverse $3) } > | caseT var ofT leftcurly patterns rightcurly > { Case $2 (reverse $5) } > | letT var equals var expr inT expr > { LetApp ($2,$4,$5) $7 } > | letT var equals expr inT expr > { Let ($2,$4) $6 } > | letnT var equals expr inT expr > { LetN ($2,$4) $6 } > > | labelref colon expr { Label $1 $3 } > | simpleexpr { $1 } > simpleexpr > : cons simpleexprs { Cons $1 (reverse $2) } > | simpleexpr0 { $1 } > > simpleexprs > : simpleexprs simpleexpr0 { $2 : $1 } > | { [] } > > simpleexpr0 > : var { Var $1 } > | labelref { LabelRef $1 } > | leftbracket expr rightbracket { $2 } > > patterns > : patterns pattern { $2 : $1 } > | pattern { [ $1 ] } > > pattern : cons binders rightarrow expr semicolon > { ($1, reverse $2, $4) } > > binders : binders var { $2 : $1 } > | { [ ] } > > labelref > : percent var { $2 } > { > happyError :: Int -> a > happyError x = error ("Error at LINE " ++ show x) > } happy-1.20.1.1/examples/glr/0000755000000000000000000000000007346545000013646 5ustar0000000000000000happy-1.20.1.1/examples/glr/Makefile0000644000000000000000000000035407346545000015310 0ustar0000000000000000 all : make loop CMD=run clean : make loop CMD=clean DIRS = expr-eval expr-tree expr-monad \ hidden-leftrec highly-ambiguous packing \ nlp bio-eg loop : for d in ${DIRS}; do (cd $$d && make ${CMD}) || 
exit 1; done happy-1.20.1.1/examples/glr/Makefile.defs0000644000000000000000000000034507346545000016230 0ustar0000000000000000.SUFFIXES: .y .hs .exe OPT= GHC=ghc -rtsopts -I../common -i../common -fno-warn-tabs ${OPT} # -dshow-passes HAPPY=happy FILTER = --filter FILTER = DECODE = H_OPT = .y.hs : ${HAPPY} -i -l ${DECODE} ${FILTER} ${H_OPT} $*.y happy-1.20.1.1/examples/glr/bio-eg/0000755000000000000000000000000007346545000015010 5ustar0000000000000000happy-1.20.1.1/examples/glr/bio-eg/1-1200.dna0000644000000000000000000000226107346545000016215 0ustar0000000000000000agcttttcattctgactgcaacgggcaatatgtctctgtgtggattaaaaaaagagtgtctgatagcagcttctgaactggttacctgccgtgagtaaattaaaattttattgacttaggtcactaaatactttaaccaatataggcatagcgcacagacagataaaaattacagagtacacaacatccatgaaacgcattagcaccaccattaccaccaccatcaccattaccacaggtaacggtgcgggctgacgcgtacaggaaacacagaaaaaagcccgcacctgacagtgcgggctttttttttcgaccaaaggtaacgaggtaacaaccatgcgagtgttgaagttcggcggtacatcagtggcaaatgcagaacgttttctgcgtgttgccgatattctggaaagcaatgccaggcaggggcaggtggccaccgtcctctctgcccccgccaaaatcaccaaccacctggtggcgatgattgaaaaaaccattagcggccaggatgctttacccaatatcagcgatgccgaacgtatttttgccgaacttttgacgggactcgccgccgcccagccggggttcccgctggcgcaattgaaaactttcgtcgatcaggaatttgcccaaataaaacatgtcctgcatggcattagtttgttggggcagtgcccggatagcatcaacgctgcgctgatttgccgtggcgagaaaatgtcgatcgccattatggccggcgtattagaagcgcgcggtcacaacgttactgttatcgatccggtcgaaaaactgctggcagtggggcattacctcgaatctaccgtcgatattgctgagtccacccgccgtattgcggcaagccgcattccggctgatcacatggtgctgatggcaggtttcaccgccggtaatgaaaaaggcgaactggtggtgcttggacgcaacggttccgactactctgctgcggtgctggctgcctgtttacgcgccgattgttgcgagatttggacggacgttgacggggtctatacctgcgacccgcgtcaggtgcccgatgcgaggttgttgaagtcgatgtcctaccaggaagcgatggagctttcctacttcggcgctaaagttcttcacccccgcaccattacccccatcgcccagttccagatcccttgcctgattaaaaataccggaaatcct happy-1.20.1.1/examples/glr/bio-eg/1-600.dna0000644000000000000000000000113107346545000016133 0ustar0000000000000000agcttttcattctgactgcaacgggcaatatgtctctgtgtggattaaaaaaagagtgtctgatagcagcttctgaactggttacctgccgtgagtaaattaaaattttattgacttaggtcactaaatactttaaccaatataggcatagcgcacagacagataaaaattacagagtacacaacatccatgaaacgcattagcaccaccattaccaccaccatcaccattaccacaggtaacggtgcgggctgacgcgtacaggaaacacagaaaaaagcccgcacctgacagtgcgggctttttttttcgaccaaaggtaacgaggtaacaaccatgcgagtgttgaagttcggcggtacatcagtggcaaatgcagaacgttttctgcgtgttgccgatattctggaaagcaatgccaggcaggggcaggtggccaccgtcctctctgcccccgccaaaatcaccaaccacctggtggcgatgattgaaaaaaccattagcggccaggatgctttacccaatatcagcgatgccgaacgtatttttgccgaacttttgacgggactcgccgccgcccagccggggttcccgctggcg happy-1.20.1.1/examples/glr/bio-eg/Bio.y0000644000000000000000000003201307346545000015712 0ustar0000000000000000{ -- (c) 2004 University of Durham, Julia Fischer -- Portions of the grammar are derived from work by Leung/Mellish/Robertson import Data.Char } %tokentype { Token } %token a { Base_A } c { Base_C } g { Base_G } t { Base_T } %lexer { lexer } { TokenEOF } %% M : Intergenic_noise Match Intergenic_noise {} -- replace NSkip by Intergenic_noise? Intergenic_noise : {} | Intergenic_noise N {} -- Left-assoc, less stack? 
Match : Promoter Translation {} Promoter :: {Int} : Promoter_consensus {1} | Promoter_hcv_large {2} | Promoter_cart {3} | Promoter_hcv_small {4} -------------------- -- HCV SMALL -------------------- -- regions [data from Leung (hvc_small.gr)] Promoter_hcv_small : N V N7_skip K B K N20_skip R N12_skip {} --mod 3 = 0 | K N B N N D N18_skip H N9_skip V N {} --mod 3 = 0 | t N20_skip N6_skip t N4_skip t N6_skip {} --mod 3 = 0 -------------------- -- CONSENSUS -------------------- -- regions [data from Leung (consensus.gr)] Promoter_consensus : Minus_35 N15_skip Minus_10 {} | Minus_35 N15_skip N1_skip Minus_10 N5_skip {} | Minus_35 N15_skip N2_skip Minus_10 N5_skip {} | Minus_35 N15_skip N3_skip Minus_10 N5_skip {} | Minus_35 N15_skip N4_skip Minus_10 N5_skip {} Minus_35 : t t g a c a {} Minus_10 : t a t a a t {} -------------------- -- HVC LARGE -------------------- -- regions [data from Leung (hvc_large.gr)] Promoter_hcv_large : H N11_skip D Y B N3_skip H N12_skip B N5_skip Y N2_skip W N4_skip {} | N D N3_skip V N1_skip B N12_skip H N2_skip B D N2_skip H N2_skip H B N4_skip W N6_skip H H {} | N H N B N D N6_skip H N4_skip K B N6_skip D B N3_skip B N4_skip V N4_skip H N2_skip D N7_skip {} | N N D N12_skip B D N2_skip V N2_skip H D N2_skip D H B N7_skip B D N5_skip H H N6_skip {} | D N D N12_skip B N5_skip H N13_skip B N H H W N6_skip H Y {} | N N D N B N D N H N3_skip D N4_skip V N2_skip H N D H N6_skip H N3_skip D N6_skip H N2_skip B N3_skip {} | D N8_skip H N1_skip H N1_skip D N4_skip H N3_skip V H N11_skip H N2_skip H N5_skip D N1_skip V N1_skip H {} | H N3_skip B N9_skip H N12_skip H D N4_skip W B N2_skip D D H N1_skip D N5_skip D H {} | V N7_skip V N2_skip D N2_skip D N6_skip B H N11_skip D D N1_skip H N1_skip H H N1_skip B N2_skip {} | D N8_skip B D D N2_skip B N6_skip H N4_skip D N5_skip D N1_skip H D N2_skip D N3_skip D D N6_skip {} | B N13_skip H N1_skip D H V N14_skip B N1_skip V N2_skip D N1_skip D V D N1_skip D N3_skip H {} | H V N4_skip B N1_skip D N6_skip D N4_skip D N4_skip H H N3_skip B N6_skip B N1_skip D N3_skip D N1_skip D N4_skip {} | W N3_skip V N9_skip D N11_skip B N1_skip D H N5_skip D H N1_skip D N1_skip H D N6_skip {} | K N2_skip D N3_skip H N1_skip H N6_skip H N2_skip B N5_skip D D N7_skip V N2_skip D N1_skip H H N7_skip {} | D N11_skip H D D N2_skip D N6_skip D N3_skip H N6_skip V N1_skip D D N2_skip H B N1_skip B N1_skip {} | H N3_skip B N1_skip H N6_skip V N1_skip B N2_skip V N2_skip D N7_skip B N8_skip H N3_skip H D N1_skip H N1_skip H N1_skip {} | B N4_skip B N12_skip H N4_skip V N2_skip H D N2_skip V H N1_skip H N2_skip H N3_skip B N1_skip K N4_skip {} | W D N7_skip B N1_skip D N2_skip D N2_skip W N1_skip D H N2_skip D N12_skip D N5_skip H {} | a N2_skip t N4_skip g N18_skip {} -------------------- -- CART -------------------- -- regions [data from Leung (cart.gr)] Promoter_cart : N N t a N N N N N N N N N N N {} | N N V a N N N t N N N N N N N {} | t N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N t B N N N t N N N N N N N t N N N N N N N {} -------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------- Translation : Start Mincodon Stop {} | Start Mincodon Codon Stop {} | Start Mincodon Codon Codon Stop {} | Start Mincodon Codon Codon Codon Stop {} | Start Mincodon Codon Codon Codon Codon Stop {} | Start Mincodon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Codon Codon Codon 
Codon Codon Codon Stop {} | Start Mincodon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Stop {} | Start Mincodon Mincodon Codon Stop {} | Start Mincodon Mincodon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Stop {} | Start Mincodon Mincodon Mincodon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon 
Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Mincodon Stop {} --252 Basen Mincodon : Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon {} --42 Basen N0_skip : {} N1_skip : N {} -- match starts one place on N2_skip : N N {} -- match starts two places on N3_skip : N N N {} -- missing an entire codon N4_skip : N N N N {} -- missing 4 bases N5_skip : N N N N N {} -- missing 5 bases N6_skip : N N N N N N {} -- missing 6 bases N7_skip : N N N N N N N {} -- missing 8 bases N8_skip : N N N N N N N N {} -- missing 7 bases N9_skip : N N N N N N N N N {} -- missing 9 bases N10_skip : N N N N N N N N N N {} -- missing 10 bases N11_skip : N10_skip N1_skip {} -- missing 11 bases N12_skip : N10_skip N2_skip {} -- missing 12 bases N13_skip : N10_skip N3_skip {} -- missing 13 bases N14_skip : N10_skip N4_skip {} -- missing 14 bases N15_skip : N10_skip N5_skip {} -- missing 15 bases N16_skip : N10_skip N6_skip {} -- missing 16 bases N17_skip : N10_skip N7_skip {} -- missing 17 bases N18_skip : N10_skip N8_skip {} -- missing 18 bases N19_skip : N10_skip N9_skip {} -- missing 19 bases N20_skip : N10_skip N10_skip {} -- missing 20 bases N30_skip : N10_skip N10_skip N10_skip {} -- missing 30 bases N40_skip : N10_skip N10_skip N10_skip N10_skip {} -- missing 40 bases N50_skip : N10_skip N10_skip N10_skip N10_skip N10_skip {} -- missing 50 bases N60_skip : N10_skip N50_skip {} -- missing 40 bases N70_skip : N10_skip N10_skip N50_skip {} -- missing 50 bases N80_skip : N10_skip N10_skip N10_skip N50_skip {} -- missing 40 bases N90_skip : N10_skip N10_skip N10_skip N10_skip N50_skip{} -- missing 50 
bases N100_skip : N50_skip N50_skip {} -- Definitions of base categories according to the -- International Union of Biochemistry (IUB) -- Standard Nucleotide Codes. [Leung_data] N -- any base : a {} | c {} | g {} | t {} Y -- pyrimidin : c {} | t {} R -- purine : a {} | g {} S -- strong bonding bases : g {} | c {} W -- weak bonding bases : a {} | t {} K -- keto bases : g {} | t {} AM -- aMino bases : a {} | c {} B -- not base a : g {} | c {} | t {} D -- not base c : a {} | g {} | t {} H -- not base g : a {} | c {} | t {} V -- not base t : a {} | c {} | g {} Base : a {} | c {} | g {} | t {} -------------------- -- codons Start : a t g {} -- start codon Stop -- stop codons : t a a {} | t a g {} | t g a {} Codon -- any other codon : a a a {} | a a c {} | a a g {} | a a t {} | a c a {} | a c c {} | a c g {} | a c t {} | a g a {} | a g c {} | a g g {} | a g t {} | a t a {} | a t c {} | a t g {} | a t t {} | c a a {} | c a c {} | c a g {} | c a t {} | c c a {} | c c c {} | c c g {} | c c t {} | c g a {} | c g c {} | c g g {} | c g t {} | c t a {} | c t c {} | c t g {} | c t t {} | g a a {} | g a c {} | g a g {} | g a t {} | g c a {} | g c c {} | g c g {} | g c t {} | g g a {} | g g c {} | g g g {} | g g t {} | g t a {} | g t c {} | g t g {} | g t t {} | t a c {} | t a t {} | t c a {} | t c c {} | t c g {} | t c t {} | t g c {} | t g g {} | t g t {} | t t a {} | t t c {} | t t g {} | t t t {} -------------------- --%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% --some aux code { data Token = TokenEOF | Base_A | Base_C | Base_G | Base_T deriving (Show,Eq, Ord) lexer :: String -> [Token] lexer [] = [] lexer (' ':cs) = lexer cs lexer ('\n':cs) = lexer cs lexer ('a':cs) = Base_A : lexer cs lexer ('c':cs) = Base_C : lexer cs lexer ('g':cs) = Base_G : lexer cs lexer ('t':cs) = Base_T : lexer cs } happy-1.20.1.1/examples/glr/bio-eg/Main.lhs0000644000000000000000000000411107346545000016401 0ustar0000000000000000> module Main where > import System.Environment(getArgs) > import Data.Maybe(fromJust) > import Bio > import qualified Data.Map as Map > import Control.Monad.State #include "DV_lhs" > main > = do > [s] <- getArgs > case doParse $ map (:[]) $ lexer s of > ParseOK r f -> do > let f_ = filter_noise $ Map.toList f > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show f_) > --writeFile "full" (unlines $ map show f) > toDV (trim_graph f_ r) > ParseEOF f -> do > let f_ = filter_noise $ Map.toList f > putStrLn $ "Premature end of input:\n" > ++ unlines (map show f_) > toDV f_ > --writeFile "full" (unlines $ map show f) > ParseError ts f -> do > let f_ = filter_noise $ Map.toList f > putStrLn $ "Error: " ++ show ts > toDV f_ > --writeFile "full" (unlines $ map show f) > forest_lookup f i > = fromJust $ Map.lookup i f --- remove intergenic things, to make graph small enough for drawing -- (prefer to do this with filtering in parser...) 
> filter_noise f > = [ (i, map filter_branch bs) > | (i@(s_i,e_i,l), bs) <- f, not_igs i ] > where > igs = Map.fromList [ (i,False) | i@(_,_,G_Intergenic_noise) <- map fst f ] > not_igs i = Map.findWithDefault True i igs > filter_branch (Branch s ns) = Branch s [ n | n <- ns, not_igs n ] > trim_graph :: NodeMap -> RootNode -> NodeMap > trim_graph f r > = [ (i,n) | (i,n) <- f, Map.findWithDefault False i wanted ] > where > table = Map.fromList f > wanted = snd $ runState (follow r) Map.empty > follow :: ForestId -> State (Map.Map ForestId Bool) () > follow i = do > visited <- get > if Map.findWithDefault False i visited > then return () > else do > case Map.lookup i table of > Nothing > -> error $ "bad node: " ++ show i > Just bs > -> do > modify (\s -> Map.insert i True s) > mapM_ follow $ concatMap b_nodes bs happy-1.20.1.1/examples/glr/bio-eg/Makefile0000644000000000000000000000160107346545000016446 0ustar0000000000000000TOP=.. include ${TOP}/Makefile.defs PROG=bio-eg # filtering causes this example to fail... FILTER = --filter FILTER = .hi.o : @ dummy ${PROG} : Bio.o Main.lhs ${GHC} -cpp -fglasgow-exts -o ${PROG} --make Main.lhs BioData.hs Bio.hs : Bio.y ${HAPPY} --info --glr --ghc ${FILTER} $< Bio.o : Bio.hs BioData.hi ${GHC} -cpp -fglasgow-exts -O2 -c Bio.hs DATA_FLAGS = -funfolding-use-threshold0 -fno-strictness BioData.hi BioData.o : BioData.hs @echo "Making BioData.hs WITHOUT optimisation (for speed)" ${GHC} -cpp -fglasgow-exts ${DATA_FLAGS} -c $< run : run12 run6 : ${PROG} ./${PROG} +RTS -s -K5M -RTS `cat 1-600.dna` run12 : ${PROG} rm -f out.1200 ./${PROG} +RTS -s -K15M -RTS `cat 1-1200.dna` > out.1200 2>&1 echo Expect NINE matches, got `grep '^[(,0-9]*G_Match' out.1200 | wc -l` clean : rm -rf ${PROG} Bio.info Bio.hs BioData.hs *.o *.hi out.daVinci \ out.1200 out.600 happy-1.20.1.1/examples/glr/bio-eg/README0000644000000000000000000000133407346545000015671 0ustar0000000000000000 A more complex example - looking for patterns in DNA sequences. This example derived from undergraduate project work by Julia Fischer at the University of Durham. Some of the grammar is based on the ones developed by Siu-wai Leung, Chris Mellish, and Dave Robertson at the University of Edinburgh. (Contact Paul Callaghan for details, and see the accompanying paper.) Files 1-600.dna and 1-1200.dna contain 600 (1200) bases from the sequence for E. coli. The first sequence parses in a few minutes, the second takes a bit longer. There are issues about how to efficiently skip over "noise" in the data. This would make the parser faster. Use "make run" to parse the 1-1200.dna sequence - it should take under 15 seconds happy-1.20.1.1/examples/glr/common/0000755000000000000000000000000007346545000015136 5ustar0000000000000000happy-1.20.1.1/examples/glr/common/DV_lhs0000644000000000000000000000351207346545000016241 0ustar0000000000000000 import DaVinciTypes hiding (Edge(..) , Node(..)) import qualified DaVinciTypes (Edge(..) , Node(..)) -- toDV :: NodeMap -> IO () toDV nodes = writeFile "out.daVinci" (show $ map g2n nodes) -- show_gsymbol (HappyTok x) = show x show_gsymbol t = show t g2n (n@(s,e,x), []) = mk_rhombus id (show_gsymbol x ++ show (s,e)) [] where id = show n g2n (n@(s,e,x), [Branch _ bs]) = mk_box id (show_gsymbol x ++ show (s,e)) $ [ DaVinciTypes.R (NodeId $ show j) | j <- bs ] where id = show n g2n (n@(s,e,x), bss) = mk_circle id (show_gsymbol x ++ show (s,e)) $ [ mk_box (id ++ "." ++ show i) (show_gsymbol x ++ show (s,e)) [ DaVinciTypes.R (NodeId $ show j) | j <- js ] | (i,Branch _ js) <- zip [0..] 
bss ] where id = show n --- mk_box = mk_node box_t mk_circle = mk_node circle_t mk_plain = mk_node text_t mk_rhombus = mk_node rhombus_t mk_node :: Attribute -> String -> String -> [DaVinciTypes.Node] -> DaVinciTypes.Node mk_node a id nm ts = DaVinciTypes.N (NodeId id) (Type "") [a,text nm] $ [ (mk_edge id n) t | (n,t) <- zip [1..] ts ] mk_edge id child_no t@(DaVinciTypes.R (NodeId id2)) = DaVinciTypes.E (EdgeId eId) (Type "") [] t where eId = concat [id,":",id2,"(",show child_no,")"] mk_edge id child_no t@(DaVinciTypes.N (NodeId id2) _ _ _) = DaVinciTypes.E (EdgeId eId) (Type "") [] t where eId = concat [id,":",id2,"(",show child_no,")"] --- nodeStyle = A "_GO" box_t, circle_t, ellipse_t, rhombus_t, text_t, icon_t :: Attribute box_t = nodeStyle "box" circle_t = nodeStyle "circle" ellipse_t = nodeStyle "ellipse" rhombus_t = nodeStyle "rhombus" text_t = nodeStyle "text" icon_t = nodeStyle "icon" text :: String -> Attribute text = A "OBJECT" happy-1.20.1.1/examples/glr/common/DaVinciTypes.hs0000644000000000000000000011426607346545000020046 0ustar0000000000000000----------------------------------------------------------------------------------------- -- Haskell binding for daVinci API -- -- Original version: Sven Panne 1997/99 -- Adapted to daVinci 2.1: Tim Geisler May 1998 -- marked all extensions with '(V2.1 API)' ----------------------------------------------------------------------------------------- -- Some changes to names from daVinci API: -- foo_bar => FooBar -- baz => DVBaz in case of name collision -- foo x and foo => Foo (Maybe x) -- -- Note: There are some exceptions to the above rules (but I can't remember... ;-) module DaVinciTypes( DaVinciCmd(..), GraphCmd(..), MultiCmd(..), MenuCmd(..), FileMenuCmd(..), ViewMenuCmd(..), NavigationMenuCmd(..), AbstractionMenuCmd(..), LayoutMenuCmd(..), AppMenuCmd(..), SetCmd(..), WindowCmd(..), TclCmd(..), SpecialCmd(..), VisualCmd(..), DragAndDropCmd(..), -- (V2.1 API) DaVinciAnswer(..), Node(..), Edge(..), Attribute(..), NodeUpdate(..), EdgeUpdate(..), AttrChange(..), MixedUpdate(..), TypeChange(..), -- (V2.1 API) MenuEntry(..), IconEntry(..), VisualRule(..), -- (V2.1 API) NodeId(..), EdgeId(..), MenuId(..), MenuLabel(..), MenuMne(..), MenuAcc(..), IconId(..), Type(..), Filename(..), ContextId(..), WindowId(..), -- (V2.1 API) Orient(..), Direction(..), Btype(..), MenuMod(..) ) where --- API commands ---------------------------------------------------------- data DaVinciCmd = -- Commands of the API (top-level). Graph GraphCmd -- Graph category | Multi MultiCmd -- Multi category | Menu MenuCmd -- Menu category | AppMenu AppMenuCmd -- AppMenu category | DVSet SetCmd -- Set category | Window WindowCmd -- Window category | Tcl TclCmd -- Tcl category | Special SpecialCmd -- Special category | DVNothing -- No operation, for syncronization. 
| Visual VisualCmd -- Visual category (V2.1 API) | DragAndDrop DragAndDropCmd -- Drag and Drop category (V2.1 API) deriving Eq data GraphCmd = -- Send and update graphs New [Node] -- Send new graph | NewPlaced [Node] -- Dito, better layout | Update [NodeUpdate] [EdgeUpdate] -- Send graph updates | ChangeAttr [AttrChange] -- Change attributes | UpdateAndChangeAttr [NodeUpdate] [EdgeUpdate] [AttrChange] -- Combination of both | UpdateMixed [MixedUpdate] -- Send mixed graph updates (V2.1 API) | UpdateAndChangeAttrMixed [MixedUpdate] [AttrChange] -- Combination of both (V2.1 API) | ChangeType [TypeChange] -- Change types (V2.1 API) deriving Eq data MultiCmd = -- For multi-graph mode NewContext -- Open graph context | OpenContext ContextId -- Dito, but ID is given | SetContext ContextId -- Switch to context | SetContextWindow ContextId WindowId -- switch to context and window (V2.1 API) deriving Eq data MenuCmd = -- Call functions of menu File FileMenuCmd -- File menu category | View ViewMenuCmd -- View menu category | Navigation NavigationMenuCmd -- Navigation menu category | Abstraction AbstractionMenuCmd -- Abstraction menu category | Layout LayoutMenuCmd -- Layout menu category deriving Eq data FileMenuCmd = -- File menu functions ClearGraph -- Clear graph. | OpenGraph Filename -- Load graph from file | OpenGraphPlaced Filename -- Dito, better layout | OpenStatus Filename -- Load status from file | SaveGraph Filename -- Save graph as term | SaveStatus Filename -- Save graph as status | Print (Maybe Filename) -- Save as PostScript | Close -- Close graph window | Exit -- Exit daVinci deriving Eq data ViewMenuCmd = -- View menu functions OpenNewView -- Open additional view | OpenSurveyView -- Open survey view | FullScale -- Set scale to 100% | FitScaleToWindow -- Set scale to fit | Scale (Maybe Int) -- Set scale to Int | GraphInfo -- Open Graph Info dialog | DaVinciInfo -- Open daVinci Info dialog deriving Eq data NavigationMenuCmd = -- Navigation menu functions SelectParents [NodeId] -- Select parents of nodes | SelectSiblings [NodeId] -- Select siblings of nodes | SelectChilds [NodeId] -- Select childs of nodes | SelectChildren [NodeId] -- Select childs of nodes (V2.1 API) | Navigator (Maybe (NodeId,Direction,Bool)) -- Navigate in graph | Find (Maybe (String,Bool,Bool)) -- Find a node deriving Eq data AbstractionMenuCmd = -- Abstraction menu functions HideSubgraph [NodeId] -- Hide subgraphs of nodes | ShowSubgraph [NodeId] -- Show subgraphs of nodes | RestoreAllSubgraphs -- Show all hidden subgr | HideEdges [NodeId] -- Hide edges of nodes | ShowEdges [NodeId] -- Show edges of nodes | RestoreAllEdges -- Show all hidden edges deriving Eq data LayoutMenuCmd = -- Layout menu functions ImproveAll -- Start layout algorithm | ImproveVisible -- Dito, only visible nodes | CompactAll -- Compact graph layout | Orientation Orient -- Switch orientation deriving Eq data AppMenuCmd = -- Create menus/icons CreateMenus [MenuEntry] -- Add menus in Edit | CreateIcons [IconEntry] -- Add icons in icon-bar | ActivateMenus [MenuId] -- Enable menus | ActivateIcons [IconId] -- Enable icons | ControlFileEvents -- Get events of File menu deriving Eq data SetCmd = -- Set options LayoutAccuracy Int -- Layout algorithm params | KeepNodesAtLevels Bool -- Keep nodes at levels | FontSize Int -- Node font size | GapWidth Int -- Min. node distance | GapHeight Int -- Min. 
level distance | MultiEdgeGap Int -- Distance for multi-edges | SelfEdgeRadius Int -- Distance for self-edges | ScrollingOnSelection Bool -- Auto focusing node | AnimationSpeed Int -- Speed of animation | NoCache Bool -- Control pixmap caching. Details | RulesFirst Bool -- Should rules overlap attributes? (V2.1 API) deriving Eq data WindowCmd = -- Control windows Title String -- Set window title | ShowMessage String -- Left footer message | ShowStatus String -- Right footer message | Position Int Int -- Window origin x/y | Size Int Int -- Window width/height | Raise -- Raise window | Iconify -- Iconify window | Deiconify -- Deiconify window | Activate -- Enable interaction | Deactivate -- Disable interaction | FileBrowser Bool String String String String [Btype] Bool -- Show file browser deriving Eq data TclCmd = -- Tcl/Tk interface DVEval String -- Eval Tcl/Tk script | EvalFile Filename -- Dito, from file deriving Eq data SpecialCmd = -- Special commands SelectNodes [NodeId] -- Select specified nodes | SelectEdge EdgeId -- Select specified edge | FocusNode NodeId -- Scroll to specified node | FocusNodeAnimated NodeId -- Dito, with animation | ShowUrl String -- Display HTML-page deriving Eq data VisualCmd = -- Visual commands (V2.1 API) NewRules [VisualRule] -- Specify new rules | AddRules [VisualRule] -- Add rules or exchange existing ones deriving Eq data DragAndDropCmd = -- Drag and Drop commands (V2.1 API) DraggingOn -- Switch dragging on | DragAndDropOn -- Switch drag&drop on | DraggingOff -- Switch drag* off | NewNodeAtCoord NodeUpdate -- Insert at coordinate | NewEdgeAndNodeAtCoord NodeUpdate EdgeUpdate -- Dito, plus edge where node is the child deriving Eq --- API Answers ----------------------------------------------------------- data DaVinciAnswer = -- Answers from the API Ok -- Positive confirmer | CommunicationError String -- Negative confirmer | NodeSelectionsLabels [NodeId] -- Labels of sel. nodes | NodeDoubleClick -- Sel. node double-clicked | EdgeSelectionLabel EdgeId -- Label of sel. edge | EdgeSelectionLabels NodeId NodeId -- Dito, parent/child | EdgeDoubleClick -- Sel. edge double-clicked | MenuSelection MenuId -- ID of selected menu | IconSelection IconId -- ID of selected icon | Context ContextId -- Other context (graph) | TclAnswer String -- Answer from Tcl script | BrowserAnswer String String -- File browser result | Disconnect -- Termination request | Closed -- Context (graph) closed | Quit -- daVinci terminated | PopupSelectionNode NodeId MenuId -- Pop-up menu selected. 
(V2.1 API) | PopupSelectionEdge EdgeId MenuId -- Pop-up menu selected (V2.1 API) | CreateNode -- Dragging answer (V2.1 API) | CreateNodeAndEdge NodeId -- Parent ID of new edge (V2.1 API) | CreateEdge NodeId NodeId -- Node IDs of new edge (V2.1 API) | DropNode ContextId WindowId NodeId ContextId WindowId NodeId -- Node A dropped on B (V2.1 API) | ContextWindow ContextId WindowId -- Context ID + window ID (V2.1 API) | OpenWindow -- New window opened (V2.1 API) | CloseWindow WindowId -- Window closed (V2.1 API) deriving Eq --- Term Representation for Graphs ---------------------------------------- data Node = N NodeId Type [Attribute] [Edge] -- Node with ID/type/attr/childs | R NodeId -- Reference to a node deriving Eq data Edge = E EdgeId Type [Attribute] Node -- Edges with ID/type/attr/child deriving Eq data Attribute = A String String -- regular node/edge attributes (key/val) | M [MenuEntry] -- pop-up menu for node/edge (V2.1 API) deriving Eq --- Graph Updates --------------------------------------------------------- data NodeUpdate = -- Delete or remove nodes DeleteNode NodeId | NewNode NodeId Type [Attribute] deriving Eq data EdgeUpdate = -- Delete or remove edges DeleteEdge EdgeId | NewEdge EdgeId Type [Attribute] NodeId NodeId | NewEdgeBehind EdgeId EdgeId Type [Attribute] NodeId NodeId deriving Eq data MixedUpdate = -- Node or Edge update (V2.1) NU NodeUpdate -- wrapper needed in Haskell | EU EdgeUpdate -- wrapper needed in Haskell deriving Eq data AttrChange = -- Change attributes Node NodeId [Attribute] | Edge EdgeId [Attribute] deriving Eq data TypeChange = -- Change types (V2.1 API) NodeType NodeId Type -- Label, type | EdgeType EdgeId Type -- Label, type deriving Eq --- Application Menus and Icons ------------------------------------------- data MenuEntry = -- Create Menus MenuEntry MenuId MenuLabel | MenuEntryMne MenuId MenuLabel MenuMne MenuMod MenuAcc | SubmenuEntry MenuId MenuLabel [MenuEntry] | SubmenuEntryMne MenuId MenuLabel [MenuEntry] MenuMne | BlankMenuEntry | MenuEntryDisabled MenuId MenuLabel -- (V2.1 API) | SubmenuEntryDisabled MenuId MenuLabel [MenuEntry] -- (V2.1 API) deriving Eq data IconEntry = -- Create Icons IconEntry IconId Filename String | BlankIconEntry deriving Eq --- Visualization Rules (V2.1 API) --------------------------------------- data VisualRule = -- (V2.1 API) NR Type [Attribute] -- Rules for all nodes of given type | ER Type [Attribute] -- Rules for all edges of given type deriving Eq --- String Sorts ---------------------------------------------------------- newtype NodeId = NodeId String deriving Eq -- Unique node ID newtype EdgeId = EdgeId String deriving Eq -- Unique edge ID newtype MenuId = MenuId String deriving Eq -- Unique menu ID newtype MenuLabel = MenuLabel String deriving Eq -- Text of menu entry newtype MenuMne = MenuMne String deriving Eq -- Motif mnemonic char newtype MenuAcc = MenuAcc String deriving Eq -- Motif accelerator key newtype IconId = IconId String deriving Eq -- Unique icon ID newtype Type = Type String deriving Eq -- Arbitrary type newtype Filename = Filename String deriving Eq -- Valid Filename newtype ContextId = ContextId String deriving Eq -- Context ID newtype WindowId = WindowId String deriving Eq -- Window ID (V2.1 API) --- Basic Sorts ----------------------------------------------------------- data Orient = TopDown | BottomUp | LeftRight | RightLeft deriving Eq data Direction = Up | Down | DVLeft | DVRight deriving Eq data Btype = Bt String String String deriving Eq -- Text, pattern and title postfix data 
MenuMod = Alternate | Shift | Control | Meta | None deriving Eq -- Motif modifier key --------------------------------------------------------------------------- -- Show instances for daVinci API commands -- -- Everything would be *much* easier if daVinci allowed spaces in commands... instance Show DaVinciCmd where showsPrec _ (Graph graphCmd) = showFunc1 "graph" graphCmd showsPrec _ (Multi multiCmd) = showFunc1 "multi" multiCmd showsPrec _ (Menu menuCmd) = showFunc1 "menu" menuCmd showsPrec _ (AppMenu appMenuCmd) = showFunc1 "app_menu" appMenuCmd showsPrec _ (DVSet setCmd) = showFunc1 "set" setCmd showsPrec _ (Window windowCmd) = showFunc1 "window" windowCmd showsPrec _ (Tcl tclCmd) = showFunc1 "tcl" tclCmd showsPrec _ (Special specialCmd) = showFunc1 "special" specialCmd showsPrec _ DVNothing = showString "nothing" showsPrec _ (Visual visualCmd) = showFunc1 "visual" visualCmd showsPrec _ (DragAndDrop dragAndDropCmd) = showFunc1 "drag_and_drop" dragAndDropCmd instance Show GraphCmd where showsPrec _ (New nodes) = showFunc1 "new" nodes showsPrec _ (NewPlaced nodes) = showFunc1 "new_placed" nodes showsPrec _ (Update nUpds eUpds) = showFunc2 "update" nUpds eUpds showsPrec _ (ChangeAttr aChs) = showFunc1 "change_attr" aChs showsPrec _ (UpdateAndChangeAttr nUpds eUpds aChs) = showFunc3 "update_and_change_attr" nUpds eUpds aChs showsPrec _ (UpdateMixed mUpds) = showFunc1 "update" mUpds showsPrec _ (UpdateAndChangeAttrMixed mUpds aChs)= showFunc2 "update_and_change_attr" mUpds aChs showsPrec _ (ChangeType tChs) = showFunc1 "change_type" tChs instance Show MultiCmd where showsPrec _ NewContext = showString "new_context" showsPrec _ (OpenContext contextId) = showFunc1 "open_context" contextId showsPrec _ (SetContext contextId) = showFunc1 "set_context" contextId showsPrec _ (SetContextWindow contextId windowId)= showFunc2 "set_context" contextId windowId instance Show MenuCmd where showsPrec _ (File fCmd) = showFunc1 "file" fCmd showsPrec _ (View vCmd) = showFunc1 "view" vCmd showsPrec _ (Navigation nCmd) = showFunc1 "navigation" nCmd showsPrec _ (Abstraction aCmd) = showFunc1 "abstraction" aCmd showsPrec _ (Layout lCmd) = showFunc1 "layout" lCmd instance Show FileMenuCmd where showsPrec _ ClearGraph = showString "new" showsPrec _ (OpenGraph fname) = showFunc1 "open_graph" fname showsPrec _ (OpenGraphPlaced fname) = showFunc1 "open_graph_placed" fname showsPrec _ (OpenStatus fname) = showFunc1 "open_status" fname showsPrec _ (SaveGraph fname) = showFunc1 "save_graph" fname showsPrec _ (SaveStatus fname) = showFunc1 "save_status" fname showsPrec _ (Print Nothing) = showString "print" showsPrec _ (Print (Just fname)) = showFunc1 "print" fname showsPrec _ Close = showString "close" showsPrec _ Exit = showString "exit" instance Show ViewMenuCmd where showsPrec _ OpenNewView = showString "open_new_view" showsPrec _ OpenSurveyView = showString "open_survey_view" showsPrec _ FullScale = showString "full_scale" showsPrec _ FitScaleToWindow = showString "fit_scale_to_window" showsPrec _ (Scale Nothing) = showString "scale" showsPrec _ (Scale (Just scale)) = showFunc1 "scale" scale showsPrec _ GraphInfo = showString "graph_info" showsPrec _ DaVinciInfo = showString "daVinci_info" instance Show NavigationMenuCmd where showsPrec _ (SelectParents nodeIds) = showFunc1 "select_parents" nodeIds showsPrec _ (SelectSiblings nodeIds) = showFunc1 "select_siblings" nodeIds -- TODO: change 'childs' to 'children'. But then it's no longer V2.0.x compatible ... 
showsPrec _ (SelectChilds nodeIds) = showFunc1 "select_childs" nodeIds showsPrec _ (SelectChildren nodeIds) = showFunc1 "select_childs" nodeIds showsPrec _ (Navigator Nothing) = showString "navigator" showsPrec _ (Navigator (Just (nodeId,dir,flag))) = showFunc3 "navigator" nodeId dir flag showsPrec _ (Find Nothing) = showString "find" showsPrec _ (Find (Just (txt,cas,exact))) = showFunc3 "find" txt cas exact instance Show AbstractionMenuCmd where showsPrec _ (HideSubgraph nodeIds) = showFunc1 "hide_subgraph" nodeIds showsPrec _ (ShowSubgraph nodeIds) = showFunc1 "show_subgraph" nodeIds showsPrec _ RestoreAllSubgraphs = showString "restore_all_subgraphs" showsPrec _ (HideEdges nodeIds) = showFunc1 "hide_edges" nodeIds showsPrec _ (ShowEdges nodeIds) = showFunc1 "show_edges" nodeIds showsPrec _ RestoreAllEdges = showString "restore_all_edges" instance Show LayoutMenuCmd where showsPrec _ ImproveAll = showString "improve_all" showsPrec _ ImproveVisible = showString "improve_visible" showsPrec _ CompactAll = showString "compact_all" showsPrec _ (Orientation orient) = showFunc1 "orientation" orient instance Show AppMenuCmd where showsPrec _ (CreateMenus menuEntries) = showFunc1 "create_menus" menuEntries showsPrec _ (CreateIcons iconEntries) = showFunc1 "create_icons" iconEntries showsPrec _ (ActivateMenus menuIds) = showFunc1 "activate_menus" menuIds showsPrec _ (ActivateIcons iconIds) = showFunc1 "activate_icons" iconIds showsPrec _ ControlFileEvents = showString "control_file_events" instance Show SetCmd where showsPrec _ (LayoutAccuracy x) = showFunc1 "layout_accuracy" x showsPrec _ (KeepNodesAtLevels x) = showBoolFunc "keep_nodes_at_levels" x showsPrec _ (FontSize x) = showFunc1 "font_size" x showsPrec _ (GapWidth x) = showFunc1 "gap_width" x showsPrec _ (GapHeight x) = showFunc1 "gap_height" x showsPrec _ (MultiEdgeGap x) = showFunc1 "multi_edge_gap" x showsPrec _ (SelfEdgeRadius x) = showFunc1 "self_edge_radius" x showsPrec _ (ScrollingOnSelection x) = showBoolFunc "scrolling_on_selection" x showsPrec _ (AnimationSpeed x) = showFunc1 "animation_speed" x showsPrec _ (NoCache x) = showBoolFunc "no_cache" x showsPrec _ (RulesFirst x) = showBoolFunc "rules_first" x instance Show WindowCmd where showsPrec _ (Title str) = showFunc1 "title" str showsPrec _ (ShowMessage str) = showFunc1 "show_message" str showsPrec _ (ShowStatus str) = showFunc1 "show_status" str showsPrec _ (Position x y) = showFunc2 "position" x y showsPrec _ (Size w h) = showFunc2 "size" w h showsPrec _ Raise = showString "raise" showsPrec _ Iconify = showString "iconify" showsPrec _ Deiconify = showString "deiconify" showsPrec _ Activate = showString "activate" showsPrec _ Deactivate = showString "deactivate" showsPrec _ (FileBrowser open title btn dir file tps hid) = showFunc7 "file_browser" open title btn dir file tps hid instance Show TclCmd where showsPrec _ (DVEval str) = showFunc1 "eval" str showsPrec _ (EvalFile fname) = showFunc1 "eval_file" fname instance Show SpecialCmd where showsPrec _ (SelectNodes nodes) = showFunc1 "select_nodes" nodes showsPrec _ (SelectEdge edges) = showFunc1 "select_edges" edges showsPrec _ (FocusNode nodeIds) = showFunc1 "focus_node" nodeIds showsPrec _ (FocusNodeAnimated nodeIds) = showFunc1 "focus_node_animated" nodeIds showsPrec _ (ShowUrl url) = showFunc1 "show_url" url instance Show VisualCmd where showsPrec _ (NewRules visualRules) = showFunc1 "new_rules" visualRules showsPrec _ (AddRules visualRules) = showFunc1 "add_rules" visualRules instance Show DragAndDropCmd where showsPrec _ 
DraggingOn = showString "dragging_on" showsPrec _ DragAndDropOn = showString "drag_and_drop_on" showsPrec _ DraggingOff = showString "dragging_off" showsPrec _ (NewNodeAtCoord nUpd) = showFunc1 "new_node_at_coord" nUpd showsPrec _ (NewEdgeAndNodeAtCoord nUpd eUpd) = showFunc2 "new_edge_and_node_at_coord" nUpd eUpd --------------------------------------------------------------------------- instance Show DaVinciAnswer where showsPrec _ Ok = showString "ok" showsPrec _ (CommunicationError msg) = showFunc1 "communication_error" msg showsPrec _ (NodeSelectionsLabels nodeIds) = showFunc1 "node_selections_labels" nodeIds showsPrec _ NodeDoubleClick = showString "node_double_click" showsPrec _ (EdgeSelectionLabel edgeId) = showFunc1 "edge_selection_label" edgeId showsPrec _ (EdgeSelectionLabels parent child) = showFunc2 "edge_selection_labels" parent child showsPrec _ EdgeDoubleClick = showString "edge_double_click" showsPrec _ (MenuSelection menuId) = showFunc1 "menu_selection" menuId showsPrec _ (IconSelection iconId) = showFunc1 "icon_selection" iconId showsPrec _ (Context contextId) = showFunc1 "context" contextId showsPrec _ (TclAnswer retVal) = showFunc1 "tcl_answer" retVal showsPrec _ (BrowserAnswer file typ) = showFunc2 "browser_answer" file typ showsPrec _ Disconnect = showString "disconnect" showsPrec _ Closed = showString "closed" showsPrec _ Quit = showString "quit" showsPrec _ (PopupSelectionNode nId mId) = showFunc2 "popup_selection_node" nId mId showsPrec _ (PopupSelectionEdge eId mId) = showFunc2 "popup_selection_edge" eId mId showsPrec _ CreateNode = showString "create_node" showsPrec _ (CreateNodeAndEdge nId) = showFunc1 "create_node_and_edge" nId showsPrec _ (CreateEdge nId1 nId2) = showFunc2 "create_edge" nId1 nId2 showsPrec _ (DropNode cId1 wId1 nId1 cId2 wId2 nId2) = showFunc6 "drop_node" cId1 wId1 nId1 cId2 wId2 nId2 showsPrec _ (ContextWindow cId wId) = showFunc2 "context_window" cId wId showsPrec _ OpenWindow = showString "open_window" showsPrec _ (CloseWindow wId) = showFunc1 "close_window" wId instance Read DaVinciAnswer where readsPrec _ r = [ (Ok, s) | ("ok", s) <- lexR ] ++ [ (CommunicationError m, t) | ("communication_error", s) <- lexR , ([m], t) <- readArgs s ] ++ [ (NodeSelectionsLabels (map NodeId n), t) | ("node_selections_labels", s) <- lexR , (n, t) <- readStrs s ] ++ [ (NodeDoubleClick, s) | ("node_double_click", s) <- lexR ] ++ [ (EdgeSelectionLabel (EdgeId e), t) | ("edge_selection_label", s) <- lexR , ([e], t) <- readArgs s ] ++ [ (EdgeSelectionLabels (NodeId p) (NodeId c), t) | ("edge_selection_labels", s) <- lexR , ([p,c], t) <- readArgs s ] ++ [ (EdgeDoubleClick, s) | ("edge_double_click", s) <- lexR ] ++ [ (MenuSelection (MenuId m), t) | ("menu_selection", s) <- lexR , ([m], t) <- readArgs s ] ++ [ (IconSelection (IconId i), t) | ("icon_selection", s) <- lexR , ([i], t) <- readArgs s ] ++ [ (Context (ContextId c), t) | ("context", s) <- lexR , ([c], t) <- readArgs s ] ++ [ (TclAnswer a, t) | ("tcl_answer", s) <- lexR , ([a], t) <- readArgs s ] ++ [ (BrowserAnswer f y, t) | ("browser_answer", s) <- lexR , ([f,y], t) <- readArgs s ] ++ [ (Disconnect, s) | ("disconnect", s) <- lexR ] ++ [ (Closed, s) | ("closed", s) <- lexR ] ++ [ (Quit, s) | ("quit", s) <- lexR ] ++ [ (PopupSelectionNode (NodeId n) (MenuId m), t) | ("popup_selection_node", s) <- lexR , ([n,m], t) <- readArgs s ] ++ [ (PopupSelectionEdge (EdgeId e) (MenuId m), t) | ("popup_selection_edge", s) <- lexR , ([e,m], t) <- readArgs s ] ++ [ (CreateNode, s) | ("create_node", s) <- lexR ] ++ [ 
(CreateNodeAndEdge (NodeId n), t) | ("create_node_and_edge", s) <- lexR , ([n], t) <- readArgs s ] ++ [ (CreateEdge (NodeId n1) (NodeId n2), t) | ("create_edge", s) <- lexR , ([n1, n2], t) <- readArgs s ] ++ [ (DropNode (ContextId c1) (WindowId w1) (NodeId n1) (ContextId c2) (WindowId w2) (NodeId n2), t) | ("drop_node", s) <- lexR , ([c1,w1,n1,c2,w2,n2], t) <- readArgs s ] ++ [ (ContextWindow (ContextId c) (WindowId w), t)| ("context_window", s) <- lexR , ([c,w], t) <- readArgs s ] ++ [ (OpenWindow, s) | ("open_window", s) <- lexR ] ++ [ (CloseWindow (WindowId w), t) | ("close_window", s) <- lexR , ([w], t) <- readArgs s ] where lexR = lex r readArgs :: ReadS [String] readArgs s = [ (x:xs, v) | ("(", t) <- lex s, (x, u) <- reads t, (xs, v) <- readArgs2 u ] readArgs2 :: ReadS [String] readArgs2 s = [ ([], t) | (")",t) <- lex s ] ++ [ (x:xs, v) | (",",t) <- lex s, (x, u) <- reads t, (xs, v) <- readArgs2 u ] readStrs :: ReadS [String] readStrs = reads --------------------------------------------------------------------------- instance Show Node where showsPrec _ (N nodeId typ attrs edges) = showLabeled nodeId (showFunc3 "n" typ attrs edges) showsPrec _ (R nodeId) = showFunc1 "r" nodeId showList = showLst instance Show Edge where showsPrec _ (E edgeId typ attrs node) = showLabeled edgeId (showFunc3 "e" typ attrs node) showList = showLst instance Show Attribute where showsPrec _ (A key value) = showFunc2 "a" key value showsPrec _ (M menuEntries) = showFunc1 "m" menuEntries showList = showLst instance Show NodeUpdate where showsPrec _ (DeleteNode nodeId) = showFunc1 "delete_node" nodeId showsPrec _ (NewNode nodeId typ attrs) = showFunc3 "new_node" nodeId typ attrs showList = showLst instance Show EdgeUpdate where showsPrec _ (DeleteEdge edgeId) = showFunc1 "delete_edge" edgeId showsPrec _ (NewEdge edgeId typ attrs nodeId1 nodeId2) = showFunc5 "new_edge" edgeId typ attrs nodeId1 nodeId2 showsPrec _ (NewEdgeBehind edgeId1 edgeId2 typ attrs nodeId1 nodeId2) = showFunc6 "new_edge_behind" edgeId1 edgeId2 typ attrs nodeId1 nodeId2 showList = showLst instance Show MixedUpdate where showsPrec _ (NU nUpd) = shows nUpd showsPrec _ (EU eUpd) = shows eUpd showList = showLst instance Show AttrChange where showsPrec _ (Node nodeId attrs) = showFunc2 "node" nodeId attrs showsPrec _ (Edge edgeId attrs) = showFunc2 "edge" edgeId attrs showList = showLst instance Show TypeChange where showsPrec _ (NodeType nodeId typ) = showFunc2 "node" nodeId typ showsPrec _ (EdgeType edgeId typ) = showFunc2 "edge" edgeId typ showList = showLst --------------------------------------------------------------------------- instance Show MenuEntry where showsPrec _ (MenuEntry menuId menuLabel) = showFunc2 "menu_entry" menuId menuLabel showsPrec _ (MenuEntryMne menuId menuLabel menuMne menuMod menuAcc) = showFunc5 "menu_entry_mne" menuId menuLabel menuMne menuMod menuAcc showsPrec _ (SubmenuEntry menuId menuLabel menuEntries) = showFunc3 "submenu_entry" menuId menuLabel menuEntries showsPrec _ (SubmenuEntryMne menuId menuLabel menuEntries menuMne) = showFunc4 "submenu_entry_mne" menuId menuLabel menuEntries menuMne showsPrec _ BlankMenuEntry = showString "blank" showsPrec _ (MenuEntryDisabled menuId menuLabel) = showFunc2 "menu_entry_disabled" menuId menuLabel showsPrec _ (SubmenuEntryDisabled menuId menuLabel menuEntries) = showFunc3 "submenu_entry_disabled" menuId menuLabel menuEntries instance Show IconEntry where showsPrec _ (IconEntry iconId filename descr) = showFunc3 "icon_entry" iconId filename descr showsPrec _ BlankIconEntry 
= showString "blank" --------------------------------------------------------------------------- instance Show VisualRule where showsPrec _ (NR typ attrs) = showFunc2 "nr" typ attrs showsPrec _ (ER typ attrs) = showFunc2 "er" typ attrs showList = showLst --------------------------------------------------------------------------- instance Show NodeId where showsPrec _ (NodeId s) = shows s showList = showLst instance Show EdgeId where showsPrec _ (EdgeId s) = shows s showList = showLst instance Show MenuId where showsPrec _ (MenuId s) = shows s showList = showLst instance Show MenuLabel where showsPrec _ (MenuLabel s) = shows s showList = showLst instance Show MenuMne where showsPrec _ (MenuMne s) = shows s showList = showLst instance Show MenuAcc where showsPrec _ (MenuAcc s) = shows s showList = showLst instance Show IconId where showsPrec _ (IconId s) = shows s showList = showLst instance Show Type where showsPrec _ (Type s) = shows s showList = showLst instance Show Filename where showsPrec _ (Filename s) = shows s showList = showLst instance Show ContextId where showsPrec _ (ContextId s) = shows s showList = showLst instance Show WindowId where showsPrec _ (WindowId s) = shows s showList = showLst --------------------------------------------------------------------------- instance Show Orient where showsPrec _ TopDown = showString "top_down" showsPrec _ BottomUp = showString "bottom_up" showsPrec _ LeftRight = showString "left_right" showsPrec _ RightLeft = showString "right_left" instance Show Direction where showsPrec _ Up = showString "up" showsPrec _ Down = showString "down" showsPrec _ DVLeft = showString "left" showsPrec _ DVRight = showString "right" instance Show Btype where showsPrec _ (Bt txt pat post) = showFunc3 "bt" txt pat post instance Show MenuMod where showsPrec _ Alternate = showString "alt" showsPrec _ Shift = showString "shift" showsPrec _ Control = showString "control" showsPrec _ Meta = showString "meta" showsPrec _ None = showString "none" --------------------------------------------------------------------------- showFunc1 :: Show a => String -> a -> ShowS showFunc1 funcName arg1 = showString funcName . showParen True (shows arg1) showFunc2 :: (Show a,Show b) => String -> a -> b -> ShowS showFunc2 funcName arg1 arg2 = showString funcName . showParen True (shows arg1 . showChar ',' . shows arg2) showFunc3 :: (Show a,Show b,Show c) => String -> a -> b -> c -> ShowS showFunc3 funcName arg1 arg2 arg3 = showString funcName . showParen True (shows arg1 . showChar ',' . shows arg2 . showChar ',' . shows arg3) showFunc4 :: (Show a,Show b,Show c,Show d) => String -> a -> b -> c -> d -> ShowS showFunc4 funcName arg1 arg2 arg3 arg4 = showString funcName . showParen True (shows arg1 . showChar ',' . shows arg2 . showChar ',' . shows arg3 . showChar ',' . shows arg4) showFunc5 :: (Show a,Show b,Show c,Show d,Show e) => String -> a -> b -> c -> d -> e -> ShowS showFunc5 funcName arg1 arg2 arg3 arg4 arg5 = showString funcName . showParen True (shows arg1 . showChar ',' . shows arg2 . showChar ',' . shows arg3 . showChar ',' . shows arg4 . showChar ',' . shows arg5) showFunc6 :: (Show a,Show b,Show c,Show d,Show e,Show f) => String -> a -> b -> c -> d -> e -> f -> ShowS showFunc6 funcName arg1 arg2 arg3 arg4 arg5 arg6 = showString funcName . showParen True (shows arg1 . showChar ',' . shows arg2 . showChar ',' . shows arg3 . showChar ',' . shows arg4 . showChar ',' . shows arg5 . showChar ',' . 
shows arg6) showFunc7 :: (Show a,Show b,Show c,Show d,Show e,Show f,Show g) => String -> a -> b -> c -> d -> e -> f -> g -> ShowS showFunc7 funcName arg1 arg2 arg3 arg4 arg5 arg6 arg7 = showString funcName . showParen True (shows arg1 . showChar ',' . shows arg2 . showChar ',' . shows arg3 . showChar ',' . shows arg4 . showChar ',' . shows arg5 . showChar ',' . shows arg6 . showChar ',' . shows arg7) showLabeled :: Show a => a -> ShowS -> ShowS showLabeled iD arg = showChar 'l' . showParen True (shows iD . showChar ',' . arg) showLst :: Show a => [a] -> ShowS showLst [] = showString "[]" showLst (x:xs) = showChar '[' . shows x . showl xs where showl [] = showChar ']' showl (y:ys) = showChar ',' . shows y . showl ys showBoolFunc :: String -> Bool -> ShowS showBoolFunc funcName flag = showString funcName . showParen True (showString (if flag then "true" else "false")) happy-1.20.1.1/examples/glr/expr-eval/0000755000000000000000000000000007346545000015551 5ustar0000000000000000happy-1.20.1.1/examples/glr/expr-eval/Expr.y0000644000000000000000000000121707346545000016662 0ustar0000000000000000{ -- only list imports here import Data.Char } %tokentype { Token } %lexer { lexer } { TokenEOF } %token '*' { Sym '*' } '+' { Sym '+' } '-' { Sym '-' } '(' { Sym '(' } ')' { Sym ')' } i { AnInt $$ } %% E :: {Int} : E '+' E { $1 + $3 } | E '*' E { $1 * $3 } | E '-' E { $1 - $3 } | '(' E ')' { $2 } | i { $1 } { data Token = TokenEOF | Sym Char | AnInt Int deriving (Show,Eq, Ord) lexer :: String -> [Token] lexer [] = [] lexer (' ':cs) = lexer cs lexer (c:cs) | c `elem` "+*-()" = Sym c : lexer cs lexer (c:cs) | isDigit c = let (yes,no) = span isDigit cs in AnInt (read $ c:yes) : lexer no } happy-1.20.1.1/examples/glr/expr-eval/Hugs.lhs0000644000000000000000000000116707346545000017174 0ustar0000000000000000> module Main where > import System(getArgs) > import Data.Maybe(fromJust) > import FiniteMap(fmToList,lookupFM) > import Expr > main > = do > [s] <- getArgs > test s > test s > = do > case doParse $ map (:[]) $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ fmToList f) > putStrLn $ show (decode (forest_lookup f) r ::[Int]) > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ fmToList f) > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > forest_lookup f i > = fromJust $ lookupFM f i happy-1.20.1.1/examples/glr/expr-eval/Main.lhs0000644000000000000000000000170607346545000017151 0ustar0000000000000000> module Main where > import System.Environment(getArgs) > import Data.Maybe(fromJust) > import qualified Data.Map as Map > import Expr #include "DV_lhs" This requires CPP / preprocessing; use Hugs.lhs for tests with Hugs > main > = do > (s:o) <- getArgs > let x = concat o > case doParse $ map (:[]) $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ (if 'f' `elem` x then unlines (map show $ Map.toList f) else "") > ++ (if 'r' `elem` x then unlines (map show (decode (forest_lookup f) r ::[Int])) else "") > if 'g' `elem` x then toDV (Map.toList f) else return () > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > toDV $ Map.toList f > forest_lookup f i > = fromJust $ Map.lookup i f happy-1.20.1.1/examples/glr/expr-eval/Makefile0000644000000000000000000000103507346545000017210 0ustar0000000000000000TOP=.. 
include ${TOP}/Makefile.defs OPT = -O DECODE = --decode expr : Expr.hs Main.lhs # might want to run happy with --ghc ${GHC} -cpp -fglasgow-exts -o expr --make Main.lhs run : expr ./expr "1+2*4-3" runn : expr ./expr +RTS -s -RTS `perl -e 'print join ("+", (1 .. ${NUM}));'` | tee out-${NUM} cat expr.stat >> out-${NUM} eof : expr echo testing premature eof ./expr "1+2*" err : expr echo testing syntax error ./expr "1+2*2++3" test : run eof err clean : rm -rf expr Expr.info Expr.hs ExprData.hs *.o *.hi out.daVinci happy-1.20.1.1/examples/glr/expr-eval/README0000644000000000000000000000042307346545000016430 0ustar0000000000000000 Example of arithmetic expression parsing, with decoding of semantic values (ie it gives a list of possible results of computation). "make run" to run the test case. For Hugs, load up Hugs.lhs - it doesn't produce graphs, and has easy entry point "test :: String -> IO ()" happy-1.20.1.1/examples/glr/expr-monad/0000755000000000000000000000000007346545000015720 5ustar0000000000000000happy-1.20.1.1/examples/glr/expr-monad/Expr.y0000644000000000000000000000136407346545000017034 0ustar0000000000000000{ -- only list imports here import Data.Char } %tokentype { Token } %monad { IO } { (>>=) } { return } %lexer { lexer } { TokenEOF } %token '*' { Sym '*' } '+' { Sym '+' } '-' { Sym '-' } '(' { Sym '(' } ')' { Sym ')' } i { AnInt $$ } %% E :: {Int} : E '+' E {% {-print ($1,$3) >>-} if odd $3 then fail "odd num" else return ($1 + $3) } | E '*' E { $1 * $3 } | E '-' E { $1 - $3 } | '(' E ')' { $2 } | i { $1 } { data Token = TokenEOF | Sym Char | AnInt Int deriving (Show,Eq, Ord) lexer :: String -> [Token] lexer [] = [] lexer (' ':cs) = lexer cs lexer (c:cs) | c `elem` "+*-()" = Sym c : lexer cs lexer (c:cs) | isDigit c = let (yes,no) = span isDigit cs in AnInt (read $ c:yes) : lexer no } happy-1.20.1.1/examples/glr/expr-monad/Hugs.lhs0000644000000000000000000000125707346545000017343 0ustar0000000000000000> module Main where > import System(getArgs) > import Data.Maybe(fromJust) > import FiniteMap(fmToList,lookupFM) > import Expr > main > = do > [s] <- getArgs > test s > test s > = do > case doParse $ map (:[]) $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ fmToList f) > let ms = decode (forest_lookup f) r ::[IO Int] > mapM_ (\ma -> catch ma (\_ -> return 0) >>= print) ms > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ fmToList f) > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > forest_lookup f i > = fromJust $ lookupFM f i happy-1.20.1.1/examples/glr/expr-monad/Main.lhs0000644000000000000000000000157607346545000017325 0ustar0000000000000000> module Main where > import System.IO.Error(catchIOError) > import System.Environment(getArgs) > import Data.Maybe(fromJust) > import qualified Data.Map as Map > import Expr #include "DV_lhs" This requires CPP / preprocessing; use Hugs.lhs for tests with Hugs > main > = do > [s] <- getArgs > case doParse $ map (:[]) $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ Map.toList f) > let ms = decode (forest_lookup f) r ::[IO Int] > mapM_ (\ma -> catchIOError ma (\_ -> return 0) >>= print) ms > toDV $ Map.toList f > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > toDV $ Map.toList f > forest_lookup f i > = fromJust $ Map.lookup i f
happy-1.20.1.1/examples/glr/expr-monad/Makefile0000644000000000000000000000056207346545000017363 0ustar0000000000000000TOP=.. include ${TOP}/Makefile.defs DECODE = --decode expr : Expr.hs Main.lhs ${GHC} -cpp -fglasgow-exts -o expr --make Main.lhs run : expr ./expr "1+2*4-3" eof : expr echo testing premature eof ./expr "1+2*" err : expr echo testing syntax error ./expr "1+2*2++3" test : run eof err clean : rm -rf expr Expr.info Expr.hs ExprData.hs *.o *.hi out.daVinci happy-1.20.1.1/examples/glr/expr-monad/README0000644000000000000000000000063007346545000016577 0ustar0000000000000000 Example of arithmetic expression parsing, with decoding of semantic values (ie it gives a list of possible results of computation). BUT: it runs the computations under a monad. In this example, certain cases of addition fail, which are caught and shown as zeros. "make run" to run the test case. For Hugs, load up Hugs.lhs - it doesn't produce graphs, and has easy entry point "test :: String -> IO ()" happy-1.20.1.1/examples/glr/expr-tree/0000755000000000000000000000000007346545000015561 5ustar0000000000000000happy-1.20.1.1/examples/glr/expr-tree/Expr.y0000644000000000000000000000131507346545000016671 0ustar0000000000000000{ -- only list imports here import Data.Char import Tree } %tokentype { Token } %lexer { lexer } { TokenEOF } %token '*' { Sym '*' } '+' { Sym '+' } '-' { Sym '-' } '(' { Sym '(' } ')' { Sym ')' } i { AnInt $$ } %% E :: {Tree ForestId Int} : E '+' E { Plus $1 $3 } | E '*' E { Times $1 $3 } | E '-' E { Minus $1 $3 } | '(' E ')' { Pars $2 } | i { Const $1 } { data Token = TokenEOF | Sym Char | AnInt {getInt :: Int} deriving (Show,Eq, Ord) lexer :: String -> [Token] lexer [] = [] lexer (' ':cs) = lexer cs lexer (c:cs) | c `elem` "+*-()" = Sym c : lexer cs lexer (c:cs) | isDigit c = let (yes,no) = span isDigit cs in AnInt (read $ c:yes) : lexer no } happy-1.20.1.1/examples/glr/expr-tree/Hugs.lhs0000644000000000000000000000107107346545000017176 0ustar0000000000000000> module Main where > import System(getArgs) > import Data.Maybe(fromJust) > import FiniteMap(fmToList,lookupFM) > import Expr > main > = do > [s] <- getArgs > test s > test s > = do > case doParse $ map (:[]) $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ fmToList f) > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ fmToList f) > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > forest_lookup f i > = fromJust $ lookupFM f i happy-1.20.1.1/examples/glr/expr-tree/Main.lhs0000644000000000000000000000151407346545000017156 0ustar0000000000000000> module Main where > import System.Environment(getArgs) > import Data.Maybe(fromJust) > import qualified Data.Map as Map > import Expr #include "DV_lhs" This requires CPP / preprocessing; use Hugs.lhs for tests with Hugs > main > = do > (s:o) <- getArgs > let x = concat o > case doParse $ map (:[]) $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ (if 'f' `elem` x then unlines (map show $ Map.toList f) else "") > if 'g' `elem` x then toDV (Map.toList f) else return () > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > toDV $ Map.toList f > forest_lookup f i > = fromJust $ Map.lookup i f happy-1.20.1.1/examples/glr/expr-tree/Makefile0000644000000000000000000000101507346545000017216 0ustar0000000000000000TOP=..
include ${TOP}/Makefile.defs OPT = -O2 expr : Expr.hs Main.lhs # might want to run happy with --ghc ${GHC} -cpp -fglasgow-exts -o expr --make Main.lhs run : expr ./expr "1+2*4-3" runn : expr ./expr +RTS -s -RTS `perl -e 'print join ("+", (1 .. ${NUM}));'` | tee out-${NUM} cat expr.stat >> out-${NUM} eof : expr echo testing premature eof ./expr "1+2*" err : expr echo testing syntax error ./expr "1+2*2++3" test : run eof err clean : rm -rf expr Expr.info Expr.hs ExprData.hs *.o *.hi out.daVinci happy-1.20.1.1/examples/glr/expr-tree/README0000644000000000000000000000056307346545000016445 0ustar0000000000000000 Example of arithmetic expression parsing, but producing a labelled forest. Note use of polymorphic type in the labels. See the code for more discussion. "make run" to run the test case. For Hugs, load up Hugs.lhs - it is a simplified version of Main, with entry point "test :: String -> IO ()" NOTE: you need the -98 flag on Hugs, owing to non-standard class use happy-1.20.1.1/examples/glr/expr-tree/Tree.lhs0000644000000000000000000000106407346545000017171 0ustar0000000000000000> module Tree where > data Tree a b > = Plus a a > | Times a a > | Minus a a > | Pars a > | Const b > deriving (Show) Note: + we need a construct for the location of parentheses + sometimes it is useful to keep this information anyway -- eg ghc's implementation of customisable prec & assoc. + I've left Trees polymorphic in the "branch" type - this supports labelling the forest with Int-based trees then switching to Tree-based trees later + But this might require some non-Haskell-98 flags for the related class instances. happy-1.20.1.1/examples/glr/hidden-leftrec/0000755000000000000000000000000007346545000016523 5ustar0000000000000000happy-1.20.1.1/examples/glr/hidden-leftrec/Expr.y0000644000000000000000000000101407346545000017631 0ustar0000000000000000{ -- only list imports here import Data.Char } %tokentype { Token } %lexer { lexer } { TokenEOF } %token '+' { Sym '+' } i { AnInt $$ } %% R : Q {} Q : B Q i {} | S {} S : A S i {} | '+' {} | Q i {} B : {} A : {} { data Token = TokenEOF | Sym Char | AnInt Int deriving (Show,Eq, Ord) lexer :: String -> [Token] lexer [] = [] lexer (' ':cs) = lexer cs lexer (c:cs) | c `elem` "+*-()" = Sym c : lexer cs lexer (c:cs) | isDigit c = let (yes,no) = span isDigit cs in AnInt (read $ c:yes) : lexer no } happy-1.20.1.1/examples/glr/hidden-leftrec/Hugs.lhs0000644000000000000000000000107107346545000020140 0ustar0000000000000000> module Main where > import System(getArgs) > import Data.Maybe(fromJust) > import FiniteMap(fmToList,lookupFM) > import Expr > main > = do > [s] <- getArgs > test s > test s > = do > case doParse $ map (:[]) $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ fmToList f) > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ fmToList f) > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > forest_lookup f i > = fromJust $ lookupFM f i happy-1.20.1.1/examples/glr/hidden-leftrec/Main.lhs0000644000000000000000000000133207346545000020116 0ustar0000000000000000> module Main where > import System.Environment(getArgs) > import Data.Maybe(fromJust) > import qualified Data.Map as Map > import Expr #include "DV_lhs" This requires CPP / preprocessing; use Hugs.lhs for tests with Hugs > main > = do > [s] <- getArgs > case doParse $ map (:[]) $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseEOF f -> do > 
putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > toDV $ Map.toList f > forest_lookup f i > = fromJust $ Map.lookup f i happy-1.20.1.1/examples/glr/hidden-leftrec/Makefile0000644000000000000000000000053607346545000020167 0ustar0000000000000000TOP=.. include ${TOP}/Makefile.defs expr : Expr.hs Main.lhs ${GHC} -cpp -fglasgow-exts -o expr --make Main.lhs run : expr ./expr "+ 1 1 1 1 1 1 " eof : expr echo testing premature eof ./expr "" err : expr echo testing syntax error ./expr "+ 1 +" test : run eof err clean : rm -rf expr Expr.info Expr.hs ExprData.hs *.o *.hi out.daVinci happy-1.20.1.1/examples/glr/hidden-leftrec/README0000644000000000000000000000141607346545000017405 0ustar0000000000000000 Example of hidden left recursion The key point is that it has rules of form (X -> A X z), where A may match the empty string. The original GLR algorithm will loop on such productions, since the reduction (A -> empty) is always possible. The grammar is based on the one in Rekers[1], pointed out to me by Joost Visser. Q -> A Q i | + A -> I have made it a bit more complex, adding a second layer of hidden recursion and allowing jumps from the second layer to the first. --- "make run" to run the test case. For Hugs, load up Hugs.lhs - it doesn't produce graphs, and has easy entry point "test :: String -> IO () Don't forget to look at the graphs! --- [1] J. Rekers, "Parser Generation for Interactive Environments", PhD thesis, University of Amsterdam 1992. happy-1.20.1.1/examples/glr/highly-ambiguous/0000755000000000000000000000000007346545000017123 5ustar0000000000000000happy-1.20.1.1/examples/glr/highly-ambiguous/Expr.y0000644000000000000000000000135107346545000020233 0ustar0000000000000000{ -- only list imports here import Data.Char } %tokentype { Token } %lexer { lexer } { TokenEOF } %token 'b' { Sym _ } %% -- grammar taken from -- "Generalised LR Parsing in Haskell" -- Joao Fernandes, Joao Saraiva, and Joost Visser -- Universidade do Minho, Braga, Portugal -- submitted to AFP'04 summer school -- (Original source of grammar not identified by them) S : T {} T : A 'b' {} | T T T {} A : T 'b' A A A {} | T T 'b' {} | {} { data Token = TokenEOF | Sym Char | AnInt Int deriving (Show,Eq, Ord) lexer :: String -> [Token] lexer [] = [] lexer (' ':cs) = lexer cs lexer (c:cs) | c `elem` "+*-()" = Sym c : lexer cs lexer (c:cs) | isDigit c = let (yes,no) = span isDigit cs in AnInt (read $ c:yes) : lexer no } happy-1.20.1.1/examples/glr/highly-ambiguous/Hugs.lhs0000644000000000000000000000112607346545000020541 0ustar0000000000000000> module Main where > import System(getArgs) > import Data.Maybe(fromJust) > import FiniteMap(fmToList,lookupFM) > import Expr > main > = do > [s] <- getArgs > test (read s :: Int) > test n > = do > case doParse $ map (:[]) $ lexer $ replicate n '+' of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ fmToList f) > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ fmToList f) > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > forest_lookup f i > = fromJust $ lookupFM f i happy-1.20.1.1/examples/glr/highly-ambiguous/Main.lhs0000644000000000000000000000136107346545000020520 0ustar0000000000000000> module Main where > import System.Environment(getArgs) > import Data.Maybe(fromJust) > import qualified Data.Map as Map > import Expr #include "DV_lhs" This requires CPP / preprocessing; use Hugs.lhs for tests with Hugs 
> main > = do > [s] <- getArgs > case doParse $ map (:[]) $ lexer $ replicate (read s) '+' of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > toDV $ Map.toList f > forest_lookup f i > = fromJust $ Map.lookup i f happy-1.20.1.1/examples/glr/highly-ambiguous/Makefile0000644000000000000000000000053107346545000020562 0ustar0000000000000000TOP=.. include ${TOP}/Makefile.defs expr : Expr.hs Main.lhs ${GHC} -cpp -fglasgow-exts -o expr --make Main.lhs NUM=20 run : expr ./expr +RTS -s -RTS ${NUM} | grep ^Ok run30 : make run NUM=30 test : run eof err clean : rm -rf expr Expr.info Expr.hs ExprData.hs *.o *.hi out.daVinci tar : tar chzf aj2.tgz Expr*hs Expr*y Main*hs D*hs happy-1.20.1.1/examples/glr/highly-ambiguous/README0000644000000000000000000000141507346545000020004 0ustar0000000000000000 Example of a highly ambiguous grammar It is a grammar taken from [1], although it is an example from the literature (the draft paper didn't mention which source). There is an explosion of possibilities because many parse stacks need to be kept active. Inputs of sizes above 25 will get very expensive to parse, with the current parser driver; but this seems no worse (if not better) than other implementations that produce a packed forest. --- "make run" to run the test case. For Hugs, load up Hugs.lhs - it doesn't produce graphs, and has easy entry point "test :: String -> IO ()" --- [1] "Generalised LR Parsing in Haskell" Joao Fernandes, Joao Saraiva, and Joost Visser Universidade do Minho, Braga, Portugal submitted to AFP'04 summer school happy-1.20.1.1/examples/glr/nlp/0000755000000000000000000000000007346545000014437 5ustar0000000000000000happy-1.20.1.1/examples/glr/nlp/English.y0000644000000000000000000000226707346545000016231 0ustar0000000000000000{ -- only list imports here import Data.Char } %tokentype { Token } %lexer { lexer } { TokenEOF } %token det { Det $$ } prep { Prep $$ } noun { Noun $$ } transvb { Verb Trans $$ } intransvb { Verb Intrans $$ } %% S : NP VP {} NP : det noun {} | NP PP {} PP : prep NP {} VP : transvb NP {} | intransvb {} | VP PP {} { data Token = TokenEOF | Noun String | Verb Arity String | Prep String | Det String deriving (Show,Eq,Ord) data Arity = Trans | Intrans deriving (Show,Eq,Ord) lexer :: String -> [[Token]] lexer = map lex_word .
words -- simple lexicon -- (no claims to accuracy) lex_word w@"the" = [Det w] lex_word w@"a" = [Det w] lex_word w@"some" = [Det w] lex_word w@"in" = [Prep w] lex_word w@"with" = [Prep w] lex_word w@"park" = [Verb Trans w, Noun w] lex_word w@"man" = [Verb Trans w, Noun w] lex_word w@"saw" = [Verb Trans w, Verb Intrans w, Noun w] lex_word w@"run" = [Verb Trans w, Verb Intrans w, Noun w] lex_word w@"race" = [Verb Trans w, Verb Intrans w, Noun w] lex_word w@"telescope" = [Verb Trans w, Verb Intrans w, Noun w] lex_word w = error $ "Not known: " ++ show w } happy-1.20.1.1/examples/glr/nlp/Hugs.lhs0000644000000000000000000000106007346545000016052 0ustar0000000000000000> module Main where > import System(getArgs) > import Data.Maybe(fromJust) > import FiniteMap(fmToList,lookupFM) > import English > main > = do > [s] <- getArgs > test s > test s > = do > case doParse $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ fmToList f) > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ fmToList f) > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > forest_lookup f i > = fromJust $ lookupFM f i happy-1.20.1.1/examples/glr/nlp/Main.lhs0000644000000000000000000000122407346545000016032 0ustar0000000000000000> module Main where > import System.Environment(getArgs) > import Data.Maybe(fromJust) > import qualified Data.Map as Map > import English #include "DV_lhs" This requires CPP / preprocessing; use Hugs.lhs for tests with Hugs > main > = do > [s] <- getArgs > case doParse $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > toDV $ Map.toList f happy-1.20.1.1/examples/glr/nlp/Makefile0000644000000000000000000000067007346545000016102 0ustar0000000000000000TOP=.. include ${TOP}/Makefile.defs english : English.hs Main.lhs ${GHC} -cpp -fglasgow-exts -o english --make Main.lhs run : english ./english "the man saw the race with a telescope" eof : english echo testing premature eof ./english "the man saw a" err : english echo testing syntax error ./english "the the man saw race" test : run eof err clean : rm -rf english English.info English.hs EnglishData.hs *.o *.hi out.daVinci happy-1.20.1.1/examples/glr/nlp/README0000644000000000000000000000105307346545000015316 0ustar0000000000000000 Obligatory NL ambiguity example. The grammar is small and simple, but exhibits prepositional phrase attachment ambiguity. Example: "the man saw the race with a telescope" Can be bracketed as the following (a) "the man saw (the race with a telescope)" (b) "(the man saw the race) with a telescope" Note: the "lexicon" contains some ambiguous words too - see if you can extend the grammar so this comes into play. "make run" to run the test case. 
For Hugs, load up Hugs.lhs - it is a simplified version, with entry point "test :: String -> IO ()" happy-1.20.1.1/examples/glr/packing/0000755000000000000000000000000007346545000015262 5ustar0000000000000000happy-1.20.1.1/examples/glr/packing/Expr.y0000644000000000000000000000057007346545000016374 0ustar0000000000000000{ -- only list imports here import Data.Char } %tokentype { Token } %lexer { lexer } { TokenEOF } %token i { Thing } %% S : A S {} | {} A : B {} B : C {} C : D {} | E {} D : i {} E : i F {} F : {} { data Token = TokenEOF | Thing deriving (Show,Eq, Ord) lexer :: String -> [Token] lexer [] = [] lexer (' ':cs) = lexer cs lexer (c:cs) = Thing : lexer cs } happy-1.20.1.1/examples/glr/packing/Hugs.lhs0000644000000000000000000000107107346545000016677 0ustar0000000000000000> module Main where > import System(getArgs) > import Data.Maybe(fromJust) > import FiniteMap(fmToList,lookupFM) > import Expr > main > = do > [s] <- getArgs > test s > test s > = do > case doParse $ map (:[]) $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ fmToList f) > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ fmToList f) > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > forest_lookup f i > = fromJust $ lookupFM f i happy-1.20.1.1/examples/glr/packing/Main.lhs0000644000000000000000000000133207346545000016655 0ustar0000000000000000> module Main where > import System.Environment(getArgs) > import Data.Maybe(fromJust) > import qualified Data.Map as Map > import Expr #include "DV_lhs" This requires CPP / preprocessing; use Hugs.lhs for tests with Hugs > main > = do > [s] <- getArgs > case doParse $ map (:[]) $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > toDV $ Map.toList f > forest_lookup f i > = fromJust $ Map.lookup i f happy-1.20.1.1/examples/glr/packing/Makefile0000644000000000000000000000064407346545000016726 0ustar0000000000000000TOP = .. include $(TOP)/Makefile.defs FILTER = --filter FILTER = .y.hs : ${HAPPY} -i -l $*.y ${FILTER} expr : Expr.hs Main.lhs ${GHC} -cpp -fglasgow-exts -o expr --make Main.lhs run : expr ./expr "+ 1 1 1 1 1 1 " eof : expr echo testing premature eof ./expr "" err : expr echo testing syntax error ./expr "+ 1 +" test : run eof err clean : rm -rf expr Expr.info Expr.hs ExprData.hs *.o *.hi out.daVinci happy-1.20.1.1/examples/glr/packing/README0000644000000000000000000000053407346545000016144 0ustar0000000000000000 Test case for packing Grammar allows different (asymmetric) routes for category C, which may get packed at different times --- "make run" to run the test case. For Hugs, load up Hugs.lhs - it doesn't produce graphs, and has easy entry point "test :: String -> IO ()" Correct behaviour is packing of ambiguity for all C nodes (for D and E). 
happy-1.20.1.1/examples/igloo/0000755000000000000000000000000007346545000014173 5ustar0000000000000000happy-1.20.1.1/examples/igloo/Foo.hs0000644000000000000000000000037407346545000015256 0ustar0000000000000000 module Main (main) where import Parser (parse) import System.IO (hPutStrLn, stderr) main :: IO () main = do x <- getContents case parse x of Left e -> hPutStrLn stderr $ "Failed with: " ++ e Right t -> print t happy-1.20.1.1/examples/igloo/Lexer.x0000644000000000000000000000173607346545000015452 0ustar0000000000000000 { module Lexer (lex_tok) where import Control.Monad.State (StateT, get) import ParserM (ParserM (..), mkT, Token(..), St, start_code, StartCode, Action, set_start_code, show_pos, position, input, AlexInput, alexGetByte, alexInputPrevChar) } words :- <0> $white+ ; <0> fork { mkT TFork } <0> leaf { mkT TLeaf } { get_tok :: AlexInput -> StateT St (Either String) (Token, AlexInput) get_tok = \i -> do st <- get case alexScan i (start_code st) of AlexEOF -> return (TEOF, i) AlexError _ -> fail $ "Lexical error at " ++ show_pos (position i) AlexSkip i' _ -> get_tok i' AlexToken i' l a -> a (i', take l (input i)) begin :: StartCode -> Action begin sc (i, _) = do set_start_code sc get_tok i lex_tok :: (Token -> ParserM a) -> ParserM a lex_tok cont = ParserM $ \i -> do (tok, iz) <- get_tok i case cont tok of ParserM x -> x iz } happy-1.20.1.1/examples/igloo/Makefile0000644000000000000000000000034407346545000015634 0ustar0000000000000000 all: alex Lexer.x happy Parser.y ghc --make Foo -o foo test: echo fork leaf leaf | ./foo -echo fork leaf leafqleaf | ./foo -echo leaf leaf leaf leaf leaf | ./foo @echo ok clean: rm -f *.o *.hi Parser.hs Lexer.hs foo happy-1.20.1.1/examples/igloo/Parser.y0000644000000000000000000000072407346545000015624 0ustar0000000000000000 { module Parser (parse) where import Lexer (lex_tok) import ParserM (Token(..), Tree(..), ParserM, run_parser, get_pos, show_pos, happyError) } %name parsex tree %tokentype { Token } %monad { ParserM } %lexer { lex_tok } { TEOF } %token fork { TFork } leaf { TLeaf } %% tree :: { Tree } tree : leaf { Leaf } | fork tree tree { Fork $2 $3 } { parse :: String -> Either String Tree parse = run_parser parsex } happy-1.20.1.1/examples/igloo/ParserM.hs0000644000000000000000000000566407346545000016113 0ustar0000000000000000 module ParserM ( -- Parser Monad ParserM(..), AlexInput, run_parser, -- Parser state St, StartCode, start_code, set_start_code, -- Tokens Token(..), -- Tree Tree(..), -- Actions Action, andBegin, mkT, -- Positions get_pos, show_pos, -- Input alexGetByte, alexInputPrevChar, input, position, -- Other happyError ) where import Control.Applicative (Applicative(..)) import Control.Monad (ap, liftM) import Control.Monad.Except (throwError) import Control.Monad.State (StateT, evalStateT, get, put) import Control.Monad.Trans (lift) import Data.Char (ord) import Data.Word (Word8) -- Parser Monad newtype ParserM a = ParserM (AlexInput -> StateT St (Either String) (AlexInput, a)) instance Functor ParserM where fmap = liftM instance Applicative ParserM where pure a = ParserM $ \i -> return (i, a) (<*>) = ap instance Monad ParserM where return = pure ParserM m >>= k = ParserM $ \i -> do (i', x) <- m i case k x of ParserM y -> y i' fail err = ParserM $ \_ -> fail err run_parser :: ParserM a -> (String -> Either String a) run_parser (ParserM p) = \s -> case evalStateT (p (AlexInput init_pos s)) init_state of Left es -> throwError es Right (_, x) -> return x -- Parser state data St = St {start_code :: !StartCode} type StartCode = 
Int init_state :: St init_state = St 0 -- Tokens data Token = TEOF | TFork | TLeaf -- Tree data Tree = Leaf | Fork Tree Tree deriving Show -- Actions type Action = (AlexInput, String) -> StateT St (Either String) (Token, AlexInput) set_start_code :: StartCode -> StateT St (Either String) () set_start_code sc = do st <- get put $ st { start_code = sc } andBegin :: Action -> StartCode -> Action (act `andBegin` sc) x = do set_start_code sc act x mkT :: Token -> Action mkT t (p,_) = lift $ return (t, p) -- Positions data Pos = Pos !Int{- Line -} !Int{- Column -} get_pos :: ParserM Pos get_pos = ParserM $ \i@(AlexInput p _) -> return (i, p) alexMove :: Pos -> Char -> Pos alexMove (Pos l _) '\n' = Pos (l+1) 1 alexMove (Pos l c) '\t' = Pos l ((c+8) `div` 8 * 8) alexMove (Pos l c) _ = Pos l (c+1) init_pos :: Pos init_pos = Pos 1 1 show_pos :: Pos -> String show_pos (Pos l c) = "line " ++ show l ++ ", column " ++ show c -- Input data AlexInput = AlexInput {position :: !Pos, input :: String} alexGetByte :: AlexInput -> Maybe (Word8,AlexInput) alexGetByte (AlexInput p (x:xs)) = Just (fromIntegral (ord x), AlexInput (alexMove p x) xs) alexGetByte (AlexInput _ []) = Nothing alexInputPrevChar :: AlexInput -> Char alexInputPrevChar _ = error "Lexer doesn't implement alexInputPrevChar" happyError :: ParserM a happyError = do p <- get_pos fail $ "Parse error at " ++ show_pos p happy-1.20.1.1/examples/igloo/README0000644000000000000000000000125707346545000015060 0ustar0000000000000000From: Ian Lynagh [igloo@earth.li] Subject: happy and line numbers Date: Thu 12/02/2004 18:48 I think it would be nice to have an example of how to have a nice position tracking monadic parser calling a lexer per token in the examples directory. I've attached a cut-down parser of mine that does it well enough for me. The only slight niggle is that parse errors are reported at the end of the token rather than the start, but that hasn't bothered me enough to look into fixing it yet. The cut down parser doesn't use start codes, but I've left the machinery in to make it easier for people to see how to use them. Naturally any suggestions for improving it would be gladly received! happy-1.20.1.1/happy.cabal0000644000000000000000000001326307346545000013356 0ustar0000000000000000name: happy version: 1.20.1.1 license: BSD2 license-file: LICENSE copyright: (c) Andy Gill, Simon Marlow author: Andy Gill and Simon Marlow maintainer: https://github.com/haskell/happy bug-reports: https://github.com/haskell/happy/issues stability: stable homepage: https://www.haskell.org/happy/ synopsis: Happy is a parser generator for Haskell category: Development cabal-version: >= 1.10 build-type: Simple Description: Happy is a parser generator for Haskell. Given a grammar specification in BNF, Happy generates Haskell code to parse the grammar. Happy works in a similar way to the @yacc@ tool for C. 
tested-with: GHC==7.10.3, GHC==8.0.2, GHC==8.2.2, GHC==8.4.4, GHC==8.6.5, GHC==8.8.4, GHC==8.10.7, GHC==9.0.2, GHC==9.2.5, GHC==9.4.4, GHC==9.6.0 data-dir: data/ data-files: HappyTemplate HappyTemplate-arrays HappyTemplate-arrays-coerce HappyTemplate-arrays-coerce-debug HappyTemplate-arrays-debug HappyTemplate-arrays-ghc HappyTemplate-arrays-ghc-debug HappyTemplate-coerce HappyTemplate-ghc GLR_Base GLR_Lib GLR_Lib-ghc GLR_Lib-ghc-debug extra-source-files: ChangeLog.md Makefile doc/Makefile doc/aclocal.m4 doc/config.mk.in doc/configure.ac doc/docbook-xml.mk doc/fptools.css doc/happy.1.in doc/happy.xml examples/glr/nlp/Main.lhs examples/glr/nlp/Makefile examples/glr/nlp/README examples/glr/nlp/English.y examples/glr/nlp/Hugs.lhs examples/glr/Makefile examples/glr/Makefile.defs examples/glr/expr-eval/Main.lhs examples/glr/expr-eval/Makefile examples/glr/expr-eval/Expr.y examples/glr/expr-eval/README examples/glr/expr-eval/Hugs.lhs examples/glr/expr-tree/Main.lhs examples/glr/expr-tree/Makefile examples/glr/expr-tree/Expr.y examples/glr/expr-tree/README examples/glr/expr-tree/Tree.lhs examples/glr/expr-tree/Hugs.lhs examples/glr/highly-ambiguous/Main.lhs examples/glr/highly-ambiguous/Makefile examples/glr/highly-ambiguous/Expr.y examples/glr/highly-ambiguous/README examples/glr/highly-ambiguous/Hugs.lhs examples/glr/hidden-leftrec/Main.lhs examples/glr/hidden-leftrec/Makefile examples/glr/hidden-leftrec/Expr.y examples/glr/hidden-leftrec/README examples/glr/hidden-leftrec/Hugs.lhs examples/glr/expr-monad/Main.lhs examples/glr/expr-monad/Makefile examples/glr/expr-monad/Expr.y examples/glr/expr-monad/README examples/glr/expr-monad/Hugs.lhs examples/glr/bio-eg/Main.lhs examples/glr/bio-eg/Makefile examples/glr/bio-eg/Bio.y examples/glr/bio-eg/README examples/glr/bio-eg/1-1200.dna examples/glr/bio-eg/1-600.dna examples/glr/common/DV_lhs examples/glr/common/DaVinciTypes.hs examples/glr/packing/Main.lhs examples/glr/packing/Makefile examples/glr/packing/Expr.y examples/glr/packing/README examples/glr/packing/Hugs.lhs examples/PgnParser.ly examples/MonadTest.ly examples/igloo/ParserM.hs examples/igloo/Makefile examples/igloo/Parser.y examples/igloo/Foo.hs examples/igloo/README examples/igloo/Lexer.x examples/README examples/Calc.ly examples/DavesExample.ly examples/ErrorTest.ly examples/ErlParser.ly examples/SimonsExample.ly examples/LexerTest.ly happy.spec src/ARRAY-NOTES tests/AttrGrammar001.y tests/AttrGrammar002.y tests/Makefile tests/Partial.ly tests/Test.ly tests/TestMulti.ly tests/TestPrecedence.ly tests/bogus-token.y tests/bug001.ly tests/bug002.y tests/error001.stderr tests/error001.stdout tests/error001.y tests/monad001.y tests/monad002.ly tests/monaderror.y tests/precedence001.ly tests/precedence002.y tests/test_rules.y tests/issue91.y tests/issue93.y tests/issue94.y tests/issue95.y tests/monaderror-explist.y tests/typeclass_monad001.y tests/typeclass_monad002.ly tests/typeclass_monad_lexer.y tests/rank2.y tests/shift01.y source-repository head type: git location: https://github.com/haskell/happy.git executable happy hs-source-dirs: src main-is: Main.lhs build-depends: base < 5, array, containers >= 0.4.2, mtl >= 2.2.1 -- mtl-2.2.1 added Control.Monad.Except default-language: Haskell98 default-extensions: CPP, MagicHash, FlexibleContexts ghc-options: -Wall other-modules: Paths_happy AbsSyn First GenUtils Grammar Info LALR Lexer ParseMonad Parser ProduceCode ProduceGLRCode NameSet Target AttrGrammar AttrGrammarParser ParamRules PrettyGrammar if impl(ghc >= 9.2) ghc-options: 
-Wno-incomplete-uni-patterns test-suite tests type: exitcode-stdio-1.0 main-is: test.hs -- This line is important as it ensures that the local `exe:happy` component declared above is built before the test-suite component is invoked, as well as making sure that `happy` is made available on $PATH and `$happy_datadir` is set accordingly before invoking `test.hs` build-tools: happy build-depends: base, process default-language: Haskell98 happy-1.20.1.1/happy.spec0000644000000000000000000000335607346545000013250 0ustar0000000000000000%define name happy %define version 1.17 %define release 1 Name: %{name} Version: %{version} Release: %{release} License: BSD-like Group: Development/Languages/Haskell URL: http://haskell.org/happy/ Source: http://haskell.org/happy/dist/%{version}/happy-%{version}.tar.gz Packager: Sven Panne BuildRoot: %{_tmppath}/%{name}-%{version}-build Prefix: %{_prefix} BuildRequires: ghc, docbook-dtd, docbook-xsl-stylesheets, libxslt, libxml2, fop, xmltex, dvips Summary: The LALR(1) Parser Generator for Haskell %description Happy is a parser generator system for Haskell, similar to the tool `yacc' for C. Like `yacc', it takes a file containing an annotated BNF specification of a grammar and produces a Haskell module containing a parser for the grammar. Happy is flexible: you can have several Happy parsers in the same program, and several entry points to a single grammar. Happy can work in conjunction with a lexical analyser supplied by the user (either hand-written or generated by another program), or it can parse a stream of characters directly (but this isn't practical in most cases). Authors: -------- Simon Marlow Andy Gill %prep %setup %build runhaskell Setup.lhs configure --prefix=%{_prefix} --docdir=%{_datadir}/doc/packages/%{name} runhaskell Setup.lhs build cd doc test -f configure || autoreconf ./configure make html %install runhaskell Setup.lhs copy --destdir=${RPM_BUILD_ROOT} %clean rm -rf ${RPM_BUILD_ROOT} %files %defattr(-,root,root) %doc ANNOUNCE %doc CHANGES %doc LICENSE %doc README %doc TODO %doc doc/happy %doc examples %{prefix}/bin/happy %{prefix}/share/happy-%{version} happy-1.20.1.1/src/0000755000000000000000000000000007346545000012033 5ustar0000000000000000happy-1.20.1.1/src/ARRAY-NOTES0000644000000000000000000000204707346545000013525 0ustar0000000000000000Numbering of tokens inside Happy: epsilon = 0 error = 1 dummy = 2 %start = 3..s non-terminals = s..n terminals = n..m %eof = m where n_nonterminals = n - 3 (including %starts) n_terminals = 1{-error-} + (m-n) + 1{-eof-} (including error and %eof) In normal and GHC-based parsers, these numbers are also used in the generated grammar itself, except that the error token is mapped to -1. In an array-based parser, things are a little different. ----------------------------------------------------------------------------- Action Table We have an action table, indexed by states in the y direction, and terminal number in the x direction. ie. action = (state * n_terminals + terminal). 
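For example (with made-up numbers, purely for illustration): a grammar with one %start, four other non-terminals and six declared terminals has n_terminals = 1 {-error-} + 6 + 1 {-eof-} = 8 columns, so the action for state 3 on the terminal whose renumbered value (described next) is 5 sits at index 3 * 8 + 5 = 29 in the flattened table.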
The terminal number is given by (for terminals only): tok_number - n_nonterminals - 3 so we have error = 0 terminals = 1..n %eof = n+1 ----------------------------------------------------------------------------- Goto Table The goto table is indexed by nonterminal number (without %starts), ie (state * (n_nonterminals-s)) + tok_number - s happy-1.20.1.1/src/AbsSyn.lhs0000644000000000000000000001373207346545000013750 0ustar0000000000000000----------------------------------------------------------------------------- Abstract syntax for grammar files. (c) 1993-2001 Andy Gill, Simon Marlow ----------------------------------------------------------------------------- Here is the abstract syntax of the language we parse. > module AbsSyn ( > AbsSyn(..), Directive(..), ErrorHandlerType(..), > getTokenType, getTokenSpec, getParserNames, getLexer, > getImportedIdentity, getMonad, getError, > getPrios, getPrioNames, getExpect, getErrorHandlerType, > getAttributes, getAttributetype, > Rule(..), Prod(..), Term(..), Prec(..) > ) where > data AbsSyn > = AbsSyn > (Maybe String) -- header > [Directive String] -- directives > [Rule] -- productions > (Maybe String) -- footer > data Rule > = Rule > String -- name of the rule > [String] -- parameters (see parametrized productions) > [Prod] -- productions > (Maybe String) -- type of the rule > data Prod > = Prod > [Term] -- terms that make up the rule > String -- code body that runs when the rule reduces > Int -- line number > Prec -- inline precedence annotation for the rule > data Term > = App > String -- name of the term > [Term] -- parameter arguments (usually this is empty) > data Prec > = PrecNone -- no user-specified precedence > | PrecShift -- %shift > | PrecId String -- %prec ID #ifdef DEBUG > deriving Show #endif %----------------------------------------------------------------------------- Parser Generator Directives. ToDo: find a consistent way to analyse all the directives together and generate some error messages. 
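As a quick illustration (a made-up fragment, not taken from these sources), a declaration such as

    %token let { TokenLet } in { TokenIn }

comes through the parser roughly as the directive

    TokenSpec [("let", "TokenLet"), ("in", "TokenIn")]

built from the constructors below.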
> data ErrorHandlerType > = ErrorHandlerTypeDefault > | ErrorHandlerTypeExpList > > data Directive a > = TokenType String -- %tokentype > | TokenSpec [(a,String)] -- %token > | TokenName String (Maybe String) Bool -- %name/%partial (True <=> %partial) > | TokenLexer String String -- %lexer > | TokenErrorHandlerType String -- %errorhandlertype > | TokenImportedIdentity -- %importedidentity > | TokenMonad String String String String -- %monad > | TokenNonassoc [String] -- %nonassoc > | TokenRight [String] -- %right > | TokenLeft [String] -- %left > | TokenExpect Int -- %expect > | TokenError String -- %error > | TokenAttributetype String -- %attributetype > | TokenAttribute String String -- %attribute #ifdef DEBUG > deriving Show #endif > getTokenType :: [Directive t] -> String > getTokenType ds > = case [ t | (TokenType t) <- ds ] of > [t] -> t > [] -> error "no token type given" > _ -> error "multiple token types" > getParserNames :: [Directive t] -> [Directive t] > getParserNames ds = [ t | t@(TokenName _ _ _) <- ds ] > getLexer :: [Directive t] -> Maybe (String, String) > getLexer ds > = case [ (a,b) | (TokenLexer a b) <- ds ] of > [t] -> Just t > [] -> Nothing > _ -> error "multiple lexer directives" > getImportedIdentity :: [Directive t] -> Bool > getImportedIdentity ds > = case [ (()) | TokenImportedIdentity <- ds ] of > [_] -> True > [] -> False > _ -> error "multiple importedidentity directives" > getMonad :: [Directive t] -> (Bool, String, String, String, String) > getMonad ds > = case [ (True,a,b,c,d) | (TokenMonad a b c d) <- ds ] of > [t] -> t > [] -> (False,"()","HappyIdentity","Prelude.>>=","Prelude.return") > _ -> error "multiple monad directives" > getTokenSpec :: [Directive t] -> [(t, String)] > getTokenSpec ds = concat [ t | (TokenSpec t) <- ds ] > getPrios :: [Directive t] -> [Directive t] > getPrios ds = [ d | d <- ds, > case d of > TokenNonassoc _ -> True > TokenLeft _ -> True > TokenRight _ -> True > _ -> False > ] > getPrioNames :: Directive t -> [String] > getPrioNames (TokenNonassoc s) = s > getPrioNames (TokenLeft s) = s > getPrioNames (TokenRight s) = s > getPrioNames _ = error "Not an associativity token" > getExpect :: [Directive t] -> Maybe Int > getExpect ds > = case [ n | (TokenExpect n) <- ds ] of > [t] -> Just t > [] -> Nothing > _ -> error "multiple expect directives" > getError :: [Directive t] -> Maybe String > getError ds > = case [ a | (TokenError a) <- ds ] of > [t] -> Just t > [] -> Nothing > _ -> error "multiple error directives" > getErrorHandlerType :: [Directive t] -> ErrorHandlerType > getErrorHandlerType ds > = case [ a | (TokenErrorHandlerType a) <- ds ] of > [t] -> case t of > "explist" -> ErrorHandlerTypeExpList > "default" -> ErrorHandlerTypeDefault > _ -> error "unsupported %errorhandlertype value" > [] -> ErrorHandlerTypeDefault > _ -> error "multiple errorhandlertype directives" > getAttributes :: [Directive t] -> [(String, String)] > getAttributes ds > = [ (ident,typ) | (TokenAttribute ident typ) <- ds ] > getAttributetype :: [Directive t] -> Maybe String > getAttributetype ds > = case [ t | (TokenAttributetype t) <- ds ] of > [t] -> Just t > [] -> Nothing > _ -> error "multiple attributetype directives" happy-1.20.1.1/src/AttrGrammar.lhs0000644000000000000000000000737307346545000014776 0ustar0000000000000000> module AttrGrammar > ( AgToken (..) > , AgRule (..) 
> , agLexAll > , agLexer > , subRefVal > , selfRefVal > , rightRefVal > ) where > import Data.Char > import ParseMonad > data AgToken > = AgTok_LBrace > | AgTok_RBrace > | AgTok_Where > | AgTok_Semicolon > | AgTok_Eq > | AgTok_SelfRef String > | AgTok_SubRef (Int, String) > | AgTok_RightmostRef String > | AgTok_Unknown String > | AgTok_EOF > deriving (Show,Eq,Ord) > subRefVal :: AgToken -> (Int, String) > subRefVal (AgTok_SubRef x) = x > subRefVal _ = error "subRefVal: Bad value" > selfRefVal :: AgToken -> String > selfRefVal (AgTok_SelfRef x) = x > selfRefVal _ = error "selfRefVal: Bad value" > rightRefVal :: AgToken -> String > rightRefVal (AgTok_RightmostRef x) = x > rightRefVal _ = error "rightRefVal: Bad value" > data AgRule > = SelfAssign String [AgToken] > | SubAssign (Int,String) [AgToken] > | RightmostAssign String [AgToken] > | Conditional [AgToken] > deriving (Show,Eq,Ord) ----------------------------------------------------------------- -- For the most part, the body of the attribute grammar rules -- consists of uninterpreted Haskell expressions. We only need to know about -- a) braces and semicolons to break the rules apart -- b) the equals sign to break the rules into LValues and the RHS -- c) attribute references, which are $$, $x (positive integer x) -- or $> (for the rightmost symbol) followed by an optional -- attribute specifier, which is a dot followed by a -- Haskell variable identifier -- Examples: -- $$ -- $1 -- $> -- $$.pos -- $3.value -- $2.someAttribute0' -- -- Everything else can be treated as uninterpreted strings. Our munging -- will wreck column alignment so attribute grammar specifications must -- not rely on layout. > type Pfunc a = String -> Int -> ParseResult a > agLexAll :: P [AgToken] > agLexAll = mkP $ aux [] > where aux toks [] _ = Right (reverse toks) > aux toks s l = agLexer' (\t -> aux (t:toks)) s l > agLexer :: (AgToken -> P a) -> P a > agLexer m = mkP $ agLexer' (\x -> runP (m x)) > agLexer' :: (AgToken -> Pfunc a) -> Pfunc a > agLexer' cont [] = cont AgTok_EOF [] > agLexer' cont ('{':rest) = cont AgTok_LBrace rest > agLexer' cont ('}':rest) = cont AgTok_RBrace rest > agLexer' cont (';':rest) = cont AgTok_Semicolon rest > agLexer' cont ('=':rest) = cont AgTok_Eq rest > agLexer' cont ('w':'h':'e':'r':'e':rest) = cont AgTok_Where rest > agLexer' cont ('$':'$':rest) = agLexAttribute cont (\a -> AgTok_SelfRef a) rest > agLexer' cont ('$':'>':rest) = agLexAttribute cont (\a -> AgTok_RightmostRef a) rest > agLexer' cont s@('$':rest) = > let (n,rest') = span isDigit rest > in if null n > then agLexUnknown cont s > else agLexAttribute cont (\a -> AgTok_SubRef (read n,a)) rest' > agLexer' cont s@(c:rest) > | isSpace c = agLexer' cont (dropWhile isSpace rest) > | otherwise = agLexUnknown cont s > agLexUnknown :: (AgToken -> Pfunc a) -> Pfunc a > agLexUnknown cont s = let (u,rest) = aux [] s in cont (AgTok_Unknown u) rest > where aux t [] = (reverse t,[]) > aux t ('$':c:cs) > | c /= '$' && not (isDigit c) = aux ('$':t) (c:cs) > | otherwise = (reverse t,'$':c:cs) > aux t (c:cs) > | isSpace c || c `elem` "{};=" = (reverse t,c:cs) > | otherwise = aux (c:t) cs > agLexAttribute :: (AgToken -> Pfunc a) -> (String -> AgToken) -> Pfunc a > agLexAttribute cont k ('.':x:xs) > | isLower x = let (ident,rest) = span (\c -> isAlphaNum c || c == '\'') xs in cont (k (x:ident)) rest > | otherwise = \_ -> Left "bad attribute identifier" > agLexAttribute cont k rest = cont (k "") rest happy-1.20.1.1/src/AttrGrammarParser.hs0000644000000000000000000006521307346545000015774
0ustar0000000000000000{-# OPTIONS_GHC -w #-} {-# OPTIONS -XMagicHash -XBangPatterns -XTypeSynonymInstances -XFlexibleInstances -cpp #-} #if __GLASGOW_HASKELL__ >= 710 {-# OPTIONS_GHC -XPartialTypeSignatures #-} #endif {-# OPTIONS_GHC -w #-} module AttrGrammarParser (agParser) where import ParseMonad import AttrGrammar import qualified Data.Array as Happy_Data_Array import qualified Data.Bits as Bits import qualified GHC.Exts as Happy_GHC_Exts import Control.Applicative(Applicative(..)) import Control.Monad (ap) -- parser produced by Happy Version 1.20.0 newtype HappyAbsSyn = HappyAbsSyn HappyAny #if __GLASGOW_HASKELL__ >= 607 type HappyAny = Happy_GHC_Exts.Any #else type HappyAny = forall a . a #endif newtype HappyWrap4 = HappyWrap4 ([AgRule]) happyIn4 :: ([AgRule]) -> (HappyAbsSyn ) happyIn4 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap4 x) {-# INLINE happyIn4 #-} happyOut4 :: (HappyAbsSyn ) -> HappyWrap4 happyOut4 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut4 #-} newtype HappyWrap5 = HappyWrap5 ([AgRule]) happyIn5 :: ([AgRule]) -> (HappyAbsSyn ) happyIn5 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap5 x) {-# INLINE happyIn5 #-} happyOut5 :: (HappyAbsSyn ) -> HappyWrap5 happyOut5 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut5 #-} newtype HappyWrap6 = HappyWrap6 (AgRule) happyIn6 :: (AgRule) -> (HappyAbsSyn ) happyIn6 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap6 x) {-# INLINE happyIn6 #-} happyOut6 :: (HappyAbsSyn ) -> HappyWrap6 happyOut6 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut6 #-} newtype HappyWrap7 = HappyWrap7 ([AgToken]) happyIn7 :: ([AgToken]) -> (HappyAbsSyn ) happyIn7 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap7 x) {-# INLINE happyIn7 #-} happyOut7 :: (HappyAbsSyn ) -> HappyWrap7 happyOut7 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut7 #-} newtype HappyWrap8 = HappyWrap8 ([AgToken]) happyIn8 :: ([AgToken]) -> (HappyAbsSyn ) happyIn8 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap8 x) {-# INLINE happyIn8 #-} happyOut8 :: (HappyAbsSyn ) -> HappyWrap8 happyOut8 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut8 #-} happyInTok :: (AgToken) -> (HappyAbsSyn ) happyInTok x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyInTok #-} happyOutTok :: (HappyAbsSyn ) -> (AgToken) happyOutTok x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOutTok #-} happyExpList :: HappyAddr happyExpList = HappyA# "\x00\xf0\x00\xc0\x03\x00\x00\x00\x01\x00\xe9\x01\x20\x00\x80\x00\x00\x02\x00\x00\x00\xa4\x07\x90\x1e\x40\x7a\x00\x00\x00\xb4\x07\x90\x1e\x40\x7a\x00\xe9\x01\xa4\x07\x90\x1e\x00\x3c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x40\x7b\x00\xed\x01\xb4\x07\xd0\x1e\x40\x7b\x00\xe9\x01\xb4\x07\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\xe9\x01\x00\x00\xd0\x1e\x00\x00\x00\x00"# {-# NOINLINE happyExpListPerState #-} happyExpListPerState st = token_strs_expected where token_strs = ["error","%dummy","%start_agParser","agParser","rules","rule","code","code0","\"{\"","\"}\"","\";\"","\"=\"","where","selfRef","subRef","rightRef","unknown","%eof"] bit_start = st Prelude.* 18 bit_end = (st Prelude.+ 1) Prelude.* 18 read_bit = readArrayBit happyExpList bits = Prelude.map read_bit [bit_start..bit_end Prelude.- 1] bits_indexed = Prelude.zip bits [0..17] token_strs_expected = Prelude.concatMap f bits_indexed f (Prelude.False, _) = [] f (Prelude.True, nr) = [token_strs Prelude.!! 
nr] happyActOffsets :: HappyAddr happyActOffsets = HappyA# "\x0f\x00\x0f\x00\x00\x00\xfe\xff\x0a\x00\xff\xff\x02\x00\x19\x00\x05\x00\x0a\x00\x0a\x00\x0a\x00\x00\x00\x01\x00\x0a\x00\x0a\x00\x0a\x00\x0a\x00\x0a\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x1c\x00\x01\x00\x01\x00\x01\x00\x01\x00\x01\x00\x0a\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x1d\x00\x0a\x00\x00\x00\x01\x00\x00\x00\x00\x00"# happyGotoOffsets :: HappyAddr happyGotoOffsets = HappyA# "\x18\x00\x0b\x00\x00\x00\x00\x00\x1e\x00\x00\x00\x00\x00\x00\x00\x00\x00\x1f\x00\x20\x00\x21\x00\x00\x00\x22\x00\x24\x00\x25\x00\x26\x00\x27\x00\x28\x00\x1a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x29\x00\x2a\x00\x2b\x00\x2c\x00\x2d\x00\x2f\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x32\x00\x00\x00\x33\x00\x00\x00\x00\x00"# happyAdjustOffset :: Happy_GHC_Exts.Int# -> Happy_GHC_Exts.Int# happyAdjustOffset off = off happyDefActions :: HappyAddr happyDefActions = HappyA# "\xfb\xff\x00\x00\xfe\xff\xfc\xff\xf0\xff\x00\x00\x00\x00\x00\x00\x00\x00\xf0\xff\xf0\xff\xf0\xff\xf7\xff\xe8\xff\xf0\xff\xf0\xff\xf0\xff\xf0\xff\xf0\xff\xfb\xff\xfd\xff\xf1\xff\xf2\xff\xf3\xff\xf4\xff\xf5\xff\x00\x00\xe8\xff\xe8\xff\xe8\xff\xe8\xff\xe8\xff\xf0\xff\xe8\xff\xfa\xff\xf9\xff\xf8\xff\xe9\xff\xea\xff\xeb\xff\xec\xff\xee\xff\xed\xff\x00\x00\xf0\xff\xf6\xff\xe8\xff\xef\xff"# happyCheck :: HappyAddr happyCheck = HappyA# "\xff\xff\x03\x00\x01\x00\x04\x00\x03\x00\x04\x00\x04\x00\x06\x00\x07\x00\x08\x00\x09\x00\x01\x00\x01\x00\x02\x00\x04\x00\x0a\x00\x06\x00\x07\x00\x08\x00\x09\x00\x05\x00\x06\x00\x07\x00\x08\x00\x00\x00\x01\x00\x02\x00\x01\x00\x02\x00\x04\x00\x02\x00\x02\x00\xff\xff\x03\x00\x03\x00\x03\x00\x03\x00\xff\xff\x04\x00\x03\x00\x03\x00\x03\x00\x03\x00\x03\x00\xff\xff\x04\x00\x04\x00\x04\x00\x04\x00\x04\x00\x03\x00\xff\xff\x04\x00\x03\x00\xff\xff\x04\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff"# happyTable :: HappyAddr happyTable = HappyA# "\x00\x00\x14\x00\x1c\x00\x0c\x00\x1d\x00\x1e\x00\x0b\x00\x1f\x00\x20\x00\x21\x00\x22\x00\x0e\x00\x02\x00\x03\x00\x0f\x00\xff\xff\x10\x00\x11\x00\x12\x00\x13\x00\x05\x00\x06\x00\x07\x00\x08\x00\x08\x00\x02\x00\x03\x00\x14\x00\x03\x00\x0a\x00\x2d\x00\x2f\x00\x00\x00\x0c\x00\x24\x00\x23\x00\x22\x00\x00\x00\x1a\x00\x19\x00\x18\x00\x17\x00\x16\x00\x15\x00\x00\x00\x2b\x00\x2a\x00\x29\x00\x28\x00\x27\x00\x26\x00\x00\x00\x25\x00\x2d\x00\x00\x00\x2f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"# happyReduceArr = Happy_Data_Array.array (1, 23) [ (1 , happyReduce_1), (2 , happyReduce_2), (3 , happyReduce_3), (4 , happyReduce_4), (5 , happyReduce_5), (6 , happyReduce_6), (7 , happyReduce_7), (8 , happyReduce_8), (9 , happyReduce_9), (10 , happyReduce_10), (11 , happyReduce_11), (12 , happyReduce_12), (13 , happyReduce_13), (14 , happyReduce_14), (15 , happyReduce_15), (16 , happyReduce_16), (17 , happyReduce_17), (18 , happyReduce_18), (19 , happyReduce_19), (20 , happyReduce_20), (21 , happyReduce_21), (22 , happyReduce_22), (23 , happyReduce_23) ] happy_n_terms = 11 :: Prelude.Int happy_n_nonterms = 5 :: Prelude.Int happyReduce_1 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_1 = happySpecReduce_1 0# happyReduction_1 happyReduction_1 happy_x_1 = case happyOut5 happy_x_1 of { (HappyWrap5 happy_var_1) -> happyIn4 (happy_var_1 )} happyReduce_2 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> 
Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_2 = happySpecReduce_3 1# happyReduction_2 happyReduction_2 happy_x_3 happy_x_2 happy_x_1 = case happyOut6 happy_x_1 of { (HappyWrap6 happy_var_1) -> case happyOut5 happy_x_3 of { (HappyWrap5 happy_var_3) -> happyIn5 (happy_var_1 : happy_var_3 )}} happyReduce_3 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_3 = happySpecReduce_1 1# happyReduction_3 happyReduction_3 happy_x_1 = case happyOut6 happy_x_1 of { (HappyWrap6 happy_var_1) -> happyIn5 (happy_var_1 : [] )} happyReduce_4 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_4 = happySpecReduce_0 1# happyReduction_4 happyReduction_4 = happyIn5 ([] ) happyReduce_5 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_5 = happySpecReduce_3 2# happyReduction_5 happyReduction_5 happy_x_3 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut7 happy_x_3 of { (HappyWrap7 happy_var_3) -> happyIn6 (SelfAssign (selfRefVal happy_var_1) happy_var_3 )}} happyReduce_6 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_6 = happySpecReduce_3 2# happyReduction_6 happyReduction_6 happy_x_3 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut7 happy_x_3 of { (HappyWrap7 happy_var_3) -> happyIn6 (SubAssign (subRefVal happy_var_1) happy_var_3 )}} happyReduce_7 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_7 = happySpecReduce_3 2# happyReduction_7 happyReduction_7 happy_x_3 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut7 happy_x_3 of { (HappyWrap7 happy_var_3) -> happyIn6 (RightmostAssign (rightRefVal happy_var_1) happy_var_3 )}} happyReduce_8 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_8 = happySpecReduce_2 2# happyReduction_8 happyReduction_8 happy_x_2 happy_x_1 = case happyOut7 happy_x_2 of { (HappyWrap7 happy_var_2) -> happyIn6 (Conditional happy_var_2 )} happyReduce_9 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_9 = happyReduce 4# 3# happyReduction_9 happyReduction_9 (happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut8 happy_x_2 of { (HappyWrap8 happy_var_2) -> case happyOutTok happy_x_3 of { happy_var_3 -> case happyOut7 happy_x_4 of { (HappyWrap7 happy_var_4) -> happyIn7 ([happy_var_1] ++ happy_var_2 ++ [happy_var_3] ++ happy_var_4 ) `HappyStk` happyRest}}}} happyReduce_10 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_10 = happySpecReduce_2 3# happyReduction_10 happyReduction_10 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut7 happy_x_2 of { (HappyWrap7 happy_var_2) -> happyIn7 (happy_var_1 : happy_var_2 )}} happyReduce_11 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P 
(HappyAbsSyn ) happyReduce_11 = happySpecReduce_2 3# happyReduction_11 happyReduction_11 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut7 happy_x_2 of { (HappyWrap7 happy_var_2) -> happyIn7 (happy_var_1 : happy_var_2 )}} happyReduce_12 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_12 = happySpecReduce_2 3# happyReduction_12 happyReduction_12 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut7 happy_x_2 of { (HappyWrap7 happy_var_2) -> happyIn7 (happy_var_1 : happy_var_2 )}} happyReduce_13 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_13 = happySpecReduce_2 3# happyReduction_13 happyReduction_13 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut7 happy_x_2 of { (HappyWrap7 happy_var_2) -> happyIn7 (happy_var_1 : happy_var_2 )}} happyReduce_14 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_14 = happySpecReduce_2 3# happyReduction_14 happyReduction_14 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut7 happy_x_2 of { (HappyWrap7 happy_var_2) -> happyIn7 (happy_var_1 : happy_var_2 )}} happyReduce_15 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_15 = happySpecReduce_0 3# happyReduction_15 happyReduction_15 = happyIn7 ([] ) happyReduce_16 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_16 = happyReduce 4# 4# happyReduction_16 happyReduction_16 (happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut8 happy_x_2 of { (HappyWrap8 happy_var_2) -> case happyOutTok happy_x_3 of { happy_var_3 -> case happyOut8 happy_x_4 of { (HappyWrap8 happy_var_4) -> happyIn8 ([happy_var_1] ++ happy_var_2 ++ [happy_var_3] ++ happy_var_4 ) `HappyStk` happyRest}}}} happyReduce_17 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_17 = happySpecReduce_2 4# happyReduction_17 happyReduction_17 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut8 happy_x_2 of { (HappyWrap8 happy_var_2) -> happyIn8 (happy_var_1 : happy_var_2 )}} happyReduce_18 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_18 = happySpecReduce_2 4# happyReduction_18 happyReduction_18 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut8 happy_x_2 of { (HappyWrap8 happy_var_2) -> happyIn8 (happy_var_1 : happy_var_2 )}} happyReduce_19 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_19 = happySpecReduce_2 4# happyReduction_19 happyReduction_19 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut8 happy_x_2 of { (HappyWrap8 happy_var_2) -> happyIn8 (happy_var_1 : happy_var_2 )}} happyReduce_20 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) 
happyReduce_20 = happySpecReduce_2 4# happyReduction_20 happyReduction_20 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut8 happy_x_2 of { (HappyWrap8 happy_var_2) -> happyIn8 (happy_var_1 : happy_var_2 )}} happyReduce_21 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_21 = happySpecReduce_2 4# happyReduction_21 happyReduction_21 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut7 happy_x_2 of { (HappyWrap7 happy_var_2) -> happyIn8 (happy_var_1 : happy_var_2 )}} happyReduce_22 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_22 = happySpecReduce_2 4# happyReduction_22 happyReduction_22 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut8 happy_x_2 of { (HappyWrap8 happy_var_2) -> happyIn8 (happy_var_1 : happy_var_2 )}} happyReduce_23 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_23 = happySpecReduce_0 4# happyReduction_23 happyReduction_23 = happyIn8 ([] ) happyNewToken action sts stk = agLexer(\tk -> let cont i = happyDoAction i tk action sts stk in case tk of { AgTok_EOF -> happyDoAction 10# tk action sts stk; AgTok_LBrace -> cont 1#; AgTok_RBrace -> cont 2#; AgTok_Semicolon -> cont 3#; AgTok_Eq -> cont 4#; AgTok_Where -> cont 5#; AgTok_SelfRef _ -> cont 6#; AgTok_SubRef _ -> cont 7#; AgTok_RightmostRef _ -> cont 8#; AgTok_Unknown _ -> cont 9#; _ -> happyError' (tk, []) }) happyError_ explist 10# tk = happyError' (tk, explist) happyError_ explist _ tk = happyError' (tk, explist) happyThen :: () => P a -> (a -> P b) -> P b happyThen = (Prelude.>>=) happyReturn :: () => a -> P a happyReturn = (Prelude.return) happyParse :: () => Happy_GHC_Exts.Int# -> P (HappyAbsSyn ) happyNewToken :: () => Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyDoAction :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduceArr :: () => Happy_Data_Array.Array Prelude.Int (Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn )) happyThen1 :: () => P a -> (a -> P b) -> P b happyThen1 = happyThen happyReturn1 :: () => a -> P a happyReturn1 = happyReturn happyError' :: () => ((AgToken), [Prelude.String]) -> P a happyError' tk = (\(tokens, explist) -> happyError) tk agParser = happySomeParser where happySomeParser = happyThen (happyParse 0#) (\x -> happyReturn (let {(HappyWrap4 x') = happyOut4 x} in x')) happySeq = happyDontSeq happyError :: P a happyError = failP ("Parse error\n") {-# LINE 1 "templates/GenericTemplate.hs" #-} -- $Id: GenericTemplate.hs,v 1.26 2005/01/14 14:47:22 simonmar Exp $ -- Do not remove this comment. Required to fix CPP parsing when using GCC and a clang-compiled alex. 
#if __GLASGOW_HASKELL__ > 706 #define LT(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.<# m)) :: Prelude.Bool) #define GTE(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.>=# m)) :: Prelude.Bool) #define EQ(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.==# m)) :: Prelude.Bool) #else #define LT(n,m) (n Happy_GHC_Exts.<# m) #define GTE(n,m) (n Happy_GHC_Exts.>=# m) #define EQ(n,m) (n Happy_GHC_Exts.==# m) #endif data Happy_IntList = HappyCons Happy_GHC_Exts.Int# Happy_IntList infixr 9 `HappyStk` data HappyStk a = HappyStk a (HappyStk a) ----------------------------------------------------------------------------- -- starting the parse happyParse start_state = happyNewToken start_state notHappyAtAll notHappyAtAll ----------------------------------------------------------------------------- -- Accepting the parse -- If the current token is ERROR_TOK, it means we've just accepted a partial -- parse (a %partial parser). We must ignore the saved token on the top of -- the stack in this case. happyAccept 0# tk st sts (_ `HappyStk` ans `HappyStk` _) = happyReturn1 ans happyAccept j tk st sts (HappyStk ans _) = (happyTcHack j (happyTcHack st)) (happyReturn1 ans) ----------------------------------------------------------------------------- -- Arrays only: do the next action happyDoAction i tk st = {- nothing -} case action of 0# -> {- nothing -} happyFail (happyExpListPerState ((Happy_GHC_Exts.I# (st)) :: Prelude.Int)) i tk st -1# -> {- nothing -} happyAccept i tk st n | LT(n,(0# :: Happy_GHC_Exts.Int#)) -> {- nothing -} (happyReduceArr Happy_Data_Array.! rule) i tk st where rule = (Happy_GHC_Exts.I# ((Happy_GHC_Exts.negateInt# ((n Happy_GHC_Exts.+# (1# :: Happy_GHC_Exts.Int#)))))) n -> {- nothing -} happyShift new_state i tk st where new_state = (n Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) where off = happyAdjustOffset (indexShortOffAddr happyActOffsets st) off_i = (off Happy_GHC_Exts.+# i) check = if GTE(off_i,(0# :: Happy_GHC_Exts.Int#)) then EQ(indexShortOffAddr happyCheck off_i, i) else Prelude.False action | check = indexShortOffAddr happyTable off_i | Prelude.otherwise = indexShortOffAddr happyDefActions st indexShortOffAddr (HappyA# arr) off = Happy_GHC_Exts.narrow16Int# i where i = Happy_GHC_Exts.word2Int# (Happy_GHC_Exts.or# (Happy_GHC_Exts.uncheckedShiftL# high 8#) low) high = Happy_GHC_Exts.int2Word# (Happy_GHC_Exts.ord# (Happy_GHC_Exts.indexCharOffAddr# arr (off' Happy_GHC_Exts.+# 1#))) low = Happy_GHC_Exts.int2Word# (Happy_GHC_Exts.ord# (Happy_GHC_Exts.indexCharOffAddr# arr off')) off' = off Happy_GHC_Exts.*# 2# {-# INLINE happyLt #-} happyLt x y = LT(x,y) readArrayBit arr bit = Bits.testBit (Happy_GHC_Exts.I# (indexShortOffAddr arr ((unbox_int bit) `Happy_GHC_Exts.iShiftRA#` 4#))) (bit `Prelude.mod` 16) where unbox_int (Happy_GHC_Exts.I# x) = x data HappyAddr = HappyA# Happy_GHC_Exts.Addr# ----------------------------------------------------------------------------- -- HappyState data type (not arrays) ----------------------------------------------------------------------------- -- Shifting a token happyShift new_state 0# tk st sts stk@(x `HappyStk` _) = let i = (case Happy_GHC_Exts.unsafeCoerce# x of { (Happy_GHC_Exts.I# (i)) -> i }) in -- trace "shifting the error token" $ happyDoAction i tk new_state (HappyCons (st) (sts)) (stk) happyShift new_state i tk st sts stk = happyNewToken new_state (HappyCons (st) (sts)) ((happyInTok (tk))`HappyStk`stk) -- happyReduce is specialised for the common cases. 
happySpecReduce_0 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_0 nt fn j tk st@((action)) sts stk = happyGoto nt j tk st (HappyCons (st) (sts)) (fn `HappyStk` stk) happySpecReduce_1 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_1 nt fn j tk _ sts@((HappyCons (st@(action)) (_))) (v1`HappyStk`stk') = let r = fn v1 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happySpecReduce_2 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_2 nt fn j tk _ (HappyCons (_) (sts@((HappyCons (st@(action)) (_))))) (v1`HappyStk`v2`HappyStk`stk') = let r = fn v1 v2 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happySpecReduce_3 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_3 nt fn j tk _ (HappyCons (_) ((HappyCons (_) (sts@((HappyCons (st@(action)) (_))))))) (v1`HappyStk`v2`HappyStk`v3`HappyStk`stk') = let r = fn v1 v2 v3 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happyReduce k i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happyReduce k nt fn j tk st sts stk = case happyDrop (k Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) sts of sts1@((HappyCons (st1@(action)) (_))) -> let r = fn stk in -- it doesn't hurt to always seq here... happyDoSeq r (happyGoto nt j tk st1 sts1 r) happyMonadReduce k nt fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happyMonadReduce k nt fn j tk st sts stk = case happyDrop k (HappyCons (st) (sts)) of sts1@((HappyCons (st1@(action)) (_))) -> let drop_stk = happyDropStk k stk in happyThen1 (fn stk tk) (\r -> happyGoto nt j tk st1 sts1 (r `HappyStk` drop_stk)) happyMonad2Reduce k nt fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happyMonad2Reduce k nt fn j tk st sts stk = case happyDrop k (HappyCons (st) (sts)) of sts1@((HappyCons (st1@(action)) (_))) -> let drop_stk = happyDropStk k stk off = happyAdjustOffset (indexShortOffAddr happyGotoOffsets st1) off_i = (off Happy_GHC_Exts.+# nt) new_state = indexShortOffAddr happyTable off_i in happyThen1 (fn stk tk) (\r -> happyNewToken new_state sts1 (r `HappyStk` drop_stk)) happyDrop 0# l = l happyDrop n (HappyCons (_) (t)) = happyDrop (n Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) t happyDropStk 0# l = l happyDropStk n (x `HappyStk` xs) = happyDropStk (n Happy_GHC_Exts.-# (1#::Happy_GHC_Exts.Int#)) xs ----------------------------------------------------------------------------- -- Moving to a new state after a reduction happyGoto nt j tk st = {- nothing -} happyDoAction j tk new_state where off = happyAdjustOffset (indexShortOffAddr happyGotoOffsets st) off_i = (off Happy_GHC_Exts.+# nt) new_state = indexShortOffAddr happyTable off_i ----------------------------------------------------------------------------- -- Error recovery (ERROR_TOK is the error token) -- parse error if we are in recovery and we fail again happyFail explist 0# tk old_st _ stk@(x `HappyStk` _) = let i = (case Happy_GHC_Exts.unsafeCoerce# x of { (Happy_GHC_Exts.I# (i)) -> i }) in -- trace "failing" $ happyError_ explist i tk {- We don't need state discarding for our restricted implementation of "error". In fact, it can cause some bogus parses, so I've disabled it for now --SDM -- discard a state happyFail ERROR_TOK tk old_st CONS(HAPPYSTATE(action),sts) (saved_tok `HappyStk` _ `HappyStk` stk) = -- trace ("discarding state, depth " ++ show (length stk)) $ DO_ACTION(action,ERROR_TOK,tk,sts,(saved_tok`HappyStk`stk)) -} -- Enter error recovery: generate an error token, -- save the old token and carry on. 
happyFail explist i tk (action) sts stk = -- trace "entering error recovery" $ happyDoAction 0# tk action sts ((Happy_GHC_Exts.unsafeCoerce# (Happy_GHC_Exts.I# (i))) `HappyStk` stk) -- Internal happy errors: notHappyAtAll :: a notHappyAtAll = Prelude.error "Internal Happy error\n" ----------------------------------------------------------------------------- -- Hack to get the typechecker to accept our action functions happyTcHack :: Happy_GHC_Exts.Int# -> a -> a happyTcHack x y = y {-# INLINE happyTcHack #-} ----------------------------------------------------------------------------- -- Seq-ing. If the --strict flag is given, then Happy emits -- happySeq = happyDoSeq -- otherwise it emits -- happySeq = happyDontSeq happyDoSeq, happyDontSeq :: a -> b -> b happyDoSeq a b = a `Prelude.seq` b happyDontSeq a b = b ----------------------------------------------------------------------------- -- Don't inline any functions from the template. GHC has a nasty habit -- of deciding to inline happyGoto everywhere, which increases the size of -- the generated parser quite a bit. {-# NOINLINE happyDoAction #-} {-# NOINLINE happyTable #-} {-# NOINLINE happyCheck #-} {-# NOINLINE happyActOffsets #-} {-# NOINLINE happyGotoOffsets #-} {-# NOINLINE happyDefActions #-} {-# NOINLINE happyShift #-} {-# NOINLINE happySpecReduce_0 #-} {-# NOINLINE happySpecReduce_1 #-} {-# NOINLINE happySpecReduce_2 #-} {-# NOINLINE happySpecReduce_3 #-} {-# NOINLINE happyReduce #-} {-# NOINLINE happyMonadReduce #-} {-# NOINLINE happyGoto #-} {-# NOINLINE happyFail #-} -- end of Happy Template. happy-1.20.1.1/src/First.lhs0000644000000000000000000000333007346545000013631 0ustar0000000000000000----------------------------------------------------------------------------- Implementation of FIRST (c) 1993-2001 Andy Gill, Simon Marlow ----------------------------------------------------------------------------- > module First ( mkFirst ) where > import GenUtils > import NameSet ( NameSet ) > import qualified NameSet as Set > import Grammar > import Data.IntSet (IntSet) \subsection{Utilities} > joinSymSets :: (a -> NameSet) -> [a] -> NameSet > joinSymSets f = foldr go (Set.singleton epsilonTok) . map f > where > go h b > | Set.member epsilonTok h = Set.delete epsilonTok h `Set.union` b > | otherwise = h \subsection{Implementation of FIRST} > mkFirst :: Grammar -> [Name] -> NameSet > mkFirst (Grammar { first_term = fst_term > , lookupProdNo = prodNo > , lookupProdsOfName = prodsOfName > , non_terminals = nts > }) > = joinSymSets (\ h -> maybe (Set.singleton h) id (lookup h env) ) > where > env = mkClosure (==) (getNext fst_term prodNo prodsOfName) > [ (name,Set.empty) | name <- nts ] > getNext :: Name -> (a -> Production) -> (Name -> [a]) > -> [(Name, IntSet)] -> [(Name, NameSet)] > getNext fst_term prodNo prodsOfName env = > [ (nm, next nm) | (nm,_) <- env ] > where > fn t | t == errorTok || t >= fst_term = Set.singleton t > fn x = maybe (error "attempted FIRST(e) :-(") id (lookup x env) > next :: Name -> NameSet > next t | t >= fst_term = Set.singleton t > next n = Set.unions > [ joinSymSets fn lhs > | rl <- prodsOfName n > , let Production _ lhs _ _ = prodNo rl ] happy-1.20.1.1/src/GenUtils.lhs0000644000000000000000000000600207346545000014273 0ustar0000000000000000----------------------------------------------------------------------------- Some General Utilities, including sorts, etc. This is really just an extended prelude. All the code below is understood to be in the public domain.
----------------------------------------------------------------------------- > module GenUtils ( > mkClosure, > combinePairs, > mapDollarDollar, > str, char, nl, brack, brack', > interleave, interleave', > strspace, maybestr > ) where > import Data.Char (isAlphaNum) > import Data.Ord (comparing) > import Data.List %------------------------------------------------------------------------------ @mkClosure@ makes a closure, when given a comparison and iteration loop. Be careful, because if the functional always makes the object different, this will never terminate. > mkClosure :: (a -> a -> Bool) -> (a -> a) -> a -> a > mkClosure eq f = match . iterate f > where > match (a:b:_) | a `eq` b = a > match (_:c) = match c > match [] = error "Can't happen: match []" Gofer-like stuff: > combinePairs :: (Ord a) => [(a,b)] -> [(a,[b])] > combinePairs xs = > combine [ (a,[b]) | (a,b) <- sortBy (comparing fst) xs] > where > combine [] = [] > combine ((a,b):(c,d):r) | a == c = combine ((a,b++d) : r) > combine (a:r) = a : combine r > Replace $$ with an arbitrary string, being careful to avoid ".." and '.'. > mapDollarDollar :: String -> Maybe (String -> String) > mapDollarDollar code0 = go code0 "" > where go code acc = > case code of > [] -> Nothing > > '"' :r -> case reads code :: [(String,String)] of > [] -> go r ('"':acc) > (s,r'):_ -> go r' (reverse (show s) ++ acc) > a:'\'' :r | isAlphaNum a -> go r ('\'':a:acc) > '\'' :r -> case reads code :: [(Char,String)] of > [] -> go r ('\'':acc) > (c,r'):_ -> go r' (reverse (show c) ++ acc) > '\\':'$':r -> go r ('$':acc) > '$':'$':r -> Just (\repl -> reverse acc ++ repl ++ r) > c:r -> go r (c:acc) %------------------------------------------------------------------------------- Fast string-building functions. > str :: String -> String -> String > str = showString > char :: Char -> String -> String > char c = (c :) > interleave :: String -> [String -> String] -> String -> String > interleave s = foldr (\a b -> a . str s . b) id > interleave' :: String -> [String -> String] -> String -> String > interleave' s = foldr1 (\a b -> a . str s . b) > strspace :: String -> String > strspace = char ' ' > nl :: String -> String > nl = char '\n' > maybestr :: Maybe String -> String -> String > maybestr (Just s) = str s > maybestr _ = id > brack :: String -> String -> String > brack s = str ('(' : s) . char ')' > brack' :: (String -> String) -> String -> String > brack' s = char '(' . s . char ')' happy-1.20.1.1/src/Grammar.lhs0000644000000000000000000005621207346545000014137 0ustar0000000000000000----------------------------------------------------------------------------- The Grammar data type.
(c) 1993-2001 Andy Gill, Simon Marlow ----------------------------------------------------------------------------- Here is our mid-section datatype > module Grammar ( > Name, > > Production(..), Grammar(..), mangler, ErrorHandlerType(..), > > LRAction(..), ActionTable, Goto(..), GotoTable, Priority(..), > Assoc(..), > > errorName, errorTok, startName, firstStartTok, dummyTok, > eofName, epsilonTok > ) where > import GenUtils > import AbsSyn > import ParseMonad > import AttrGrammar > import AttrGrammarParser > import ParamRules > import Data.Array > import Data.Char > import Data.List > import Data.Maybe (fromMaybe) > import Control.Monad > import Control.Monad.Writer (Writer, mapWriter, runWriter, tell) > type Name = Int > data Production > = Production Name [Name] (String,[Int]) Priority #ifdef DEBUG > deriving Show #endif > data Grammar > = Grammar { > productions :: [Production], > lookupProdNo :: Int -> Production, > lookupProdsOfName :: Name -> [Int], > token_specs :: [(Name,String)], > terminals :: [Name], > non_terminals :: [Name], > starts :: [(String,Name,Name,Bool)], > types :: Array Int (Maybe String), > token_names :: Array Int String, > first_nonterm :: Name, > first_term :: Name, > eof_term :: Name, > priorities :: [(Name,Priority)], > token_type :: String, > imported_identity :: Bool, > monad :: (Bool,String,String,String,String), > expect :: Maybe Int, > attributes :: [(String,String)], > attributetype :: String, > lexer :: Maybe (String,String), > error_handler :: Maybe String, > error_sig :: ErrorHandlerType > } #ifdef DEBUG > instance Show Grammar where > showsPrec _ (Grammar > { productions = p > , token_specs = t > , terminals = ts > , non_terminals = nts > , starts = sts > , types = tys > , token_names = e > , first_nonterm = fnt > , first_term = ft > , eof_term = eof > }) > = showString "productions = " . shows p > . showString "\ntoken_specs = " . shows t > . showString "\nterminals = " . shows ts > . showString "\nnonterminals = " . shows nts > . showString "\nstarts = " . shows sts > . showString "\ntypes = " . shows tys > . showString "\ntoken_names = " . shows e > . showString "\nfirst_nonterm = " . shows fnt > . showString "\nfirst_term = " . shows ft > . showString "\neof = " . shows eof > . showString "\n" #endif > data Assoc = LeftAssoc | RightAssoc | None #ifdef DEBUG > deriving Show #endif > data Priority = No | Prio Assoc Int | PrioLowest #ifdef DEBUG > deriving Show #endif > instance Eq Priority where > No == No = True > Prio _ i == Prio _ j = i == j > _ == _ = False > mkPrio :: Int -> Directive a -> Priority > mkPrio i (TokenNonassoc _) = Prio None i > mkPrio i (TokenRight _) = Prio RightAssoc i > mkPrio i (TokenLeft _) = Prio LeftAssoc i > mkPrio _ _ = error "Panic: impossible case in mkPrio" ----------------------------------------------------------------------------- -- Magic name values All the tokens in the grammar are mapped onto integers, for speed. The namespace is broken up as follows: epsilon = 0 error = 1 dummy = 2 %start = 3..s non-terminals = s..n terminals = n..m %eof = m These numbers are deeply magical, change at your own risk. Several other places rely on these being arranged as they are, including ProduceCode.lhs and the various HappyTemplates. Unfortunately this means you can't tell whether a given token is a terminal or non-terminal without knowing the boundaries of the namespace, which are kept in the Grammar structure. In hindsight, this was probably a bad idea. 
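For instance (an illustrative sketch only, assuming nothing beyond the Grammar fields defined above; these helpers do not exist in this module, and are left without bird tracks so they stay commentary), classifying a Name comes down to comparing it against those boundaries:

    -- hypothetical helpers, not part of Grammar.lhs
    isTerminal, isNonTerminal :: Grammar -> Name -> Bool
    isTerminal    g t = t == errorTok || t >= first_term g
    isNonTerminal g t = t >= firstStartTok && t < first_term g

The same comparison appears inline elsewhere, e.g. First.lhs treats anything equal to errorTok or at least first_term as a terminal.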
> startName, eofName, errorName, dummyName :: String > startName = "%start" -- with a suffix, like %start_1, %start_2 etc. > eofName = "%eof" > errorName = "error" > dummyName = "%dummy" -- shouldn't occur in the grammar anywhere > firstStartTok, dummyTok, errorTok, epsilonTok :: Name > firstStartTok = 3 > dummyTok = 2 > errorTok = 1 > epsilonTok = 0 ----------------------------------------------------------------------------- -- The Mangler This bit is a real mess, mainly because of the error message support. > type ErrMsg = String > type M a = Writer [ErrMsg] a > addErr :: ErrMsg -> M () > addErr e = tell [e] > mangler :: FilePath -> AbsSyn -> Either [ErrMsg] Grammar > mangler file abssyn > | null errs = Right g > | otherwise = Left errs > where (g, errs) = runWriter (manglerM file abssyn) > manglerM :: FilePath -> AbsSyn -> M Grammar > manglerM file (AbsSyn _hd dirs rules' _tl) = > -- add filename to all error messages > mapWriter (\(a,e) -> (a, map (\s -> file ++ ": " ++ s) e)) $ do > rules <- case expand_rules rules' of > Left err -> addErr err >> return [] > Right as -> return as > nonterm_strs <- checkRules [n | Rule1 n _ _ <- rules] "" [] > let > terminal_strs = concat (map getTerm dirs) ++ [eofName] > n_starts = length starts' > n_nts = length nonterm_strs > n_ts = length terminal_strs > first_nt = firstStartTok + n_starts > first_t = first_nt + n_nts > last_start = first_nt - 1 > last_nt = first_t - 1 > last_t = first_t + n_ts - 1 > start_names = [ firstStartTok .. last_start ] > nonterm_names = [ first_nt .. last_nt ] > terminal_names = [ first_t .. last_t ] > starts' = case getParserNames dirs of > [] -> [TokenName "happyParse" Nothing False] > ns -> ns > > start_strs = [ startName++'_':p | (TokenName p _ _) <- starts' ] Build up a mapping from name values to strings. > name_env = (errorTok, errorName) : > (dummyTok, dummyName) : > zip start_names start_strs ++ > zip nonterm_names nonterm_strs ++ > zip terminal_names terminal_strs > lookupName :: String -> [Name] > lookupName n = [ t | (t,r) <- name_env, r == n ] > mapToName str' = > case lookupName str' of > [a] -> return a > [] -> do addErr ("unknown identifier '" ++ str' ++ "'") > return errorTok > (a:_) -> do addErr ("multiple use of '" ++ str' ++ "'") > return a Start symbols... > -- default start token is the first non-terminal in the grammar > lookupStart (TokenName _ Nothing _) = return first_nt > lookupStart (TokenName _ (Just n) _) = mapToName n > lookupStart _ = error "lookupStart: Not a TokenName" > -- in > start_toks <- mapM lookupStart starts' > let > parser_names = [ s | TokenName s _ _ <- starts' ] > start_partials = [ b | TokenName _ _ b <- starts' ] > start_prods = zipWith (\nm tok -> Production nm [tok] ("no code",[]) No) > start_names start_toks Deal with priorities... > priodir = zip [1..] (getPrios dirs) > > prios = [ (name,mkPrio i dir) > | (i,dir) <- priodir > , nm <- AbsSyn.getPrioNames dir > , name <- lookupName nm > ] > prioByString = [ (name, mkPrio i dir) > | (i,dir) <- priodir > , name <- AbsSyn.getPrioNames dir > ] Translate the rules from string to name-based. 
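(A made-up illustration: with the default single %start symbol and a first rule defining "exp", name_env begins (1,"error"), (2,"%dummy"), (3,"%start_happyParse"), (4,"exp"), ..., so mapToName "exp" returns 4; an identifier with no entry is reported as unknown and mapped to errorTok so mangling can continue.)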
> convNT (Rule1 nt prods ty) > = do nt' <- mapToName nt > return (nt', prods, ty) > > attrs = getAttributes dirs > attrType = fromMaybe "HappyAttrs" (getAttributetype dirs) > > transRule (nt, prods, _ty) > = mapM (finishRule nt) prods > > finishRule :: Name -> Prod1 -> Writer [ErrMsg] Production > finishRule nt (Prod1 lhs code line prec) > = mapWriter (\(a,e) -> (a, map (addLine line) e)) $ do > lhs' <- mapM mapToName lhs > code' <- checkCode (length lhs) lhs' nonterm_names code attrs > case mkPrec lhs' prec of > Left s -> do addErr ("Undeclared precedence token: " ++ s) > return (Production nt lhs' code' No) > Right p -> return (Production nt lhs' code' p) > > mkPrec :: [Name] -> Prec -> Either String Priority > mkPrec lhs PrecNone = > case filter (flip elem terminal_names) lhs of > [] -> Right No > xs -> case lookup (last xs) prios of > Nothing -> Right No > Just p -> Right p > mkPrec _ (PrecId s) = > case lookup s prioByString of > Nothing -> Left s > Just p -> Right p > > mkPrec _ PrecShift = Right PrioLowest > > -- in > rules1 <- mapM convNT rules > rules2 <- mapM transRule rules1 > let > type_env = [(nt, t) | Rule1 nt _ (Just (t,[])) <- rules] ++ > [(nt, getTokenType dirs) | nt <- terminal_strs] -- XXX: Doesn't handle $$ type! > > fixType (ty,s) = go "" ty > where go acc [] = return (reverse acc) > go acc (c:r) | isLower c = -- look for a run of alphanumerics starting with a lower case letter > let (cs,r1) = span isAlphaNum r > go1 x = go (reverse x ++ acc) r1 > in case lookup (c:cs) s of > Nothing -> go1 (c:cs) -- no binding found > Just a -> case lookup a type_env of > Nothing -> do > addErr ("Parameterized rule argument '" ++ a ++ "' does not have type") > go1 (c:cs) > Just t -> go1 $ "(" ++ t ++ ")" > | otherwise = go (c:acc) r > > convType (nm, t) > = do t' <- fixType t > return (nm, t') > > -- in > tys <- mapM convType [ (nm, t) | (nm, _, Just t) <- rules1 ] > > let > type_array :: Array Int (Maybe String) > type_array = accumArray (\_ x -> x) Nothing (first_nt, last_nt) > [ (nm, Just t) | (nm, t) <- tys ] > env_array :: Array Int String > env_array = array (errorTok, last_t) name_env > -- in Get the token specs in terms of Names. > let > fixTokenSpec (a,b) = do n <- mapToName a; return (n,b) > -- in > tokspec <- mapM fixTokenSpec (getTokenSpec dirs) > let > ass = combinePairs [ (a,no) > | (Production a _ _ _,no) <- zip productions' [0..] ] > arr = array (firstStartTok, length ass - 1 + firstStartTok) ass > lookup_prods :: Name -> [Int] > lookup_prods x | x >= firstStartTok && x < first_t = arr ! x > lookup_prods _ = error "lookup_prods" > > productions' = start_prods ++ concat rules2 > prod_array = listArray (0,length productions' - 1) productions' > -- in > return (Grammar { > productions = productions', > lookupProdNo = (prod_array !), > lookupProdsOfName = lookup_prods, > token_specs = tokspec, > terminals = errorTok : terminal_names, > non_terminals = start_names ++ nonterm_names, > -- INCLUDES the %start tokens > starts = zip4 parser_names start_names start_toks > start_partials, > types = type_array, > token_names = env_array, > first_nonterm = first_nt, > first_term = first_t, > eof_term = last terminal_names, > priorities = prios, > imported_identity = getImportedIdentity dirs, > monad = getMonad dirs, > lexer = getLexer dirs, > error_handler = getError dirs, > error_sig = getErrorHandlerType dirs, > token_type = getTokenType dirs, > expect = getExpect dirs, > attributes = attrs, > attributetype = attrType > }) For combining actions with possible error messages. 
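(For example, addLine 14 "$3 out of range" is "14: $3 out of range"; finishRule applies it so that messages arising from a production's code block carry that production's source line.)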
> addLine :: Int -> String -> String > addLine l s = show l ++ ": " ++ s > getTerm :: Directive a -> [a] > getTerm (TokenSpec stuff) = map fst stuff > getTerm _ = [] So is this. > checkRules :: [String] -> String -> [String] -> Writer [ErrMsg] [String] > checkRules (name:rest) above nonterms > | name == above = checkRules rest name nonterms > | name `elem` nonterms > = do addErr ("Multiple rules for '" ++ name ++ "'") > checkRules rest name nonterms > | otherwise = checkRules rest name (name : nonterms) > checkRules [] _ nonterms = return (reverse nonterms) ----------------------------------------------------------------------------- -- If any attribute directives were used, we are in an attribute grammar, so -- go do special processing. If not, pass on to the regular processing routine > checkCode :: Int -> [Name] -> [Name] -> String -> [(String,String)] -> M (String,[Int]) > checkCode arity _ _ code [] = doCheckCode arity code > checkCode arity lhs nonterm_names code attrs = rewriteAttributeGrammar arity lhs nonterm_names code attrs ------------------------------------------------------------------------------ -- Special processing for attribute grammars. We re-parse the body of the code -- block and output the nasty-looking record manipulation and let binding goop -- > rewriteAttributeGrammar :: Int -> [Name] -> [Name] -> String -> [(String,String)] -> M (String,[Int]) > rewriteAttributeGrammar arity lhs nonterm_names code attrs = first we need to parse the body of the code block > case runP agParser code 0 of > Left msg -> do addErr ("error in attribute grammar rules: "++msg) > return ("",[]) > Right rules -> now we break the rules into three lists, one for synthesized attributes, one for inherited attributes, and one for conditionals > let (selfRules,subRules,conditions) = partitionRules [] [] [] rules > attrNames = map fst attrs > defaultAttr = head attrNames now check that $i references are in range > in do let prods = mentionedProductions rules > mapM_ checkArity prods and output the rules > rulesStr <- formatRules arity attrNames defaultAttr > allSubProductions selfRules > subRules conditions return the munged code body and all sub-productions mentioned > return (rulesStr,nub (allSubProductions++prods)) > where partitionRules a b c [] = (a,b,c) > partitionRules a b c (RightmostAssign attr toks : xs) = partitionRules a (SubAssign (arity,attr) toks : b) c xs > partitionRules a b c (x@(SelfAssign _ _ ) : xs) = partitionRules (x:a) b c xs > partitionRules a b c (x@(SubAssign _ _) : xs) = partitionRules a (x:b) c xs > partitionRules a b c (x@(Conditional _) : xs) = partitionRules a b (x:c) xs > allSubProductions = map (+1) (findIndices (`elem` nonterm_names) lhs) > mentionedProductions rules = [ i | (AgTok_SubRef (i,_)) <- concat (map getTokens rules) ] > getTokens (SelfAssign _ toks) = toks > getTokens (SubAssign _ toks) = toks > getTokens (Conditional toks) = toks > getTokens (RightmostAssign _ toks) = toks > > checkArity x = when (x > arity) $ addErr (show x++" out of range") ------------------------------------------------------------------------------------ -- Actually emit the code for the record bindings and conditionals -- > formatRules :: Int -> [String] -> String -> [Name] > -> [AgRule] -> [AgRule] -> [AgRule] > -> M String > formatRules arity _attrNames defaultAttr prods selfRules subRules conditions = return $ > concat [ "\\happyInhAttrs -> let { " > , "happySelfAttrs = happyInhAttrs",formattedSelfRules > , subProductionRules > , "; happyConditions = ", formattedConditions 
> , " } in (happyConditions,happySelfAttrs)" > ] > > where formattedSelfRules = case selfRules of [] -> []; _ -> "{ "++formattedSelfRules'++" }" > formattedSelfRules' = concat $ intersperse ", " $ map formatSelfRule selfRules > formatSelfRule (SelfAssign [] toks) = defaultAttr++" = "++(formatTokens toks) > formatSelfRule (SelfAssign attr toks) = attr++" = "++(formatTokens toks) > formatSelfRule _ = error "formatSelfRule: Not a self rule" > subRulesMap :: [(Int,[(String,[AgToken])])] > subRulesMap = map (\l -> foldr (\ (_,x) (i,xs) -> (i,x:xs)) > (fst $ head l,[snd $ head l]) > (tail l) ) . > groupBy (\x y -> (fst x) == (fst y)) . > sortBy (\x y -> compare (fst x) (fst y)) . > map (\(SubAssign (i,ident) toks) -> (i,(ident,toks))) $ subRules > subProductionRules = concat $ map formatSubRules prods > formatSubRules i = > let attrs = fromMaybe [] . lookup i $ subRulesMap > attrUpdates' = concat $ intersperse ", " $ map (formatSubRule i) attrs > attrUpdates = case attrUpdates' of [] -> []; x -> "{ "++x++" }" > in concat ["; (happyConditions_",show i,",happySubAttrs_",show i,") = ",mkHappyVar i > ," happyEmptyAttrs" > , attrUpdates > ] > > formattedConditions = concat $ intersperse " Prelude.++ " $ localConditions : (map (\i -> "happyConditions_"++(show i)) prods) > localConditions = "["++(concat $ intersperse ", " $ map formatCondition conditions)++"]" > formatCondition (Conditional toks) = formatTokens toks > formatCondition _ = error "formatCondition: Not a condition" > formatSubRule _ ([],toks) = defaultAttr++" = "++(formatTokens toks) > formatSubRule _ (attr,toks) = attr++" = "++(formatTokens toks) > formatTokens tokens = concat (map formatToken tokens) > formatToken AgTok_LBrace = "{ " > formatToken AgTok_RBrace = "} " > formatToken AgTok_Where = "where " > formatToken AgTok_Semicolon = "; " > formatToken AgTok_Eq = "=" > formatToken (AgTok_SelfRef []) = "("++defaultAttr++" happySelfAttrs) " > formatToken (AgTok_SelfRef x) = "("++x++" happySelfAttrs) " > formatToken (AgTok_RightmostRef x) = formatToken (AgTok_SubRef (arity,x)) > formatToken (AgTok_SubRef (i,[])) > | i `elem` prods = "("++defaultAttr++" happySubAttrs_"++(show i)++") " > | otherwise = mkHappyVar i ++ " " > formatToken (AgTok_SubRef (i,x)) > | i `elem` prods = "("++x++" happySubAttrs_"++(show i)++") " > | otherwise = error ("lhs "++(show i)++" is not a non-terminal") > formatToken (AgTok_Unknown x) = x++" " > formatToken AgTok_EOF = error "formatToken AgTok_EOF" ----------------------------------------------------------------------------- -- Check for every $i that i is <= the arity of the rule. -- At the same time, we collect a list of the variables actually used in this -- code, which is used by the backend. 
> doCheckCode :: Int -> String -> M (String, [Int]) > doCheckCode arity code0 = go code0 "" [] > where go code acc used = > case code of > [] -> return (reverse acc, used) > > '"' :r -> case reads code :: [(String,String)] of > [] -> go r ('"':acc) used > (s,r'):_ -> go r' (reverse (show s) ++ acc) used > a:'\'' :r | isAlphaNum a -> go r ('\'':a:acc) used > '\'' :r -> case reads code :: [(Char,String)] of > [] -> go r ('\'':acc) used > (c,r'):_ -> go r' (reverse (show c) ++ acc) used > '\\':'$':r -> go r ('$':acc) used > > '$':'>':r -- the "rightmost token" > | arity == 0 -> do addErr "$> in empty rule" > go r acc used > | otherwise -> go r (reverse (mkHappyVar arity) ++ acc) > (arity : used) > > '$':r@(i:_) | isDigit i -> > case reads r :: [(Int,String)] of > (j,r'):_ -> > if j > arity > then do addErr ('$': show j ++ " out of range") > go r' acc used > else go r' (reverse (mkHappyVar j) ++ acc) > (j : used) > [] -> error "doCheckCode []" > c:r -> go r (c:acc) used > mkHappyVar :: Int -> String > mkHappyVar n = "happy_var_" ++ show n ----------------------------------------------------------------------------- -- Internal Reduction Datatypes > data LRAction = LR'Shift Int Priority -- state number and priority > | LR'Reduce Int Priority-- rule no and priority > | LR'Accept -- :-) > | LR'Fail -- :-( > | LR'MustFail -- :-( > | LR'Multiple [LRAction] LRAction -- conflict > deriving(Eq #ifdef DEBUG > ,Show #endif > ) > type ActionTable = Array Int{-state-} (Array Int{-terminal#-} LRAction) instance Text LRAction where showsPrec _ (LR'Shift i _) = showString ("s" ++ show i) showsPrec _ (LR'Reduce i _) = showString ("r" ++ show i) showsPrec _ (LR'Accept) = showString ("acc") showsPrec _ (LR'Fail) = showString (" ") instance Eq LRAction where { (==) = primGenericEq } > data Goto = Goto Int | NoGoto > deriving(Eq #ifdef DEBUG > ,Show #endif > ) > type GotoTable = Array Int{-state-} (Array Int{-nonterminal #-} Goto) happy-1.20.1.1/src/Info.lhs0000644000000000000000000001610507346545000013441 0ustar0000000000000000----------------------------------------------------------------------------- Generating info files. (c) 1993-2001 Andy Gill, Simon Marlow ----------------------------------------------------------------------------- > module Info (genInfoFile) where > import Paths_happy ( version ) > import LALR ( Lr0Item(..) ) > import GenUtils ( str, interleave, interleave' ) > import Data.Set ( Set ) > import qualified Data.Set as Set hiding ( Set ) > import Grammar > import Data.Array > import Data.List (nub) > import Data.Version ( showVersion ) Produce a file of parser information, useful for debugging the parser. > genInfoFile > :: [Set Lr0Item] > -> Grammar > -> ActionTable > -> GotoTable > -> [(Int,String)] > -> Array Int (Int,Int) > -> String > -> [Int] -- unused rules > -> [String] -- unused terminals > -> String > genInfoFile items > (Grammar { productions = prods > , lookupProdNo = lookupProd > , lookupProdsOfName = lookupProdNos > , non_terminals = nonterms > , token_names = env > }) > action goto tokens conflictArray filename unused_rules unused_terminals > = (showHeader > . showConflicts > . showUnused > . showProductions > . showTerminals > . showNonTerminals > . showStates > . showStats > ) "" > where > showHeader > = banner ("Info file generated by Happy Version " ++ > showVersion version ++ " from " ++ filename) > showConflicts > = str "\n" > . foldr (.) id (map showConflictsState (assocs conflictArray)) > . 
str "\n" > showConflictsState (_, (0,0)) = id > showConflictsState (state, (sr,rr)) > = str "state " > . shows state > . str " contains " > . interleave' " and " ( > (if sr /= 0 > then [ shows sr . str " shift/reduce conflicts" ] > else []) ++ > if rr /= 0 > then [ shows rr . str " reduce/reduce conflicts" ] > else []) > . str ".\n" > showUnused = > (case unused_rules of > [] -> id > _ -> interleave "\n" ( > map (\r -> str "rule " > . shows r > . str " is unused") > unused_rules) > . str "\n") > . (case unused_terminals of > [] -> id > _ -> interleave "\n" ( > map (\t -> str "terminal " > . str t > . str " is unused") > unused_terminals) > . str "\n") > showProductions = > banner "Grammar" > . interleave "\n" (zipWith showProduction prods [ 0 :: Int .. ]) > . str "\n" > showProduction (Production nt toks _sem _prec) i > = ljuststr 50 ( > str "\t" > . showName nt > . str " -> " > . interleave " " (map showName toks)) > . str " (" . shows i . str ")" > showStates = > banner "States" > . interleave "\n" (zipWith showState > (map Set.toAscList items) [ 0 :: Int .. ]) > showState state n > = str "State ". shows n > . str "\n\n" > . interleave "\n" (map showItem selectedItems) > . str "\n" > . foldr (.) id (map showAction (assocs (action ! n))) > . str "\n" > . foldr (.) id (map showGoto (assocs (goto ! n))) > where > nonRuleItems = [ (Lr0 r d) | (Lr0 r d) <- state, d /= 0 ] > selectedItems = if null nonRuleItems then take 1 state else nonRuleItems > -- andreasabel, 2019-11-12, issue #161: > -- Items that start with a dot (@d == 0@) are usually added by completion > -- and thus redundant and dropped from the printout (@nonRuleItems@). > -- However, if the initial item started with a dot, it should not be dropped, > -- otherwise there will be no items left. Thus, should there be no items > -- not starting with a dot, we print the initial item. > showItem (Lr0 rule dot) > = ljuststr 50 ( > str "\t" > . showName nt > . str " -> " > . interleave " " (map showName beforeDot) > . str ". " > . interleave " " (map showName afterDot)) > . str " (rule " . shows rule . str ")" > where > Production nt toks _sem _prec = lookupProd rule > (beforeDot, afterDot) = splitAt dot toks > showAction (_, LR'Fail) > = id > showAction (t, act) > = str "\t" > . showJName 15 t > . showAction' act > . str "\n" > showAction' LR'MustFail > = str "fail" > showAction' (LR'Shift n _) > = str "shift, and enter state " > . shows n > showAction' LR'Accept > = str "accept" > showAction' (LR'Reduce n _) > = str "reduce using rule " > . shows n > showAction' (LR'Multiple as a) > = showAction' a > . str "\n" > . interleave "\n" > (map (\a' -> str "\t\t\t(" . showAction' a' . str ")") > (nub (filter (/= a) as))) > showAction' LR'Fail = error "showAction' LR'Fail: Unhandled case" > showGoto (_, NoGoto) > = id > showGoto (nt, Goto n) > = str "\t" > . showJName 15 nt > . str "goto state " > . shows n > . str "\n" > showTerminals > = banner "Terminals" > . interleave "\n" (map showTerminal tokens) > . str "\n" > showTerminal (t,s) > = str "\t" > . showJName 15 t > . str "{ " . str s . str " }" > showNonTerminals > = banner "Non-terminals" > . interleave "\n" (map showNonTerminal nonterms) > . str "\n" > showNonTerminal nt > = str "\t" > . showJName 15 nt > . (if (length nt_rules == 1) > then str " rule " > else str " rules ") > . foldr1 (\a b -> a . str ", " . b) nt_rules > where nt_rules = map shows (lookupProdNos nt) > showStats > = banner "Grammar Totals" > . str "Number of rules: " . shows (length prods) > . 
str "\nNumber of terminals: " . shows (length tokens) > . str "\nNumber of non-terminals: " . shows (length nonterms) > . str "\nNumber of states: " . shows (length items) > . str "\n" > nameOf n = env ! n > showName = str . nameOf > showJName j = str . ljustify j . nameOf > ljustify :: Int -> String -> String > ljustify n s = s ++ replicate (max 0 (n - length s)) ' ' > ljuststr :: Int -> (String -> String) -> String -> String > ljuststr n s = str (ljustify n (s "")) > banner :: String -> String -> String > banner s > = str "-----------------------------------------------------------------------------\n" > . str s > . str "\n-----------------------------------------------------------------------------\n" happy-1.20.1.1/src/LALR.lhs0000644000000000000000000006046107346545000013304 0ustar0000000000000000----------------------------------------------------------------------------- Generation of LALR parsing tables. (c) 1993-1996 Andy Gill, Simon Marlow (c) 1997-2001 Simon Marlow ----------------------------------------------------------------------------- > module LALR > (genActionTable, genGotoTable, genLR0items, precalcClosure0, > propLookaheads, calcLookaheads, mergeLookaheadInfo, countConflicts, > Lr0Item(..), Lr1Item) > where > import GenUtils > import Data.Set ( Set ) > import qualified Data.Set as Set hiding ( Set ) > import qualified NameSet > import NameSet ( NameSet ) > import Grammar > import Control.Monad (guard) > import Control.Monad.ST > import Data.Array.ST > import Data.Array as Array > import Data.List (nub,foldl',groupBy,sortBy) > import Data.Function (on) > import Data.Maybe (listToMaybe, maybeToList) > unionMap :: (Ord b) => (a -> Set b) -> Set a -> Set b > unionMap f = Set.foldr (Set.union . f) Set.empty > unionNameMap :: (Name -> NameSet) -> NameSet -> NameSet > unionNameMap f = NameSet.foldr (NameSet.union . f) NameSet.empty This means rule $a$, with dot at $b$ (all starting at 0) > data Lr0Item = Lr0 {-#UNPACK#-}!Int {-#UNPACK#-}!Int -- (rule, dot) > deriving (Eq,Ord #ifdef DEBUG > ,Show #endif > ) > data Lr1Item = Lr1 {-#UNPACK#-}!Int {-#UNPACK#-}!Int NameSet -- (rule, dot, lookahead) #ifdef DEBUG > deriving (Show) #endif > type RuleList = [Lr0Item] ----------------------------------------------------------------------------- Generating the closure of a set of LR(0) items Precalculate the rule closure for each non-terminal in the grammar, using a memo table so that no work is repeated. 
> precalcClosure0 :: Grammar -> Name -> RuleList > precalcClosure0 g = > \n -> maybe [] id (lookup n info') > where > > info' :: [(Name, RuleList)] > info' = map (\(n,rules) -> (n,map (\rule -> Lr0 rule 0) (NameSet.toAscList rules))) info > info :: [(Name, NameSet)] > info = mkClosure (==) (\f -> map (follow f) f) > (map (\nt -> (nt,NameSet.fromList (lookupProdsOfName g nt))) nts) > follow :: [(Name, NameSet)] -> (Name, NameSet) -> (Name, NameSet) > follow f (nt,rules) = (nt, unionNameMap (followNT f) rules `NameSet.union` rules) > followNT :: [(Name, NameSet)] -> Int -> NameSet > followNT f rule = > case findRule g rule 0 of > Just nt | nt >= firstStartTok && nt < fst_term -> > maybe (error "followNT") id (lookup nt f) > _ -> NameSet.empty > nts = non_terminals g > fst_term = first_term g > closure0 :: Grammar -> (Name -> RuleList) -> Set Lr0Item -> Set Lr0Item > closure0 g closureOfNT set = Set.foldr addRules Set.empty set > where > fst_term = first_term g > addRules rule set' = Set.union (Set.fromList (rule : closureOfRule rule)) set' > > closureOfRule (Lr0 rule dot) = > case findRule g rule dot of > (Just nt) | nt >= firstStartTok && nt < fst_term > -> closureOfNT nt > _ -> [] ----------------------------------------------------------------------------- Generating the closure of a set of LR(1) items > closure1 :: Grammar -> ([Name] -> NameSet) -> [Lr1Item] -> [Lr1Item] > closure1 g first set > = fst (mkClosure (\(_,new) _ -> null new) addItems ([],set)) > where > fst_term = first_term g > addItems :: ([Lr1Item],[Lr1Item]) -> ([Lr1Item],[Lr1Item]) > addItems (old_items, new_items) = (new_old_items, new_new_items) > where > new_old_items = new_items `union_items` old_items > new_new_items = subtract_items > (foldr union_items [] (map fn new_items)) > new_old_items > fn :: Lr1Item -> [Lr1Item] > fn (Lr1 rule dot as) = case drop dot lhs of > (b:beta) | b >= firstStartTok && b < fst_term -> > let terms = unionNameMap > (\a -> first (beta ++ [a])) as > in > [ (Lr1 rule' 0 terms) | rule' <- lookupProdsOfName g b ] > _ -> [] > where Production _name lhs _ _ = lookupProdNo g rule Subtract the first set of items from the second. > subtract_items :: [Lr1Item] -> [Lr1Item] -> [Lr1Item] > subtract_items items1 items2 = foldr (subtract_item items2) [] items1 These utilities over item sets are crucial to performance. Stamp on overloading with judicious use of type signatures... > subtract_item :: [Lr1Item] -> Lr1Item -> [Lr1Item] -> [Lr1Item] > subtract_item [] i result = i : result > subtract_item ((Lr1 rule dot as):items) i@(Lr1 rule' dot' as') result = > case compare rule' rule of > LT -> i : result > GT -> carry_on > EQ -> case compare dot' dot of > LT -> i : result > GT -> carry_on > EQ -> case NameSet.difference as' as of > bs | NameSet.null bs -> result > | otherwise -> (Lr1 rule dot bs) : result > where > carry_on = subtract_item items i result Union two sets of items. 
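The union below behaves like the merge step of a merge sort over lists kept sorted by (rule, dot), except that when the same item occurs on both sides the lookahead sets are combined. An illustrative example (commentary, not compiled; lookaheads are NameSets of token numbers):

  union_items [Lr1 1 0 (NameSet.fromList [5])]
              [Lr1 1 0 (NameSet.fromList [6]), Lr1 2 1 (NameSet.fromList [5])]
    -- ==> [Lr1 1 0 (NameSet.fromList [5,6]), Lr1 2 1 (NameSet.fromList [5])]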
> union_items :: [Lr1Item] -> [Lr1Item] -> [Lr1Item] > union_items is [] = is > union_items [] is = is > union_items (i@(Lr1 rule dot as):is) (i'@(Lr1 rule' dot' as'):is') = > case compare rule rule' of > LT -> drop_i > GT -> drop_i' > EQ -> case compare dot dot' of > LT -> drop_i > GT -> drop_i' > EQ -> (Lr1 rule dot (as `NameSet.union` as')) : union_items is is' > where > drop_i = i : union_items is (i':is') > drop_i' = i' : union_items (i:is) is' ----------------------------------------------------------------------------- goto(I,X) function The input should be the closure of a set of kernel items I together with a token X (terminal or non-terminal). Output will be the set of kernel items for the set of items goto(I,X) > gotoClosure :: Grammar -> Set Lr0Item -> Name -> Set Lr0Item > gotoClosure gram i x = unionMap fn i > where > fn (Lr0 rule_no dot) = > case findRule gram rule_no dot of > Just t | x == t -> Set.singleton (Lr0 rule_no (dot+1)) > _ -> Set.empty ----------------------------------------------------------------------------- Generating LR0 Item sets The item sets are generated in much the same way as we find the closure of a set of items: we use two sets, those which have already generated more sets, and those which have just been generated. We keep iterating until the second set is empty. The addItems function is complicated by the fact that we need to keep information about which sets were generated by which others. > type ItemSetWithGotos = (Set Lr0Item, [(Name,Int)]) > genLR0items :: Grammar -> (Name -> RuleList) -> [ItemSetWithGotos] > genLR0items g precalcClosures > = fst (mkClosure (\(_,new) _ -> null new) > addItems > (([],startRules))) > where > n_starts = length (starts g) > startRules :: [Set Lr0Item] > startRules = [ Set.singleton (Lr0 rule 0) | rule <- [0..n_starts-1] ] > tokens = non_terminals g ++ terminals g > addItems :: ([ItemSetWithGotos], [Set Lr0Item]) > -> ([ItemSetWithGotos], [Set Lr0Item]) > > addItems (oldSets,newSets) = (newOldSets, reverse newNewSets) > where > > newOldSets = oldSets ++ (zip newSets intgotos) > itemSets = map fst oldSets ++ newSets First thing to do is for each set I in newSets, generate goto(I,X) for each token (terminals and nonterminals) X. > gotos :: [[(Name,Set Lr0Item)]] > gotos = map (filter (not . Set.null . snd)) > (map (\i -> let i' = closure0 g precalcClosures i in > [ (x,gotoClosure g i' x) | x <- tokens ]) newSets) Next, we assign each new set a number, which is the index of this set in the list of sets comprising all the sets generated so far plus those generated in this iteration. We also filter out those sets that are new, i.e. don't exist in the current list of sets, so that they can be added. We also have to make sure that there are no duplicate sets in the *current* batch of goto(I,X) sets, as this could be disastrous. I think I've squished this one with the '++ reverse newSets' in numberSets. numberSets is built this way so we can use it quite neatly with a foldr. Unfortunately, the code's a little opaque.
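A small walk-through may help (hypothetical data, commentary only). Suppose four sets exist already (numbered 0 to 3) and the current iteration produced two goto sets: G1, which is genuinely new, and G2, which is identical to the existing set 2. Then

  numberSets [(x,G1),(y,G2)] (4, [[]], [])
    -- ==> (5, [[(y,2),(x,4)]], [G1])

G1 is assigned the next free number 4 and queued (in the third component) for the next iteration, while the transition on y is pointed back at the existing set 2.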
> numberSets > :: [(Name,Set Lr0Item)] > -> (Int, > [[(Name,Int)]], > [Set Lr0Item]) > -> (Int, [[(Name,Int)]], [Set Lr0Item]) > > numberSets [] (i,gotos',newSets') = (i,([]:gotos'),newSets') > numberSets ((x,gotoix):rest) (i,g':gotos',newSets') > = numberSets rest > (case indexInto 0 gotoix (itemSets ++ reverse newSets') of > Just j -> (i, ((x,j):g'):gotos', newSets') > Nothing -> (i+1,((x,i):g'):gotos', gotoix:newSets')) > numberSets _ _ = error "genLR0items/numberSets: Unhandled case" Finally, do some fiddling around to get this all in the form we want. > intgotos :: [[(Name,Int)]] > newNewSets :: [Set Lr0Item] > (_, ([]:intgotos), newNewSets) = > foldr numberSets (length newOldSets, [[]], []) gotos > indexInto :: Eq a => Int -> a -> [a] -> Maybe Int > indexInto _ _ [] = Nothing > indexInto i x (y:ys) | x == y = Just i > | otherwise = let j = i + 1 in j `seq` indexInto j x ys ----------------------------------------------------------------------------- Computing propagation of lookaheads ToDo: generate this info into an array to be used in the subsequent calcLookaheads pass. > propLookaheads > :: Grammar > -> [(Set Lr0Item,[(Name,Int)])] -- LR(0) kernel sets > -> ([Name] -> NameSet) -- First function > -> ( > [(Int, Lr0Item, NameSet)], -- spontaneous lookaheads > Array Int [(Lr0Item, Int, Lr0Item)] -- propagated lookaheads > ) > propLookaheads gram sets first = (concat s, array (0,length sets - 1) > [ (a,b) | (a,b) <- p ]) > where > (s,p) = unzip (zipWith propLASet sets [0..]) > propLASet :: (Set Lr0Item, [(Name, Int)]) -> Int -> ([(Int, Lr0Item, NameSet)],(Int,[(Lr0Item, Int, Lr0Item)])) > propLASet (set,goto) i = (start_spont ++ concat s', (i, concat p')) > where > (s',p') = unzip (map propLAItem (Set.toAscList set)) > -- spontaneous EOF lookaheads for each start state & rule... > start_info :: [(String, Name, Name, Bool)] > start_info = starts gram > start_spont :: [(Int, Lr0Item ,NameSet)] > start_spont = [ (start, (Lr0 start 0), > NameSet.singleton (startLookahead gram partial)) > | (start, (_,_,_,partial)) <- > zip [0..] start_info] > propLAItem :: Lr0Item -> ([(Int, Lr0Item, NameSet)], [(Lr0Item, Int, Lr0Item)]) > propLAItem item@(Lr0 rule dot) = (spontaneous, propagated) > where > lookupGoto msg x = maybe (error msg) id (lookup x goto) > j = closure1 gram first [Lr1 rule dot (NameSet.singleton dummyTok)] > spontaneous :: [(Int, Lr0Item, NameSet)] > spontaneous = do > (Lr1 rule' dot' ts) <- j > let ts' = NameSet.delete dummyTok ts > guard (not $ NameSet.null ts') > maybeToList $ do r <- findRule gram rule' dot' > return ( lookupGoto "spontaneous" r > , Lr0 rule' (dot' + 1) > , ts' ) > propagated :: [(Lr0Item, Int, Lr0Item)] > propagated = do > (Lr1 rule' dot' ts) <- j > guard $ NameSet.member dummyTok ts > maybeToList $ do r <- findRule gram rule' dot' > return ( item > , lookupGoto "propagated" r > , Lr0 rule' (dot' + 1) ) The lookahead for a start rule depends on whether it was declared with %name or %partial: a %name parser is assumed to parse the whole input, ending with EOF, whereas a %partial parser may parse only a part of the input: it accepts when the error token is found. 
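In other words (commentary): a %name parser's start item is given the grammar's EOF token as lookahead, while a %partial parser's start item is given errorTok, so it can accept before the input is exhausted:

  startLookahead g False  -- %name:    eof_term g
  startLookahead g True   -- %partial: errorTok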
> startLookahead :: Grammar -> Bool -> Name > startLookahead gram partial = if partial then errorTok else eof_term gram ----------------------------------------------------------------------------- Calculate lookaheads Special version using a mutable array: > calcLookaheads > :: Int -- number of states > -> [(Int, Lr0Item, NameSet)] -- spontaneous lookaheads > -> Array Int [(Lr0Item, Int, Lr0Item)] -- propagated lookaheads > -> Array Int [(Lr0Item, NameSet)] > calcLookaheads n_states spont prop > = runST $ do > arr <- newArray (0,n_states) [] > propagate arr (fold_lookahead spont) > freeze arr > where > propagate :: STArray s Int [(Lr0Item, NameSet)] > -> [(Int, Lr0Item, NameSet)] -> ST s () > propagate _ [] = return () > propagate arr new = do > let > items = [ (i,item'',s) | (j,item,s) <- new, > (item',i,item'') <- prop ! j, > item == item' ] > new_new <- get_new arr items [] > add_lookaheads arr new > propagate arr new_new This function is needed to merge all the (set_no,item,name) triples into (set_no, item, set name) triples. It can be removed when we get the spontaneous lookaheads in the right form to begin with (ToDo). > add_lookaheads :: STArray s Int [(Lr0Item, NameSet)] > -> [(Int, Lr0Item, NameSet)] > -> ST s () > add_lookaheads arr = mapM_ $ \(i,item,s) > -> do las <- readArray arr i > writeArray arr i (add_lookahead item s las) > get_new :: STArray s Int [(Lr0Item, NameSet)] > -> [(Int, Lr0Item, NameSet)] > -> [(Int, Lr0Item, NameSet)] > -> ST s [(Int, Lr0Item, NameSet)] > get_new _ [] new = return new > get_new arr (l@(i,_item,_s):las) new = do > state_las <- readArray arr i > get_new arr las (get_new' l state_las new) > add_lookahead :: Lr0Item -> NameSet -> [(Lr0Item,NameSet)] -> > [(Lr0Item,NameSet)] > add_lookahead item s [] = [(item,s)] > add_lookahead item s (m@(item',s') : las) > | item == item' = (item, s `NameSet.union` s') : las > | otherwise = m : add_lookahead item s las > get_new' :: (Int,Lr0Item,NameSet) -> [(Lr0Item,NameSet)] -> > [(Int,Lr0Item,NameSet)] -> [(Int,Lr0Item,NameSet)] > get_new' l [] new = l : new > get_new' l@(i,item,s) ((item',s') : las) new > | item == item' = > let s'' = s NameSet.\\ s' in > if NameSet.null s'' then new else (i,item,s'') : new > | otherwise = > get_new' l las new > fold_lookahead :: [(Int,Lr0Item,NameSet)] -> [(Int,Lr0Item,NameSet)] > fold_lookahead = > map (\cs@(((a,b),_):_) -> (a,b,NameSet.unions $ map snd cs)) . > groupBy ((==) `on` fst) . > sortBy (compare `on` fst) . > map (\(a,b,c) -> ((a,b),c)) ----------------------------------------------------------------------------- Merge lookaheads Stick the lookahead info back into the state table. > mergeLookaheadInfo > :: Array Int [(Lr0Item, NameSet)] -- lookahead info > -> [(Set Lr0Item, [(Name,Int)])] -- state table > -> [ ([Lr1Item], [(Name,Int)]) ] > mergeLookaheadInfo lookaheads sets > = zipWith mergeIntoSet sets [0..] > where > mergeIntoSet :: (Set Lr0Item, [(Name, Int)]) -> Int -> ([Lr1Item], [(Name, Int)]) > mergeIntoSet (items, goto) i > = (map mergeIntoItem (Set.toAscList items), goto) > where > mergeIntoItem :: Lr0Item -> Lr1Item > mergeIntoItem item@(Lr0 rule dot) = Lr1 rule dot la > where la = case [ s | (item',s) <- lookaheads ! i, > item == item' ] of > [] -> NameSet.empty > [x] -> x > _ -> error "mergIntoItem" ----------------------------------------------------------------------------- Generate the goto table This is pretty straightforward, given all the information we stored while generating the LR0 sets of items. 
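Concretely (an illustrative sketch): the table has one row per state, each row an inner array indexed by non-terminal, so after a reduction the parser's transition is looked up as

  case goto ! st ! nt of
    Goto st' -> st'      -- a transition exists: enter state st'
    NoGoto   -> undefined -- never consulted for a well-formed table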
Generating the goto table doesn't need lookahead info. > genGotoTable :: Grammar -> [(Set Lr0Item,[(Name,Int)])] -> GotoTable > genGotoTable g sets = gotoTable > where > Grammar{ first_nonterm = fst_nonterm, > first_term = fst_term, > non_terminals = non_terms } = g > > -- goto array doesn't include %start symbols > gotoTable = listArray (0,length sets-1) > [ > (array (fst_nonterm, fst_term-1) [ > (n, maybe NoGoto Goto (lookup n goto)) > | n <- non_terms, > n >= fst_nonterm, n < fst_term ]) > | (_set,goto) <- sets ] ----------------------------------------------------------------------------- Generate the action table > genActionTable :: Grammar -> ([Name] -> NameSet) -> > [([Lr1Item],[(Name,Int)])] -> ActionTable > genActionTable g first sets = actionTable > where > Grammar { first_term = fst_term, > terminals = terms, > starts = starts', > priorities = prios } = g > n_starts = length starts' > isStartRule rule = rule < n_starts -- a bit hacky, but it'll do for now > term_lim = (head terms,last terms) > actionTable = array (0,length sets-1) > [ (set_no, accumArray res > LR'Fail term_lim > (possActions goto set)) > | ((set,goto),set_no) <- zip sets [0..] ] > possAction goto _set (Lr1 rule pos la) = > case findRule g rule pos of > Just t | t >= fst_term || t == errorTok -> > let f j = (t,LR'Shift j p) > p = maybe No id (lookup t prios) > in map f $ maybeToList (lookup t goto) > Nothing > | isStartRule rule > -> let (_,_,_,partial) = starts' !! rule in > [ (startLookahead g partial, LR'Accept{-'-}) ] > | otherwise > -> let Production _ _ _ p = lookupProdNo g rule in > NameSet.toAscList la `zip` repeat (LR'Reduce rule p) > _ -> [] > possActions goto coll = do item <- closure1 g first coll > possAction goto coll item These comments are now out of date! /JS Here's how we resolve conflicts, leaving a complete record of the conflicting actions in an LR'Multiple structure for later output in the info file. Shift/reduce conflicts are always resolved as shift actions, and reduce/reduce conflicts are resolved as a reduce action using the rule with the lowest number (i.e. the rule that comes first in the grammar file.) NOTES on LR'MustFail: this was introduced as part of the precedence parsing changes. The problem with LR'Fail is that it is a soft failure: we sometimes substitute an LR'Fail for an LR'Reduce (eg. when computing default actions), on the grounds that an LR'Fail in this state will also be an LR'Fail in the goto state, so we'll fail eventually. This may not be true with precedence parsing, though. If there are two non-associative operators together, we must fail at this point rather than reducing. Hence the use of LR'MustFail. NOTE: on (LR'Multiple as a) handling PCC [sep04] has changed this to have the following invariants: * the winning action appears only once, in the "a" slot * only reductions appear in the "as" list * there are no duplications This removes complications elsewhere, where LR'Multiples were building up tree structures... 
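For intuition (commentary, not compiled, with made-up state and rule numbers): with %left '+', the familiar ambiguity between reducing E -> E + E and shifting another '+' reaches res as a reduce/shift pair of equal priority, and left-associativity awards it to the reduction:

  res (LR'Reduce 3 (Prio LeftAssoc 1)) (LR'Shift 7 (Prio LeftAssoc 1))
    -- ==> LR'Reduce 3 (Prio LeftAssoc 1)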
> res LR'Fail x = x > res x LR'Fail = x > res LR'MustFail _ = LR'MustFail > res _ LR'MustFail = LR'MustFail > res x x' | x == x' = x > res (LR'Accept) _ = LR'Accept > res _ (LR'Accept) = LR'Accept > res (LR'Multiple as x) (LR'Multiple bs x') > | x == x' = LR'Multiple (nub $ as ++ bs) x > -- merge dropped reductions for identical action > | otherwise > = case res x x' of > LR'Multiple cs a > | a == x -> LR'Multiple (nub $ x' : as ++ bs ++ cs) x > | a == x' -> LR'Multiple (nub $ x : as ++ bs ++ cs) x' > | otherwise -> error "failed invariant in resolve" > -- last means an unexpected change > other -> other > -- merge dropped reductions for clashing actions, but only > -- if they were S/R or R/R > res a@(LR'Multiple _ _) b = res a (LR'Multiple [] b) > res a b@(LR'Multiple _ _) = res (LR'Multiple [] a) b > -- leave cases above to do the appropriate merging > res a@(LR'Shift {}) b@(LR'Reduce {}) = res b a > res a@(LR'Reduce _ p) b@(LR'Shift _ p') > = case (p,p') of > (PrioLowest,PrioLowest) -> LR'MustFail > (_,PrioLowest) -> a > (PrioLowest,_) -> b > (No,_) -> LR'Multiple [a] b -- shift wins > (_,No) -> LR'Multiple [a] b -- shift wins > (Prio c i, Prio _ j) > | i < j -> b > | i > j -> a > | otherwise -> > case c of > LeftAssoc -> a > RightAssoc -> b > None -> LR'MustFail > res a@(LR'Reduce r p) b@(LR'Reduce r' p') > = case (p,p') of > (PrioLowest,PrioLowest) -> > LR'Multiple [a] b -- give to earlier rule? > (_,PrioLowest) -> a > (PrioLowest,_) -> b > (No,_) -> LR'Multiple [a] b -- give to earlier rule? > (_,No) -> LR'Multiple [a] b > (Prio _ i, Prio _ j) > | i < j -> b > | i > j -> a > | r < r' -> LR'Multiple [b] a > | otherwise -> LR'Multiple [a] b > res _ _ = error "conflict in resolve" ----------------------------------------------------------------------------- Count the conflicts > countConflicts :: ActionTable -> (Array Int (Int,Int), (Int,Int)) > countConflicts action > = (conflictArray, foldl' (\(a,b) (c,d) -> let ac = a + c; bd = b + d in ac `seq` bd `seq` (ac,bd)) (0,0) conflictList) > > where > > conflictArray = listArray (Array.bounds action) conflictList > conflictList = map countConflictsState (assocs action) > > countConflictsState (_state, actions) > = foldr countMultiples (0,0) (elems actions) > where > countMultiples (LR'Multiple (_:_) (LR'Shift{})) (sr,rr) > = (sr + 1, rr) > countMultiples (LR'Multiple (_:_) (LR'Reduce{})) (sr,rr) > = (sr, rr + 1) > countMultiples (LR'Multiple _ _) _ > = error "bad conflict representation" > countMultiples _ c = c ----------------------------------------------------------------------------- > findRule :: Grammar -> Int -> Int -> Maybe Name > findRule g rule dot = listToMaybe (drop dot lhs) > where Production _ lhs _ _ = lookupProdNo g rule
(c) 1993-2001 Andy Gill, Simon Marlow ----------------------------------------------------------------------------- > module Lexer ( > Token(..), > TokenId(..), > lexer ) where > import ParseMonad > import Data.Char ( isSpace, isAlphaNum, isDigit, digitToInt ) > data Token > = TokenInfo String TokenId > | TokenNum Int TokenId > | TokenKW TokenId > | TokenEOF > tokenToId :: Token -> TokenId > tokenToId (TokenInfo _ i) = i > tokenToId (TokenNum _ i) = i > tokenToId (TokenKW i) = i > tokenToId TokenEOF = error "tokenToId TokenEOF" > instance Eq Token where > i == i' = tokenToId i == tokenToId i' > instance Ord Token where > i <= i' = tokenToId i <= tokenToId i' > data TokenId > = TokId -- words and symbols > | TokSpecId_TokenType -- %tokentype > | TokSpecId_Token -- %token > | TokSpecId_Name -- %name > | TokSpecId_Partial -- %partial > | TokSpecId_ErrorHandlerType -- %errorhandlertype > | TokSpecId_Lexer -- %lexer > | TokSpecId_ImportedIdentity -- %importedidentity > | TokSpecId_Monad -- %monad > | TokSpecId_Nonassoc -- %nonassoc > | TokSpecId_Left -- %left > | TokSpecId_Right -- %right > | TokSpecId_Prec -- %prec > | TokSpecId_Shift -- %shift > | TokSpecId_Expect -- %expect > | TokSpecId_Error -- %error > | TokSpecId_Attributetype -- %attributetype > | TokSpecId_Attribute -- %attribute > | TokCodeQuote -- stuff inside { .. } > | TokColon -- : > | TokSemiColon -- ; > | TokDoubleColon -- :: > | TokDoublePercent -- %% > | TokBar -- | > | TokNum -- Integer > | TokParenL -- ( > | TokParenR -- ) > | TokComma -- , > deriving (Eq,Ord #ifdef DEBUG > ,Show #endif > ) ToDo: proper text instance here, for use in parser error messages. > lexer :: (Token -> P a) -> P a > lexer cont = mkP lexer' > where lexer' "" = returnToken cont TokenEOF "" > lexer' ('-':'-':r) = lexer' (dropWhile (/= '\n') r) > lexer' ('{':'-':r) = \line -> lexNestedComment line lexer' r line > lexer' (c:rest) = nextLex cont c rest > returnToken :: (t -> P a) -> t -> String -> Int -> ParseResult a > returnToken cont tok = runP (cont tok) > nextLex :: (Token -> P a) -> Char -> String -> Int -> ParseResult a > nextLex cont c = case c of > '\n' -> \rest line -> returnToken lexer cont rest (line+1) > '%' -> lexPercent cont > ':' -> lexColon cont > ';' -> returnToken cont (TokenKW TokSemiColon) > '|' -> returnToken cont (TokenKW TokBar) > '\'' -> lexChar cont > '"'{-"-}-> lexString cont > '{' -> lexCode cont > '(' -> returnToken cont (TokenKW TokParenL) > ')' -> returnToken cont (TokenKW TokParenR) > ',' -> returnToken cont (TokenKW TokComma) > _ > | isSpace c -> runP (lexer cont) > | c >= 'a' && c <= 'z' > || c >= 'A' && c <= 'Z' -> lexId cont c > | isDigit c -> lexNum cont c > _ -> lexError ("lexical error before `" ++ c : "'") Percents come in two forms, in pairs, or followed by a special identifier. 
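Note that the alternatives below match by literal prefix on the text following the '%', so longer directives must be tried before their prefixes: %tokentype before %token, and %errorhandlertype before %error. Commentary example (not compiled):

  lexPercent cont ('%':rest)                  -- "%%"     -> TokenKW TokDoublePercent
  lexPercent cont ('t':'o':'k':'e':'n':rest)  -- "%token" -> TokenKW TokSpecId_Token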
> lexPercent :: (Token -> P a) -> [Char] -> Int -> ParseResult a > lexPercent cont s = case s of > '%':rest -> returnToken cont (TokenKW TokDoublePercent) rest > 't':'o':'k':'e':'n':'t':'y':'p':'e':rest -> > returnToken cont (TokenKW TokSpecId_TokenType) rest > 't':'o':'k':'e':'n':rest -> > returnToken cont (TokenKW TokSpecId_Token) rest > 'n':'a':'m':'e':rest -> > returnToken cont (TokenKW TokSpecId_Name) rest > 'p':'a':'r':'t':'i':'a':'l':rest -> > returnToken cont (TokenKW TokSpecId_Partial) rest > 'i':'m':'p':'o':'r':'t':'e':'d':'i':'d':'e':'n':'t':'i':'t':'y':rest -> > returnToken cont (TokenKW TokSpecId_ImportedIdentity) rest > 'm':'o':'n':'a':'d':rest -> > returnToken cont (TokenKW TokSpecId_Monad) rest > 'l':'e':'x':'e':'r':rest -> > returnToken cont (TokenKW TokSpecId_Lexer) rest > 'n':'o':'n':'a':'s':'s':'o':'c':rest -> > returnToken cont (TokenKW TokSpecId_Nonassoc) rest > 'l':'e':'f':'t':rest -> > returnToken cont (TokenKW TokSpecId_Left) rest > 'r':'i':'g':'h':'t':rest -> > returnToken cont (TokenKW TokSpecId_Right) rest > 'p':'r':'e':'c':rest -> > returnToken cont (TokenKW TokSpecId_Prec) rest > 's':'h':'i':'f':'t':rest -> > returnToken cont (TokenKW TokSpecId_Shift) rest > 'e':'x':'p':'e':'c':'t':rest -> > returnToken cont (TokenKW TokSpecId_Expect) rest > 'e':'r':'r':'o':'r':'h':'a':'n':'d':'l':'e':'r':'t':'y':'p':'e':rest -> > returnToken cont (TokenKW TokSpecId_ErrorHandlerType) rest > 'e':'r':'r':'o':'r':rest -> > returnToken cont (TokenKW TokSpecId_Error) rest > 'a':'t':'t':'r':'i':'b':'u':'t':'e':'t':'y':'p':'e':rest -> > returnToken cont (TokenKW TokSpecId_Attributetype) rest > 'a':'t':'t':'r':'i':'b':'u':'t':'e':rest -> > returnToken cont (TokenKW TokSpecId_Attribute) rest > _ -> lexError ("unrecognised directive: %" ++ > takeWhile (not.isSpace) s) s > lexColon :: (Token -> P a) -> [Char] -> Int -> ParseResult a > lexColon cont (':':rest) = returnToken cont (TokenKW TokDoubleColon) rest > lexColon cont rest = returnToken cont (TokenKW TokColon) rest > lexId :: (Token -> P a) -> Char -> String -> Int -> ParseResult a > lexId cont c rest = > readId rest (\ ident rest' -> returnToken cont (TokenInfo (c:ident) TokId) rest') > lexChar :: (Token -> P a) -> String -> Int -> ParseResult a > lexChar cont rest = lexReadChar rest > (\ ident -> returnToken cont (TokenInfo ("'" ++ ident ++ "'") TokId)) > lexString :: (Token -> P a) -> String -> Int -> ParseResult a > lexString cont rest = lexReadString rest > (\ ident -> returnToken cont (TokenInfo ("\"" ++ ident ++ "\"") TokId)) > lexCode :: (Token -> P a) -> String -> Int -> ParseResult a > lexCode cont rest = lexReadCode rest (0 :: Integer) "" cont > lexNum :: (Token -> P a) -> Char -> String -> Int -> ParseResult a > lexNum cont c rest = > readNum rest (\ num rest' -> > returnToken cont (TokenNum (stringToInt (c:num)) TokNum) rest') > where stringToInt = foldl (\n c' -> digitToInt c' + 10*n) 0 > cleanupCode :: String -> String > cleanupCode s = > dropWhile isSpace (reverse (dropWhile isSpace (reverse s))) This has to match for @}@ that are {\em not} in strings. The code here is a bit tricky, but should work in most cases. 
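Illustrative commentary (not compiled): braces inside string and character literals must not affect the nesting count, so after nextLex has consumed the opening brace of { f "}" },

  lexReadCode " f \"}\" } more" 0 "" cont line
    -- delivers TokenInfo "f \"}\"" TokCodeQuote, resuming the lexer at " more"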
> lexReadCode :: (Eq a, Num a) > => String -> a -> String -> (Token -> P b) -> Int > -> ParseResult b > lexReadCode s n c = case s of > '\n':r -> \cont l -> lexReadCode r n ('\n':c) cont (l+1) > > '{' :r -> lexReadCode r (n+1) ('{':c) > > '}' :r > | n == 0 -> \cont -> returnToken cont (TokenInfo ( > cleanupCode (reverse c)) TokCodeQuote) r > | otherwise -> lexReadCode r (n-1) ('}':c) > > '"'{-"-}:r -> lexReadString r (\ str r' -> > lexReadCode r' n ('"' : (reverse str) ++ '"' : c)) > > a: '\'':r | isAlphaNum a -> lexReadCode r n ('\'':a:c) > > '\'' :r -> lexReadSingleChar r (\ str r' -> > lexReadCode r' n ((reverse str) ++ '\'' : c)) > > ch:r -> lexReadCode r n (ch:c) > > [] -> \_cont -> lexError "No closing '}' in code segment" [] ---------------------------------------------------------------------------- Utilities that read the rest of a token. > readId :: String -> (String -> String -> a) -> a > readId (c:r) fn | isIdPart c = readId r (fn . (:) c) > readId r fn = fn [] r > readNum :: String -> (String -> String -> a) -> a > readNum (c:r) fn | isDigit c = readNum r (fn . (:) c) > readNum r fn = fn [] r > isIdPart :: Char -> Bool > isIdPart c = > c >= 'a' && c <= 'z' > || c >= 'A' && c <= 'Z' > || c >= '0' && c <= '9' > || c == '_' > lexReadSingleChar :: String -> (String -> String -> a) -> a > lexReadSingleChar ('\\':c:'\'':r) fn = fn ('\\':c:"'") r > lexReadSingleChar (c:'\'':r) fn = fn (c:"'") r > lexReadSingleChar r fn = fn "" r > lexReadChar :: String -> (String -> String -> a) -> a > lexReadChar ('\'':r) fn = fn "" r > lexReadChar ('\\':'\'':r) fn = lexReadChar r (fn . (:) '\\' . (:) '\'') > lexReadChar ('\\':c:r) fn = lexReadChar r (fn . (:) '\\' . (:) c) > lexReadChar (c:r) fn = lexReadChar r (fn . (:) c) > lexReadChar [] fn = fn "" [] > lexReadString :: String -> (String -> String -> a) -> a > lexReadString ('"'{-"-}:r) fn = fn "" r > lexReadString ('\\':'"':r) fn = lexReadString r (fn . (:) '\\' . (:) '"') > lexReadString ('\\':c:r) fn = lexReadString r (fn . (:) '\\' . (:) c) > lexReadString (c:r) fn = lexReadString r (fn . (:) c) > lexReadString [] fn = fn "" [] > lexError :: String -> String -> Int -> ParseResult a > lexError err = runP (lineP >>= \l -> failP (show l ++ ": " ++ err ++ "\n")) > lexNestedComment :: Int -> ([Char] -> Int -> ParseResult a) -> [Char] -> Int > -> ParseResult a > lexNestedComment l cont r = > case r of > '-':'}':r' -> cont r' > '{':'-':r' -> \line -> lexNestedComment line > (\r'' -> lexNestedComment l cont r'') r' line > '\n':r' -> \line -> lexNestedComment l cont r' (line+1) > _:r' -> lexNestedComment l cont r' > "" -> \_ -> lexError "unterminated comment" r l happy-1.20.1.1/src/Main.lhs0000644000000000000000000005312007346545000013430 0ustar0000000000000000----------------------------------------------------------------------------- The main driver. 
(c) 1993-2003 Andy Gill, Simon Marlow GLR amendments (c) University of Durham, Ben Medlock 2001 ----------------------------------------------------------------------------- > module Main (main) where Path settings auto-generated by Cabal: > import Paths_happy > import ParseMonad > import AbsSyn > import Grammar > import PrettyGrammar > import Parser > import First > import LALR > import ProduceCode (produceParser) > import ProduceGLRCode > import Info (genInfoFile) > import Target (Target(..)) > import System.Console.GetOpt > import Control.Monad ( liftM ) > import System.Environment > import System.Exit (exitWith, ExitCode(..)) > import Data.Char > import System.IO > import Data.Array( assocs, elems, (!) ) > import Data.List( nub, isSuffixOf ) > import Data.Version ( showVersion ) #if defined(mingw32_HOST_OS) > import Foreign.Marshal.Array > import Foreign > import Foreign.C #endif > main :: IO () > main = Read and parse the CLI arguments. > getArgs >>= \ args -> > main2 args > main2 :: [String] -> IO () > main2 args = Read and parse the CLI arguments. > case getOpt Permute argInfo (constArgs ++ args) of > (cli,_,[]) | DumpVersion `elem` cli -> > bye copyright > (cli,_,[]) | DumpHelp `elem` cli -> do > prog <- getProgramName > bye (usageInfo (usageHeader prog) argInfo) > (cli,_,_) | OptDebugParser `elem` cli > && OptArrayTarget `notElem` cli -> do > die "Cannot use debugging without -a\n" > (cli,[fl_name],[]) -> > runParserGen cli fl_name > (_,_,errors) -> do > prog <- getProgramName > die (concat errors ++ > usageInfo (usageHeader prog) argInfo) > where > runParserGen cli fl_name = Open the file. > readFile fl_name >>= \ fl -> > possDelit (reverse fl_name) fl >>= \ (file,name) -> Parse, using bootstrapping parser. > case runP ourParser file 1 of { > Left err -> die (fl_name ++ ':' : err); > Right abssyn@(AbsSyn hd _ _ tl) -> Mangle the syntax into something useful. > case {-# SCC "Mangler" #-} (mangler fl_name abssyn) of { > Left s -> die (unlines s ++ "\n"); > Right g -> #ifdef DEBUG > optPrint cli DumpMangle (putStr (show g)) >> #endif > let first = {-# SCC "First" #-} (mkFirst g) > closures = {-# SCC "Closures" #-} (precalcClosure0 g) > sets = {-# SCC "LR0_Sets" #-} (genLR0items g closures) > _lainfo@(spont,prop) = {-# SCC "Prop" #-} (propLookaheads g sets first) > la = {-# SCC "Calc" #-} (calcLookaheads (length sets) spont prop) > items2 = {-# SCC "Merge" #-} (mergeLookaheadInfo la sets) > goto = {-# SCC "Goto" #-} (genGotoTable g sets) > action = {-# SCC "Action" #-} (genActionTable g first items2) > (conflictArray,(sr,rr)) = {-# SCC "Conflict" #-} (countConflicts action) > in #ifdef DEBUG > optPrint cli DumpLR0 (putStr (show sets)) >> > optPrint cli DumpAction (putStr (show action)) >> > optPrint cli DumpGoto (putStr (show goto)) >> > optPrint cli DumpLA (putStr (show _lainfo)) >> > optPrint cli DumpLA (putStr (show la)) >> #endif Report any unused rules and terminals > let reduction_filter | OptGLR `elem` cli = any_reduction > | otherwise = first_reduction > (unused_rules, unused_terminals) > = find_redundancies reduction_filter g action > in > optIO (not (null unused_rules)) > (hPutStrLn stderr ("unused rules: " ++ show (length unused_rules))) >> > optIO (not (null unused_terminals)) > (hPutStrLn stderr ("unused terminals: " ++ show (length unused_terminals))) >> Print out the info file. 
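(Illustrative usage: @happy -i Expr.y@ writes this report to @Expr.info@; because the argument is optional it must be attached, as in @-iReport.txt@ or @--info=Report.txt@, to choose another name.)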
> getInfoFileName name cli >>= \info_filename -> > let info = genInfoFile > (map fst sets) > g > action > goto > (token_specs g) > conflictArray > fl_name > unused_rules > unused_terminals > in > (case info_filename of > Just s -> writeFile s info >> > hPutStrLn stderr ("Grammar info written to: " ++ s) > Nothing -> return ()) >> Pretty print the grammar. > getPrettyFileName name cli >>= \pretty_filename -> > (let out = render (ppAbsSyn abssyn) > in > case pretty_filename of > Just s -> writeFile s out >> > hPutStrLn stderr ("Production rules written to: " ++ s) > Nothing -> return ()) >> Report any conflicts in the grammar. > (case expect g of > Just n | n == sr && rr == 0 -> return () > Just _ | rr > 0 -> > die ("The grammar has reduce/reduce conflicts.\n" ++ > "This is not allowed when an expect directive is given\n") > Just _ -> > die ("The grammar has " ++ show sr ++ > " shift/reduce conflicts.\n" ++ > "This is different from the number given in the " ++ > "expect directive\n") > _ -> do > (if sr /= 0 > then hPutStrLn stderr ("shift/reduce conflicts: " ++ show sr) > else return ()) > (if rr /= 0 > then hPutStrLn stderr ("reduce/reduce conflicts: " ++ show rr) > else return ()) > ) >> Now, let's get on with generating the parser. Firstly, find out what kind of code we should generate, and where it should go: > getTarget cli >>= \target -> > getOutputFileName fl_name cli >>= \outfilename -> > getTemplate getDataDir cli >>= \template' -> > getCoerce target cli >>= \opt_coerce -> > getStrict cli >>= \opt_strict -> > getGhc cli >>= \opt_ghc -> Add any special options or imports required by the parsing machinery. > let > header = Just ( > (case hd of Just s -> s; Nothing -> "") > ++ importsToInject cli > ) > in %--------------------------------------- Branch off to GLR parser production > let glr_decode | OptGLR_Decode `elem` cli = TreeDecode > | otherwise = LabelDecode > filtering | OptGLR_Filter `elem` cli = UseFiltering > | otherwise = NoFiltering > ghc_exts | OptGhcTarget `elem` cli = UseGhcExts > (importsToInject cli) > (optsToInject target cli) > | otherwise = NoGhcExts > debug = OptDebugParser `elem` cli > in > if OptGLR `elem` cli > then produceGLRParser outfilename -- specified output file name > template' -- template files directory > action -- action table (:: ActionTable) > goto -- goto table (:: GotoTable) > header -- header from grammar spec > tl -- trailer from grammar spec > (debug, (glr_decode,filtering,ghc_exts)) > -- controls decoding code-gen > g -- grammar object > else %--------------------------------------- Resume normal (ie, non-GLR) processing > let > template = template_file template' target cli opt_coerce in Read in the template file for this target: > readFile template >>= \ templ -> and generate the code. > getMagicName cli >>= \ magic_name -> > let > outfile = produceParser > g > action > goto > (optsToInject target cli) > header > tl > target > opt_coerce > opt_ghc > opt_strict > magic_filter = > case magic_name of > Nothing -> id > Just name' -> > let > small_name = name' > big_name = toUpper (head name') : tail name' > filter_output ('h':'a':'p':'p':'y':rest) = > small_name ++ filter_output rest > filter_output ('H':'a':'p':'p':'y':rest) = > big_name ++ filter_output rest > filter_output (c:cs) = c : filter_output cs > filter_output [] = [] > in > filter_output > in > (if outfilename == "-" then putStr else writeFile outfilename) > (magic_filter (outfile ++ templ)) Successfully Finished. 
> }} ----------------------------------------------------------------------------- > getProgramName :: IO String > getProgramName = liftM (`withoutSuffix` ".bin") getProgName > where str' `withoutSuffix` suff > | suff `isSuffixOf` str' = take (length str' - length suff) str' > | otherwise = str' > bye :: String -> IO a > bye s = putStr s >> exitWith ExitSuccess > die :: String -> IO a > die s = hPutStr stderr s >> exitWith (ExitFailure 1) > dieHappy :: String -> IO a > dieHappy s = getProgramName >>= \prog -> die (prog ++ ": " ++ s) > optIO :: Bool -> IO a -> IO a > optIO fg io = if fg then io else return (error "optIO") #ifdef DEBUG > optPrint :: [CLIFlags] -> CLIFlags -> IO () -> IO () > optPrint cli pass io = > optIO (elem pass cli) (putStr "\n---------------------\n" >> io) #endif > constArgs :: [String] > constArgs = [] ----------------------------------------------------------------------------- Find unused rules and tokens > find_redundancies > :: (LRAction -> [Int]) -> Grammar -> ActionTable -> ([Int], [String]) > find_redundancies extract_reductions g action_table = > (unused_rules, map (env !) unused_terminals) > where > Grammar { terminals = terms, > token_names = env, > eof_term = eof, > starts = starts', > productions = productions' > } = g > actions = concat (map assocs (elems action_table)) > start_rules = [ 0 .. (length starts' - 1) ] > used_rules = start_rules ++ > nub [ r | (_,a) <- actions, r <- extract_reductions a ] > used_tokens = errorTok : eof : > nub [ t | (t,a) <- actions, is_shift a ] > n_prods = length productions' > unused_terminals = filter (`notElem` used_tokens) terms > unused_rules = filter (`notElem` used_rules ) [0..n_prods-1] > is_shift :: LRAction -> Bool > is_shift (LR'Shift _ _) = True > is_shift (LR'Multiple _ LR'Shift{}) = True > is_shift _ = False --- selects what counts as a reduction when calculating used/unused > any_reduction :: LRAction -> [Int] > any_reduction (LR'Reduce r _) = [r] > any_reduction (LR'Multiple as a) = concatMap any_reduction (a : as) > any_reduction _ = [] > first_reduction :: LRAction -> [Int] > first_reduction (LR'Reduce r _) = [r] > first_reduction (LR'Multiple _ a) = first_reduction a -- eg R/R conflict > first_reduction _ = [] ------------------------------------------------------------------------------ > possDelit :: String -> String -> IO (String,String) > possDelit ('y':'l':'.':nm) fl = return (deLitify fl,reverse nm) > possDelit ('y':'.':nm) fl = return (fl,reverse nm) > possDelit f _ = > dieHappy ("`" ++ reverse f ++ "' does not end in `.y' or `.ly'\n") > deLitify :: String -> String > deLitify = deLit > where > deLit ('>':' ':r) = deLit1 r > deLit ('>':'\t':r) = '\t' : deLit1 r > deLit ('>':'\n':r) = deLit r > deLit ('>':_) = error "Error when de-litify-ing" > deLit ('\n':r) = '\n' : deLit r > deLit r = deLit2 r > deLit1 ('\n':r) = '\n' : deLit r > deLit1 (c:r) = c : deLit1 r > deLit1 [] = [] > deLit2 ('\n':r) = '\n' : deLit r > deLit2 (_:r) = deLit2 r > deLit2 [] = [] ------------------------------------------------------------------------------ The command line arguments. 
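(For orientation, a typical invocation: @happy -agc -o Parser.hs Parser.y@ selects the array-based back-end with GHC extensions and type coercions and writes the generated parser to @Parser.hs@; the flags below define exactly this surface.)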
> data CLIFlags = #ifdef DEBUG > DumpMangle > | DumpLR0 > | DumpAction > | DumpGoto > | DumpLA > > | #endif > DumpVersion > | DumpHelp > | OptInfoFile (Maybe String) > | OptPrettyFile (Maybe String) > | OptTemplate String > | OptMagicName String > > | OptGhcTarget > | OptArrayTarget > | OptUseCoercions > | OptDebugParser > | OptStrict > | OptOutputFile String > | OptGLR > | OptGLR_Decode > | OptGLR_Filter > deriving Eq > argInfo :: [OptDescr CLIFlags] > argInfo = [ > Option ['o'] ["outfile"] (ReqArg OptOutputFile "FILE") > "write the output to FILE (default: file.hs)", > Option ['i'] ["info"] (OptArg OptInfoFile "FILE") > "put detailed grammar info in FILE", > Option ['p'] ["pretty"] (OptArg OptPrettyFile "FILE") > "pretty print the production rules to FILE", > Option ['t'] ["template"] (ReqArg OptTemplate "DIR") > "look in DIR for template files", > Option ['m'] ["magic-name"] (ReqArg OptMagicName "NAME") > "use NAME as the symbol prefix instead of \"happy\"", > Option ['s'] ["strict"] (NoArg OptStrict) > "evaluate semantic values strictly (experimental)", > Option ['g'] ["ghc"] (NoArg OptGhcTarget) > "use GHC extensions", > Option ['c'] ["coerce"] (NoArg OptUseCoercions) > "use type coercions (only available with -g)", > Option ['a'] ["array"] (NoArg OptArrayTarget) > "generate an array-based parser", > Option ['d'] ["debug"] (NoArg OptDebugParser) > "produce a debugging parser (only with -a)", > Option ['l'] ["glr"] (NoArg OptGLR) > "Generate a GLR parser for ambiguous grammars", > Option ['k'] ["decode"] (NoArg OptGLR_Decode) > "Generate simple decoding code for GLR result", > Option ['f'] ["filter"] (NoArg OptGLR_Filter) > "Filter the GLR parse forest with respect to semantic usage", > Option ['?'] ["help"] (NoArg DumpHelp) > "display this help and exit", > Option ['V','v'] ["version"] (NoArg DumpVersion) -- ToDo: -v is deprecated > "output version information and exit" #ifdef DEBUG Various debugging/dumping options... > , > Option [] ["mangle"] (NoArg DumpMangle) > "Dump mangled input", > Option [] ["lr0"] (NoArg DumpLR0) > "Dump LR0 item sets", > Option [] ["action"] (NoArg DumpAction) > "Dump action table", > Option [] ["goto"] (NoArg DumpGoto) > "Dump goto table", > Option [] ["lookaheads"] (NoArg DumpLA) > "Dump lookahead info" #endif > ] ----------------------------------------------------------------------------- How would we like our code to be generated? > optToTarget :: CLIFlags -> Maybe Target > optToTarget OptArrayTarget = Just TargetArrayBased > optToTarget _ = Nothing > template_file :: String -> Target -> [CLIFlags] -> Bool -> String > template_file temp_dir target cli _coerce > = temp_dir ++ "/HappyTemplate" ++ array_extn ++ ghc_extn ++ debug_extn > where > ghc_extn | OptUseCoercions `elem` cli = "-coerce" > | OptGhcTarget `elem` cli = "-ghc" > | otherwise = "" > > array_extn | target == TargetArrayBased = "-arrays" > | otherwise = "" > > debug_extn | OptDebugParser `elem` cli = "-debug" > | otherwise = "" Note: we need -cpp at the moment because the template has some GHC version-dependent stuff in it. 
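For example (commentary): with -g the string returned below is spliced into an OPTIONS pragma of the generated module, which is why the bootstrapped Parser.hs shipped in this package begins with

  {-# OPTIONS -XMagicHash -XBangPatterns -XTypeSynonymInstances -XFlexibleInstances -cpp #-}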
> optsToInject :: Target -> [CLIFlags] -> String > optsToInject tgt cli > | OptGhcTarget `elem` cli = "-XMagicHash -XBangPatterns -XTypeSynonymInstances -XFlexibleInstances -cpp" > | tgt == TargetArrayBased = "-cpp" > | OptDebugParser `elem` cli = "-cpp" > | otherwise = "" > importsToInject :: [CLIFlags] -> String > importsToInject cli = > concat ["\n", import_array, import_bits, > glaexts_import, debug_imports, applicative_imports] > where > glaexts_import | is_ghc = import_glaexts > | otherwise = "" > > debug_imports | is_debug = import_debug > | otherwise = "" > > applicative_imports = import_applicative > > is_ghc = OptGhcTarget `elem` cli > is_debug = OptDebugParser `elem` cli CPP is turned on for -fglasgow-exts, so we can use conditional compilation: > import_glaexts :: String > import_glaexts = "import qualified GHC.Exts as Happy_GHC_Exts\n" > import_array :: String > import_array = "import qualified Data.Array as Happy_Data_Array\n" > import_bits :: String > import_bits = "import qualified Data.Bits as Bits\n" > import_debug :: String > import_debug = > "import qualified System.IO as Happy_System_IO\n" ++ > "import qualified System.IO.Unsafe as Happy_System_IO_Unsafe\n" ++ > "import qualified Debug.Trace as Happy_Debug_Trace\n" > import_applicative :: String > import_applicative = "import Control.Applicative(Applicative(..))\n" ++ > "import Control.Monad (ap)\n" ------------------------------------------------------------------------------ Extract various command-line options. > getTarget :: [CLIFlags] -> IO Target > getTarget cli = case [ t | (Just t) <- map optToTarget cli ] of > (t:ts) | all (==t) ts -> return t > [] -> return TargetHaskell > _ -> dieHappy "multiple target options\n" > getOutputFileName :: String -> [CLIFlags] -> IO String > getOutputFileName ip_file cli > = case [ s | (OptOutputFile s) <- cli ] of > [] -> return (base ++ ".hs") > where (base, _ext) = break (== '.') ip_file > f:fs -> return (last (f:fs)) > getInfoFileName :: String -> [CLIFlags] -> IO (Maybe String) > getInfoFileName base cli > = case [ s | (OptInfoFile s) <- cli ] of > [] -> return Nothing > [f] -> case f of > Nothing -> return (Just (base ++ ".info")) > Just j -> return (Just j) > _many -> dieHappy "multiple --info/-i options\n" > getPrettyFileName :: String -> [CLIFlags] -> IO (Maybe String) > getPrettyFileName base cli > = case [ s | (OptPrettyFile s) <- cli ] of > [] -> return Nothing > [f] -> case f of > Nothing -> return (Just (base ++ ".grammar")) > Just j -> return (Just j) > _many -> dieHappy "multiple --pretty/-p options\n" > getTemplate :: IO String -> [CLIFlags] -> IO String > getTemplate def cli > = case [ s | (OptTemplate s) <- cli ] of > [] -> def > f:fs -> return (last (f:fs)) > getMagicName :: [CLIFlags] -> IO (Maybe String) > getMagicName cli > = case [ s | (OptMagicName s) <- cli ] of > [] -> return Nothing > f:fs -> return (Just (map toLower (last (f:fs)))) > getCoerce :: Target -> [CLIFlags] -> IO Bool > getCoerce _target cli > = if OptUseCoercions `elem` cli > then if OptGhcTarget `elem` cli > then return True > else dieHappy ("-c/--coerce may only be used " ++ > "in conjunction with -g/--ghc\n") > else return False > getGhc :: [CLIFlags] -> IO Bool > getGhc cli = return (OptGhcTarget `elem` cli) > getStrict :: [CLIFlags] -> IO Bool > getStrict cli = return (OptStrict `elem` cli) ------------------------------------------------------------------------------ > copyright :: String > copyright = unlines [ > "Happy Version " ++ showVersion version ++ " Copyright (c) 1993-1996 
Andy Gill, Simon Marlow (c) 1997-2005 Simon Marlow","", > "Happy is a Yacc for Haskell, and comes with ABSOLUTELY NO WARRANTY.", > "This program is free software; you can redistribute it and/or modify", > "it under the terms given in the file 'LICENSE' distributed with", > "the Happy sources."] > usageHeader :: String -> String > usageHeader prog = "Usage: " ++ prog ++ " [OPTION...] file\n" ----------------------------------------------------------------------------- happy-1.20.1.1/src/NameSet.hs0000644000000000000000000000015007346545000013717 0ustar0000000000000000module NameSet ( NameSet, module Data.IntSet ) where import Data.IntSet type NameSet = IntSet happy-1.20.1.1/src/ParamRules.hs0000644000000000000000000000760607346545000014453 0ustar0000000000000000module ParamRules(expand_rules, Prod1(..), Rule1(..)) where import AbsSyn import Control.Monad.Writer import Control.Monad.Except import Data.List(partition,intersperse) import qualified Data.Set as S import qualified Data.Map as M -- XXX: Make it work with old GHC. -- | Desugar parameterized productions into non-parameterized ones -- -- This transformation is fairly straightforward: we walk through every rule -- and collect every possible instantiation of parameterized productions. Then, -- we generate a new non-parametrized rule for each of these. expand_rules :: [Rule] -> Either String [Rule1] expand_rules rs = do let (funs,rs1) = split_rules rs (as,is) <- runM2 (mapM (`inst_rule` []) rs1) bs <- make_insts funs (S.toList is) S.empty return (as++bs) type RuleName = String data Inst = Inst RuleName [RuleName] deriving (Eq, Ord) newtype Funs = Funs (M.Map RuleName Rule) -- | Similar to 'Rule', but `Term`'s have been flattened into `RuleName`'s data Rule1 = Rule1 RuleName [Prod1] (Maybe (String, Subst)) -- | Similar to 'Prod', but `Term`'s have been flattened into `RuleName`'s data Prod1 = Prod1 [RuleName] String Int Prec inst_name :: Inst -> RuleName inst_name (Inst f []) = f --inst_name (Inst f xs) = f ++ "(" ++ concat (intersperse "," xs) ++ ")" inst_name (Inst f xs) = f ++ "__" ++ concat (intersperse "__" xs) ++ "__" -- | A renaming substitution used when we instantiate a parameterized rule. type Subst = [(RuleName,RuleName)] type M1 = Writer (S.Set Inst) type M2 = ExceptT String M1 -- | Collects the instances arising from a term. from_term :: Subst -> Term -> M1 RuleName from_term s (App f []) = return $ case lookup f s of Just g -> g Nothing -> f from_term s (App f ts) = do xs <- from_terms s ts let i = Inst f xs tell (S.singleton i) return $ inst_name i -- | Collects the instances arising from a list of terms. 
from_terms :: Subst -> [Term] -> M1 [RuleName] from_terms s ts = mapM (from_term s) ts -- XXX: perhaps change the line to the line of the instance inst_prod :: Subst -> Prod -> M1 Prod1 inst_prod s (Prod ts c l p) = do xs <- from_terms s ts return (Prod1 xs c l p) inst_rule :: Rule -> [RuleName] -> M2 Rule1 inst_rule (Rule x xs ps t) ts = do s <- build xs ts [] ps1 <- lift $ mapM (inst_prod s) ps let y = inst_name (Inst x ts) return (Rule1 y ps1 (fmap (\x' -> (x',s)) t)) where build (x':xs') (t':ts') m = build xs' ts' ((x',t'):m) build [] [] m = return m build xs' [] _ = err ("Need " ++ show (length xs') ++ " more arguments") build _ ts' _ = err (show (length ts') ++ " arguments too many.") err m = throwError ("In " ++ inst_name (Inst x ts) ++ ": " ++ m) make_rule :: Funs -> Inst -> M2 Rule1 make_rule (Funs funs) (Inst f xs) = case M.lookup f funs of Just r -> inst_rule r xs Nothing -> throwError ("Undefined rule: " ++ f) runM2 :: ExceptT e (Writer w) a -> Either e (a, w) runM2 m = case runWriter (runExceptT m) of (Left e,_) -> Left e (Right a,xs) -> Right (a,xs) make_insts :: Funs -> [Inst] -> S.Set Inst -> Either String [Rule1] make_insts _ [] _ = return [] make_insts funs is done = do (as,ws) <- runM2 (mapM (make_rule funs) is) let done1 = S.union (S.fromList is) done let is1 = filter (not . (`S.member` done1)) (S.toList ws) bs <- make_insts funs is1 done1 return (as++bs) split_rules :: [Rule] -> (Funs,[Rule]) split_rules rs = let (xs,ys) = partition has_args rs in (Funs (M.fromList [ (x,r) | r@(Rule x _ _ _) <- xs ]),ys) where has_args (Rule _ args _ _) = not (null args) happy-1.20.1.1/src/ParseMonad.hs0000644000000000000000000000060607346545000014422 0ustar0000000000000000module ParseMonad where import Control.Monad.Reader type ParseResult = Either String type P a = ReaderT (String, Int) ParseResult a failP :: String -> P a failP str = ReaderT (\_ -> Left str) mkP :: (String -> Int -> ParseResult a) -> P a mkP = ReaderT . uncurry runP :: P a -> String -> Int -> ParseResult a runP f s l = runReaderT f (s, l) lineP :: P Int lineP = asks snd happy-1.20.1.1/src/Parser.hs0000644000000000000000000014311407346545000013627 0ustar0000000000000000{-# OPTIONS_GHC -w #-} {-# OPTIONS -XMagicHash -XBangPatterns -XTypeSynonymInstances -XFlexibleInstances -cpp #-} #if __GLASGOW_HASKELL__ >= 710 {-# OPTIONS_GHC -XPartialTypeSignatures #-} #endif {-# OPTIONS_GHC -w #-} module Parser (ourParser,AbsSyn) where import ParseMonad import AbsSyn import Lexer import qualified Data.Array as Happy_Data_Array import qualified Data.Bits as Bits import qualified GHC.Exts as Happy_GHC_Exts import Control.Applicative(Applicative(..)) import Control.Monad (ap) -- parser produced by Happy Version 1.20.0 newtype HappyAbsSyn = HappyAbsSyn HappyAny #if __GLASGOW_HASKELL__ >= 607 type HappyAny = Happy_GHC_Exts.Any #else type HappyAny = forall a . 
a #endif newtype HappyWrap4 = HappyWrap4 (AbsSyn) happyIn4 :: (AbsSyn) -> (HappyAbsSyn ) happyIn4 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap4 x) {-# INLINE happyIn4 #-} happyOut4 :: (HappyAbsSyn ) -> HappyWrap4 happyOut4 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut4 #-} newtype HappyWrap5 = HappyWrap5 ([Rule]) happyIn5 :: ([Rule]) -> (HappyAbsSyn ) happyIn5 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap5 x) {-# INLINE happyIn5 #-} happyOut5 :: (HappyAbsSyn ) -> HappyWrap5 happyOut5 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut5 #-} newtype HappyWrap6 = HappyWrap6 (Rule) happyIn6 :: (Rule) -> (HappyAbsSyn ) happyIn6 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap6 x) {-# INLINE happyIn6 #-} happyOut6 :: (HappyAbsSyn ) -> HappyWrap6 happyOut6 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut6 #-} newtype HappyWrap7 = HappyWrap7 ([String]) happyIn7 :: ([String]) -> (HappyAbsSyn ) happyIn7 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap7 x) {-# INLINE happyIn7 #-} happyOut7 :: (HappyAbsSyn ) -> HappyWrap7 happyOut7 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut7 #-} newtype HappyWrap8 = HappyWrap8 ([String]) happyIn8 :: ([String]) -> (HappyAbsSyn ) happyIn8 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap8 x) {-# INLINE happyIn8 #-} happyOut8 :: (HappyAbsSyn ) -> HappyWrap8 happyOut8 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut8 #-} newtype HappyWrap9 = HappyWrap9 ([Prod]) happyIn9 :: ([Prod]) -> (HappyAbsSyn ) happyIn9 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap9 x) {-# INLINE happyIn9 #-} happyOut9 :: (HappyAbsSyn ) -> HappyWrap9 happyOut9 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut9 #-} newtype HappyWrap10 = HappyWrap10 (Prod) happyIn10 :: (Prod) -> (HappyAbsSyn ) happyIn10 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap10 x) {-# INLINE happyIn10 #-} happyOut10 :: (HappyAbsSyn ) -> HappyWrap10 happyOut10 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut10 #-} newtype HappyWrap11 = HappyWrap11 (Term) happyIn11 :: (Term) -> (HappyAbsSyn ) happyIn11 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap11 x) {-# INLINE happyIn11 #-} happyOut11 :: (HappyAbsSyn ) -> HappyWrap11 happyOut11 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut11 #-} newtype HappyWrap12 = HappyWrap12 ([Term]) happyIn12 :: ([Term]) -> (HappyAbsSyn ) happyIn12 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap12 x) {-# INLINE happyIn12 #-} happyOut12 :: (HappyAbsSyn ) -> HappyWrap12 happyOut12 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut12 #-} newtype HappyWrap13 = HappyWrap13 ([Term]) happyIn13 :: ([Term]) -> (HappyAbsSyn ) happyIn13 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap13 x) {-# INLINE happyIn13 #-} happyOut13 :: (HappyAbsSyn ) -> HappyWrap13 happyOut13 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut13 #-} newtype HappyWrap14 = HappyWrap14 ([Term]) happyIn14 :: ([Term]) -> (HappyAbsSyn ) happyIn14 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap14 x) {-# INLINE happyIn14 #-} happyOut14 :: (HappyAbsSyn ) -> HappyWrap14 happyOut14 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut14 #-} newtype HappyWrap15 = HappyWrap15 (Prec) happyIn15 :: (Prec) -> (HappyAbsSyn ) happyIn15 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap15 x) {-# INLINE happyIn15 #-} happyOut15 :: (HappyAbsSyn ) -> HappyWrap15 happyOut15 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut15 #-} newtype HappyWrap16 = HappyWrap16 ([Directive String]) happyIn16 :: ([Directive String]) -> (HappyAbsSyn ) happyIn16 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap16 x) {-# INLINE happyIn16 #-} happyOut16 :: (HappyAbsSyn 
) -> HappyWrap16 happyOut16 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut16 #-} newtype HappyWrap17 = HappyWrap17 (Directive String) happyIn17 :: (Directive String) -> (HappyAbsSyn ) happyIn17 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap17 x) {-# INLINE happyIn17 #-} happyOut17 :: (HappyAbsSyn ) -> HappyWrap17 happyOut17 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut17 #-} newtype HappyWrap18 = HappyWrap18 (Maybe String) happyIn18 :: (Maybe String) -> (HappyAbsSyn ) happyIn18 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap18 x) {-# INLINE happyIn18 #-} happyOut18 :: (HappyAbsSyn ) -> HappyWrap18 happyOut18 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut18 #-} newtype HappyWrap19 = HappyWrap19 ([(String,String)]) happyIn19 :: ([(String,String)]) -> (HappyAbsSyn ) happyIn19 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap19 x) {-# INLINE happyIn19 #-} happyOut19 :: (HappyAbsSyn ) -> HappyWrap19 happyOut19 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut19 #-} newtype HappyWrap20 = HappyWrap20 ((String,String)) happyIn20 :: ((String,String)) -> (HappyAbsSyn ) happyIn20 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap20 x) {-# INLINE happyIn20 #-} happyOut20 :: (HappyAbsSyn ) -> HappyWrap20 happyOut20 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut20 #-} newtype HappyWrap21 = HappyWrap21 ([String]) happyIn21 :: ([String]) -> (HappyAbsSyn ) happyIn21 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap21 x) {-# INLINE happyIn21 #-} happyOut21 :: (HappyAbsSyn ) -> HappyWrap21 happyOut21 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut21 #-} newtype HappyWrap22 = HappyWrap22 (Maybe String) happyIn22 :: (Maybe String) -> (HappyAbsSyn ) happyIn22 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap22 x) {-# INLINE happyIn22 #-} happyOut22 :: (HappyAbsSyn ) -> HappyWrap22 happyOut22 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut22 #-} happyInTok :: (Token) -> (HappyAbsSyn ) happyInTok x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyInTok #-} happyOutTok :: (HappyAbsSyn ) -> (Token) happyOutTok x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOutTok #-} happyExpList :: HappyAddr happyExpList = HappyA# 
"\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x08\x00\x00\x00\xe0\x7f\x3e\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xc0\xff\x7c\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x40\x00\x00\x00\x80\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x02\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\x00\x00\x00\x00\x50\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xc0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x60\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"# {-# NOINLINE happyExpListPerState #-} happyExpListPerState st = token_strs_expected where token_strs = ["error","%dummy","%start_ourParser","parser","rules","rule","params","comma_ids","prods","prod","term","terms","terms_rev","comma_terms","prec","tokInfos","tokInfo","optStart","tokenSpecs","tokenSpec","ids","optCode","id","spec_tokentype","spec_token","spec_name","spec_partial","spec_lexer","spec_imported_identity","spec_monad","spec_nonassoc","spec_left","spec_right","spec_prec","spec_shift","spec_expect","spec_error","spec_errorhandlertype","spec_attribute","spec_attributetype","code","int","\":\"","\";\"","\"::\"","\"%%\"","\"|\"","\"(\"","\")\"","\",\"","%eof"] bit_start = st Prelude.* 51 bit_end = (st Prelude.+ 1) Prelude.* 51 read_bit = readArrayBit happyExpList bits = Prelude.map read_bit [bit_start..bit_end Prelude.- 1] bits_indexed = Prelude.zip bits [0..50] token_strs_expected = Prelude.concatMap f bits_indexed f (Prelude.False, _) = [] f (Prelude.True, nr) = [token_strs Prelude.!! 
nr] happyActOffsets :: HappyAddr happyActOffsets = HappyA# "\x03\x00\x03\x00\x23\x00\x00\x00\x29\x00\xff\xff\x00\x00\x3e\x00\x4f\x00\x51\x00\x52\x00\x41\x00\x00\x00\x42\x00\x55\x00\x55\x00\x55\x00\x43\x00\x45\x00\x58\x00\x59\x00\x48\x00\x00\x00\x4a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x5b\x00\x00\x00\x00\x00\x4b\x00\x4c\x00\x5f\x00\x5f\x00\x00\x00\x60\x00\x50\x00\x00\x00\x00\x00\x61\x00\x0b\x00\x00\x00\x4d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x53\x00\x00\x00\x00\x00\x56\x00\x30\x00\x63\x00\x00\x00\x00\x00\xf9\xff\x00\x00\x64\x00\x57\x00\x00\x00\x0a\x00\x00\x00\x54\x00\x00\x00\x15\x00\x67\x00\x5a\x00\x00\x00\x6a\x00\x00\x00\x6b\x00\x00\x00\x5c\x00\x6d\x00\x00\x00\x6f\x00\x5d\x00\x70\x00\x00\x00\x70\x00\x00\x00\x00\x00\x62\x00\x00\x00\x14\x00\x00\x00\x72\x00\x00\x00\x00\x00\x00\x00\x00\x00"# happyGotoOffsets :: HappyAddr happyGotoOffsets = HappyA# "\x12\x00\x65\x00\x3c\x00\x00\x00\x00\x00\x68\x00\x00\x00\x00\x00\x3d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x69\x00\x6c\x00\x6e\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x71\x00\x00\x00\x00\x00\x00\x00\x00\x00\x73\x00\x75\x00\x00\x00\x3f\x00\x00\x00\x00\x00\x00\x00\x49\x00\x11\x00\x00\x00\x76\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x77\x00\x00\x00\x00\x00\x00\x00\x00\x00\x13\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x79\x00\x7e\x00\x00\x00\x00\x00\x00\x00\x00\x00\x16\x00\x00\x00\x00\x00\x00\x00\x00\x00\x31\x00\x00\x00\x36\x00\x00\x00\x3b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x7f\x00\x00\x00\x00\x00\x00\x00\x00\x00"# happyAdjustOffset :: Happy_GHC_Exts.Int# -> Happy_GHC_Exts.Int# happyAdjustOffset off = off happyDefActions :: HappyAddr happyDefActions = HappyA# "\xc9\xff\x00\x00\x00\x00\xca\xff\x00\x00\x00\x00\xe4\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xdf\xff\x00\x00\xcb\xff\xcb\xff\xcb\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xd3\xff\x00\x00\xd4\xff\xd5\xff\xd6\xff\xd8\xff\xcb\xff\xd7\xff\xd9\xff\xdd\xff\x00\x00\xd0\xff\xd0\xff\xe2\xff\xce\xff\x00\x00\xe3\xff\xe5\xff\x00\x00\xc9\xff\xfc\xff\xf7\xff\xcd\xff\xcf\xff\xe1\xff\xd1\xff\xe0\xff\xde\xff\xdc\xff\xcc\xff\xd2\xff\xdb\xff\x00\x00\x00\x00\xfd\xff\xfe\xff\x00\x00\xf6\xff\xed\xff\x00\x00\xda\xff\x00\x00\xf9\xff\xf3\xff\xec\xff\xe6\xff\xee\xff\xf0\xff\xf8\xff\x00\x00\xf5\xff\x00\x00\xeb\xff\x00\x00\x00\x00\xe7\xff\xed\xff\x00\x00\xed\xff\xfb\xff\xed\xff\xf4\xff\xe8\xff\xf1\xff\xea\xff\x00\x00\xef\xff\x00\x00\xf2\xff\xfa\xff\xe9\xff"# happyCheck :: HappyAddr happyCheck = HappyA# 
"\xff\xff\x02\x00\x03\x00\x04\x00\x05\x00\x06\x00\x07\x00\x08\x00\x09\x00\x0a\x00\x0b\x00\x01\x00\x01\x00\x0e\x00\x0f\x00\x10\x00\x11\x00\x12\x00\x00\x00\x02\x00\x1b\x00\x1c\x00\x13\x00\x18\x00\x05\x00\x06\x00\x07\x00\x08\x00\x09\x00\x07\x00\x13\x00\x15\x00\x0a\x00\x0c\x00\x0d\x00\x12\x00\x12\x00\x02\x00\x03\x00\x04\x00\x05\x00\x06\x00\x07\x00\x08\x00\x09\x00\x0a\x00\x0b\x00\x1b\x00\x1c\x00\x0e\x00\x0f\x00\x10\x00\x11\x00\x12\x00\x05\x00\x06\x00\x07\x00\x08\x00\x09\x00\x05\x00\x06\x00\x07\x00\x08\x00\x09\x00\x05\x00\x06\x00\x07\x00\x08\x00\x09\x00\x15\x00\x1d\x00\x17\x00\x0c\x00\x0d\x00\x01\x00\x02\x00\x0f\x00\x10\x00\x0f\x00\x10\x00\x01\x00\x13\x00\x01\x00\x01\x00\x13\x00\x13\x00\x01\x00\x14\x00\x13\x00\x01\x00\x01\x00\x13\x00\x01\x00\x13\x00\x13\x00\x13\x00\x01\x00\x01\x00\x01\x00\x13\x00\x01\x00\x01\x00\x13\x00\x1a\x00\x01\x00\x13\x00\x13\x00\x01\x00\x01\x00\x19\x00\x01\x00\x13\x00\x01\x00\x01\x00\x15\x00\x01\x00\x1a\x00\x0d\x00\xff\xff\x12\x00\x16\x00\x03\x00\x11\x00\x04\x00\xff\xff\x11\x00\xff\xff\x11\x00\xff\xff\x0e\x00\x11\x00\x0e\x00\x0b\x00\x07\x00\x07\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff"# happyTable :: HappyAddr happyTable = HappyA# "\x00\x00\x08\x00\x09\x00\x0a\x00\x0b\x00\x0c\x00\x0d\x00\x0e\x00\x0f\x00\x10\x00\x11\x00\x50\x00\x2c\x00\x12\x00\x13\x00\x14\x00\x15\x00\x16\x00\x04\x00\x38\x00\x47\x00\x48\x00\x04\x00\x29\x00\x40\x00\x41\x00\x42\x00\x43\x00\x44\x00\x56\x00\x04\x00\x51\x00\x57\x00\x4d\x00\x4e\x00\x39\x00\x02\x00\x08\x00\x09\x00\x0a\x00\x0b\x00\x0c\x00\x0d\x00\x0e\x00\x0f\x00\x10\x00\x11\x00\x59\x00\x5a\x00\x12\x00\x13\x00\x14\x00\x15\x00\x16\x00\x53\x00\x41\x00\x42\x00\x43\x00\x44\x00\x51\x00\x41\x00\x42\x00\x43\x00\x44\x00\x5b\x00\x41\x00\x42\x00\x43\x00\x44\x00\x3d\x00\xff\xff\x3e\x00\x05\x00\x06\x00\x29\x00\x2a\x00\x23\x00\x24\x00\x2d\x00\x24\x00\x26\x00\x27\x00\x23\x00\x22\x00\x21\x00\x20\x00\x1d\x00\x1b\x00\x1a\x00\x19\x00\x18\x00\x17\x00\x1d\x00\x35\x00\x33\x00\x32\x00\x30\x00\x26\x00\x2c\x00\x2d\x00\x3c\x00\x46\x00\x36\x00\x38\x00\x46\x00\x3f\x00\x40\x00\x49\x00\x46\x00\x4f\x00\x55\x00\x56\x00\x46\x00\x46\x00\x53\x00\x46\x00\x4a\x00\x27\x00\x00\x00\x02\x00\x5b\x00\x36\x00\x1e\x00\x3a\x00\x00\x00\x1d\x00\x00\x00\x1b\x00\x00\x00\x30\x00\x33\x00\x2e\x00\x4b\x00\x4a\x00\x5c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"# happyReduceArr = Happy_Data_Array.array (1, 54) [ (1 , happyReduce_1), (2 , happyReduce_2), (3 , happyReduce_3), (4 , happyReduce_4), (5 , happyReduce_5), (6 , happyReduce_6), (7 , happyReduce_7), (8 , happyReduce_8), (9 , happyReduce_9), (10 , happyReduce_10), (11 , happyReduce_11), (12 , happyReduce_12), (13 , happyReduce_13), (14 , happyReduce_14), (15 , happyReduce_15), (16 , happyReduce_16), (17 , happyReduce_17), (18 , happyReduce_18), (19 , happyReduce_19), (20 , happyReduce_20), (21 , happyReduce_21), (22 , happyReduce_22), (23 , happyReduce_23), (24 , happyReduce_24), (25 , happyReduce_25), (26 , happyReduce_26), (27 , happyReduce_27), (28 , happyReduce_28), (29 , happyReduce_29), (30 , happyReduce_30), (31 , happyReduce_31), (32 , happyReduce_32), (33 , happyReduce_33), (34 , happyReduce_34), (35 , happyReduce_35), (36 , happyReduce_36), (37 , happyReduce_37), (38 , happyReduce_38), (39 , happyReduce_39), (40 , happyReduce_40), (41 , happyReduce_41), (42 , 
happyReduce_42), (43 , happyReduce_43), (44 , happyReduce_44), (45 , happyReduce_45), (46 , happyReduce_46), (47 , happyReduce_47), (48 , happyReduce_48), (49 , happyReduce_49), (50 , happyReduce_50), (51 , happyReduce_51), (52 , happyReduce_52), (53 , happyReduce_53), (54 , happyReduce_54) ] happy_n_terms = 30 :: Prelude.Int happy_n_nonterms = 19 :: Prelude.Int happyReduce_1 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_1 = happyReduce 5# 0# happyReduction_1 happyReduction_1 (happy_x_5 `HappyStk` happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOut22 happy_x_1 of { (HappyWrap22 happy_var_1) -> case happyOut16 happy_x_2 of { (HappyWrap16 happy_var_2) -> case happyOut5 happy_x_4 of { (HappyWrap5 happy_var_4) -> case happyOut22 happy_x_5 of { (HappyWrap22 happy_var_5) -> happyIn4 (AbsSyn happy_var_1 (reverse happy_var_2) (reverse happy_var_4) happy_var_5 ) `HappyStk` happyRest}}}} happyReduce_2 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_2 = happySpecReduce_2 1# happyReduction_2 happyReduction_2 happy_x_2 happy_x_1 = case happyOut5 happy_x_1 of { (HappyWrap5 happy_var_1) -> case happyOut6 happy_x_2 of { (HappyWrap6 happy_var_2) -> happyIn5 (happy_var_2 : happy_var_1 )}} happyReduce_3 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_3 = happySpecReduce_1 1# happyReduction_3 happyReduction_3 happy_x_1 = case happyOut6 happy_x_1 of { (HappyWrap6 happy_var_1) -> happyIn5 ([happy_var_1] )} happyReduce_4 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_4 = happyReduce 6# 2# happyReduction_4 happyReduction_4 (happy_x_6 `HappyStk` happy_x_5 `HappyStk` happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOutTok happy_x_1 of { (TokenInfo happy_var_1 TokId) -> case happyOut7 happy_x_2 of { (HappyWrap7 happy_var_2) -> case happyOutTok happy_x_4 of { (TokenInfo happy_var_4 TokCodeQuote) -> case happyOut9 happy_x_6 of { (HappyWrap9 happy_var_6) -> happyIn6 (Rule happy_var_1 happy_var_2 happy_var_6 (Just happy_var_4) ) `HappyStk` happyRest}}}} happyReduce_5 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_5 = happyReduce 7# 2# happyReduction_5 happyReduction_5 (happy_x_7 `HappyStk` happy_x_6 `HappyStk` happy_x_5 `HappyStk` happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOutTok happy_x_1 of { (TokenInfo happy_var_1 TokId) -> case happyOut7 happy_x_2 of { (HappyWrap7 happy_var_2) -> case happyOutTok happy_x_4 of { (TokenInfo happy_var_4 TokCodeQuote) -> case happyOut9 happy_x_7 of { (HappyWrap9 happy_var_7) -> happyIn6 (Rule happy_var_1 happy_var_2 happy_var_7 (Just happy_var_4) ) `HappyStk` happyRest}}}} happyReduce_6 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_6 = happyReduce 4# 2# happyReduction_6 happyReduction_6 (happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOutTok happy_x_1 of { (TokenInfo happy_var_1 TokId) -> case happyOut7 happy_x_2 of { 
(HappyWrap7 happy_var_2) -> case happyOut9 happy_x_4 of { (HappyWrap9 happy_var_4) -> happyIn6 (Rule happy_var_1 happy_var_2 happy_var_4 Nothing ) `HappyStk` happyRest}}} happyReduce_7 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_7 = happySpecReduce_3 3# happyReduction_7 happyReduction_7 happy_x_3 happy_x_2 happy_x_1 = case happyOut8 happy_x_2 of { (HappyWrap8 happy_var_2) -> happyIn7 (reverse happy_var_2 )} happyReduce_8 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_8 = happySpecReduce_0 3# happyReduction_8 happyReduction_8 = happyIn7 ([] ) happyReduce_9 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_9 = happySpecReduce_1 4# happyReduction_9 happyReduction_9 happy_x_1 = case happyOutTok happy_x_1 of { (TokenInfo happy_var_1 TokId) -> happyIn8 ([happy_var_1] )} happyReduce_10 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_10 = happySpecReduce_3 4# happyReduction_10 happyReduction_10 happy_x_3 happy_x_2 happy_x_1 = case happyOut8 happy_x_1 of { (HappyWrap8 happy_var_1) -> case happyOutTok happy_x_3 of { (TokenInfo happy_var_3 TokId) -> happyIn8 (happy_var_3 : happy_var_1 )}} happyReduce_11 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_11 = happySpecReduce_3 5# happyReduction_11 happyReduction_11 happy_x_3 happy_x_2 happy_x_1 = case happyOut10 happy_x_1 of { (HappyWrap10 happy_var_1) -> case happyOut9 happy_x_3 of { (HappyWrap9 happy_var_3) -> happyIn9 (happy_var_1 : happy_var_3 )}} happyReduce_12 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_12 = happySpecReduce_1 5# happyReduction_12 happyReduction_12 happy_x_1 = case happyOut10 happy_x_1 of { (HappyWrap10 happy_var_1) -> happyIn9 ([happy_var_1] )} happyReduce_13 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_13 = happyMonadReduce 4# 6# happyReduction_13 happyReduction_13 (happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) tk = happyThen ((case happyOut12 happy_x_1 of { (HappyWrap12 happy_var_1) -> case happyOut15 happy_x_2 of { (HappyWrap15 happy_var_2) -> case happyOutTok happy_x_3 of { (TokenInfo happy_var_3 TokCodeQuote) -> ( lineP >>= \l -> return (Prod happy_var_1 happy_var_3 l happy_var_2))}}}) ) (\r -> happyReturn (happyIn10 r)) happyReduce_14 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_14 = happyMonadReduce 3# 6# happyReduction_14 happyReduction_14 (happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) tk = happyThen ((case happyOut12 happy_x_1 of { (HappyWrap12 happy_var_1) -> case happyOut15 happy_x_2 of { (HappyWrap15 happy_var_2) -> case happyOutTok happy_x_3 of { (TokenInfo happy_var_3 TokCodeQuote) -> ( lineP >>= \l -> return (Prod happy_var_1 happy_var_3 l happy_var_2))}}}) ) (\r -> happyReturn (happyIn10 r)) happyReduce_15 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) 
happyReduce_15 = happySpecReduce_1 7# happyReduction_15 happyReduction_15 happy_x_1 = case happyOutTok happy_x_1 of { (TokenInfo happy_var_1 TokId) -> happyIn11 (App happy_var_1 [] )} happyReduce_16 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_16 = happyReduce 4# 7# happyReduction_16 happyReduction_16 (happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOutTok happy_x_1 of { (TokenInfo happy_var_1 TokId) -> case happyOut14 happy_x_3 of { (HappyWrap14 happy_var_3) -> happyIn11 (App happy_var_1 (reverse happy_var_3) ) `HappyStk` happyRest}} happyReduce_17 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_17 = happySpecReduce_1 8# happyReduction_17 happyReduction_17 happy_x_1 = case happyOut13 happy_x_1 of { (HappyWrap13 happy_var_1) -> happyIn12 (reverse happy_var_1 )} happyReduce_18 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_18 = happySpecReduce_0 8# happyReduction_18 happyReduction_18 = happyIn12 ([] ) happyReduce_19 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_19 = happySpecReduce_1 9# happyReduction_19 happyReduction_19 happy_x_1 = case happyOut11 happy_x_1 of { (HappyWrap11 happy_var_1) -> happyIn13 ([happy_var_1] )} happyReduce_20 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_20 = happySpecReduce_2 9# happyReduction_20 happyReduction_20 happy_x_2 happy_x_1 = case happyOut13 happy_x_1 of { (HappyWrap13 happy_var_1) -> case happyOut11 happy_x_2 of { (HappyWrap11 happy_var_2) -> happyIn13 (happy_var_2 : happy_var_1 )}} happyReduce_21 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_21 = happySpecReduce_1 10# happyReduction_21 happyReduction_21 happy_x_1 = case happyOut11 happy_x_1 of { (HappyWrap11 happy_var_1) -> happyIn14 ([happy_var_1] )} happyReduce_22 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_22 = happySpecReduce_3 10# happyReduction_22 happyReduction_22 happy_x_3 happy_x_2 happy_x_1 = case happyOut14 happy_x_1 of { (HappyWrap14 happy_var_1) -> case happyOut11 happy_x_3 of { (HappyWrap11 happy_var_3) -> happyIn14 (happy_var_3 : happy_var_1 )}} happyReduce_23 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_23 = happySpecReduce_2 11# happyReduction_23 happyReduction_23 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokId) -> happyIn15 (PrecId happy_var_2 )} happyReduce_24 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_24 = happySpecReduce_1 11# happyReduction_24 happyReduction_24 happy_x_1 = happyIn15 (PrecShift ) happyReduce_25 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_25 = happySpecReduce_0 11# happyReduction_25 happyReduction_25 = happyIn15 (PrecNone ) happyReduce_26 :: () => Happy_GHC_Exts.Int# -> Token 
-> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_26 = happySpecReduce_2 12# happyReduction_26 happyReduction_26 happy_x_2 happy_x_1 = case happyOut16 happy_x_1 of { (HappyWrap16 happy_var_1) -> case happyOut17 happy_x_2 of { (HappyWrap17 happy_var_2) -> happyIn16 (happy_var_2 : happy_var_1 )}} happyReduce_27 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_27 = happySpecReduce_1 12# happyReduction_27 happyReduction_27 happy_x_1 = case happyOut17 happy_x_1 of { (HappyWrap17 happy_var_1) -> happyIn16 ([happy_var_1] )} happyReduce_28 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_28 = happySpecReduce_2 13# happyReduction_28 happyReduction_28 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokCodeQuote) -> happyIn17 (TokenType happy_var_2 )} happyReduce_29 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_29 = happySpecReduce_2 13# happyReduction_29 happyReduction_29 happy_x_2 happy_x_1 = case happyOut19 happy_x_2 of { (HappyWrap19 happy_var_2) -> happyIn17 (TokenSpec happy_var_2 )} happyReduce_30 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_30 = happySpecReduce_3 13# happyReduction_30 happyReduction_30 happy_x_3 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokId) -> case happyOut18 happy_x_3 of { (HappyWrap18 happy_var_3) -> happyIn17 (TokenName happy_var_2 happy_var_3 False )}} happyReduce_31 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_31 = happySpecReduce_3 13# happyReduction_31 happyReduction_31 happy_x_3 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokId) -> case happyOut18 happy_x_3 of { (HappyWrap18 happy_var_3) -> happyIn17 (TokenName happy_var_2 happy_var_3 True )}} happyReduce_32 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_32 = happySpecReduce_1 13# happyReduction_32 happyReduction_32 happy_x_1 = happyIn17 (TokenImportedIdentity ) happyReduce_33 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_33 = happySpecReduce_3 13# happyReduction_33 happyReduction_33 happy_x_3 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokCodeQuote) -> case happyOutTok happy_x_3 of { (TokenInfo happy_var_3 TokCodeQuote) -> happyIn17 (TokenLexer happy_var_2 happy_var_3 )}} happyReduce_34 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_34 = happySpecReduce_2 13# happyReduction_34 happyReduction_34 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokCodeQuote) -> happyIn17 (TokenMonad "()" happy_var_2 "Prelude.>>=" "Prelude.return" )} happyReduce_35 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_35 = happySpecReduce_3 13# happyReduction_35 happyReduction_35 happy_x_3 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { 
(TokenInfo happy_var_2 TokCodeQuote) -> case happyOutTok happy_x_3 of { (TokenInfo happy_var_3 TokCodeQuote) -> happyIn17 (TokenMonad happy_var_2 happy_var_3 "Prelude.>>=" "Prelude.return" )}} happyReduce_36 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_36 = happyReduce 4# 13# happyReduction_36 happyReduction_36 (happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokCodeQuote) -> case happyOutTok happy_x_3 of { (TokenInfo happy_var_3 TokCodeQuote) -> case happyOutTok happy_x_4 of { (TokenInfo happy_var_4 TokCodeQuote) -> happyIn17 (TokenMonad "()" happy_var_2 happy_var_3 happy_var_4 ) `HappyStk` happyRest}}} happyReduce_37 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_37 = happyReduce 5# 13# happyReduction_37 happyReduction_37 (happy_x_5 `HappyStk` happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokCodeQuote) -> case happyOutTok happy_x_3 of { (TokenInfo happy_var_3 TokCodeQuote) -> case happyOutTok happy_x_4 of { (TokenInfo happy_var_4 TokCodeQuote) -> case happyOutTok happy_x_5 of { (TokenInfo happy_var_5 TokCodeQuote) -> happyIn17 (TokenMonad happy_var_2 happy_var_3 happy_var_4 happy_var_5 ) `HappyStk` happyRest}}}} happyReduce_38 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_38 = happySpecReduce_2 13# happyReduction_38 happyReduction_38 happy_x_2 happy_x_1 = case happyOut21 happy_x_2 of { (HappyWrap21 happy_var_2) -> happyIn17 (TokenNonassoc happy_var_2 )} happyReduce_39 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_39 = happySpecReduce_2 13# happyReduction_39 happyReduction_39 happy_x_2 happy_x_1 = case happyOut21 happy_x_2 of { (HappyWrap21 happy_var_2) -> happyIn17 (TokenRight happy_var_2 )} happyReduce_40 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_40 = happySpecReduce_2 13# happyReduction_40 happyReduction_40 happy_x_2 happy_x_1 = case happyOut21 happy_x_2 of { (HappyWrap21 happy_var_2) -> happyIn17 (TokenLeft happy_var_2 )} happyReduce_41 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_41 = happySpecReduce_2 13# happyReduction_41 happyReduction_41 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenNum happy_var_2 TokNum) -> happyIn17 (TokenExpect happy_var_2 )} happyReduce_42 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_42 = happySpecReduce_2 13# happyReduction_42 happyReduction_42 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokCodeQuote) -> happyIn17 (TokenError happy_var_2 )} happyReduce_43 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_43 = happySpecReduce_2 13# happyReduction_43 happyReduction_43 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokId) -> happyIn17 (TokenErrorHandlerType 
happy_var_2 )} happyReduce_44 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_44 = happySpecReduce_2 13# happyReduction_44 happyReduction_44 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokCodeQuote) -> happyIn17 (TokenAttributetype happy_var_2 )} happyReduce_45 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_45 = happySpecReduce_3 13# happyReduction_45 happyReduction_45 happy_x_3 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokId) -> case happyOutTok happy_x_3 of { (TokenInfo happy_var_3 TokCodeQuote) -> happyIn17 (TokenAttribute happy_var_2 happy_var_3 )}} happyReduce_46 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_46 = happySpecReduce_1 14# happyReduction_46 happyReduction_46 happy_x_1 = case happyOutTok happy_x_1 of { (TokenInfo happy_var_1 TokId) -> happyIn18 (Just happy_var_1 )} happyReduce_47 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_47 = happySpecReduce_0 14# happyReduction_47 happyReduction_47 = happyIn18 (Nothing ) happyReduce_48 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_48 = happySpecReduce_2 15# happyReduction_48 happyReduction_48 happy_x_2 happy_x_1 = case happyOut20 happy_x_1 of { (HappyWrap20 happy_var_1) -> case happyOut19 happy_x_2 of { (HappyWrap19 happy_var_2) -> happyIn19 (happy_var_1:happy_var_2 )}} happyReduce_49 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_49 = happySpecReduce_1 15# happyReduction_49 happyReduction_49 happy_x_1 = case happyOut20 happy_x_1 of { (HappyWrap20 happy_var_1) -> happyIn19 ([happy_var_1] )} happyReduce_50 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_50 = happySpecReduce_2 16# happyReduction_50 happyReduction_50 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { (TokenInfo happy_var_1 TokId) -> case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokCodeQuote) -> happyIn20 ((happy_var_1,happy_var_2) )}} happyReduce_51 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_51 = happySpecReduce_2 17# happyReduction_51 happyReduction_51 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { (TokenInfo happy_var_1 TokId) -> case happyOut21 happy_x_2 of { (HappyWrap21 happy_var_2) -> happyIn21 (happy_var_1 : happy_var_2 )}} happyReduce_52 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_52 = happySpecReduce_0 17# happyReduction_52 happyReduction_52 = happyIn21 ([] ) happyReduce_53 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_53 = happySpecReduce_1 18# happyReduction_53 happyReduction_53 happy_x_1 = case happyOutTok happy_x_1 of { (TokenInfo happy_var_1 TokCodeQuote) -> happyIn22 (Just happy_var_1 )} happyReduce_54 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList 
-> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_54 = happySpecReduce_0 18# happyReduction_54 happyReduction_54 = happyIn22 (Nothing ) happyNewToken action sts stk = lexer(\tk -> let cont i = happyDoAction i tk action sts stk in case tk of { TokenEOF -> happyDoAction 29# tk action sts stk; TokenInfo happy_dollar_dollar TokId -> cont 1#; TokenKW TokSpecId_TokenType -> cont 2#; TokenKW TokSpecId_Token -> cont 3#; TokenKW TokSpecId_Name -> cont 4#; TokenKW TokSpecId_Partial -> cont 5#; TokenKW TokSpecId_Lexer -> cont 6#; TokenKW TokSpecId_ImportedIdentity -> cont 7#; TokenKW TokSpecId_Monad -> cont 8#; TokenKW TokSpecId_Nonassoc -> cont 9#; TokenKW TokSpecId_Left -> cont 10#; TokenKW TokSpecId_Right -> cont 11#; TokenKW TokSpecId_Prec -> cont 12#; TokenKW TokSpecId_Shift -> cont 13#; TokenKW TokSpecId_Expect -> cont 14#; TokenKW TokSpecId_Error -> cont 15#; TokenKW TokSpecId_ErrorHandlerType -> cont 16#; TokenKW TokSpecId_Attribute -> cont 17#; TokenKW TokSpecId_Attributetype -> cont 18#; TokenInfo happy_dollar_dollar TokCodeQuote -> cont 19#; TokenNum happy_dollar_dollar TokNum -> cont 20#; TokenKW TokColon -> cont 21#; TokenKW TokSemiColon -> cont 22#; TokenKW TokDoubleColon -> cont 23#; TokenKW TokDoublePercent -> cont 24#; TokenKW TokBar -> cont 25#; TokenKW TokParenL -> cont 26#; TokenKW TokParenR -> cont 27#; TokenKW TokComma -> cont 28#; _ -> happyError' (tk, []) }) happyError_ explist 29# tk = happyError' (tk, explist) happyError_ explist _ tk = happyError' (tk, explist) happyThen :: () => P a -> (a -> P b) -> P b happyThen = (Prelude.>>=) happyReturn :: () => a -> P a happyReturn = (Prelude.return) happyParse :: () => Happy_GHC_Exts.Int# -> P (HappyAbsSyn ) happyNewToken :: () => Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyDoAction :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduceArr :: () => Happy_Data_Array.Array Prelude.Int (Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn )) happyThen1 :: () => P a -> (a -> P b) -> P b happyThen1 = happyThen happyReturn1 :: () => a -> P a happyReturn1 = happyReturn happyError' :: () => ((Token), [Prelude.String]) -> P a happyError' tk = (\(tokens, explist) -> happyError) tk ourParser = happySomeParser where happySomeParser = happyThen (happyParse 0#) (\x -> happyReturn (let {(HappyWrap4 x') = happyOut4 x} in x')) happySeq = happyDontSeq happyError :: P a happyError = lineP >>= \l -> failP (show l ++ ": Parse error\n") {-# LINE 1 "templates/GenericTemplate.hs" #-} -- $Id: GenericTemplate.hs,v 1.26 2005/01/14 14:47:22 simonmar Exp $ -- Do not remove this comment. Required to fix CPP parsing when using GCC and a clang-compiled alex. 
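-- A portable sketch (illustrative only; the real lookup below uses unboxed
-- primitives) of how indexShortOffAddr decodes the tables above: each entry
-- is stored as two 8-bit characters, low byte first, and the result is
-- sign-extended to 16 bits. This helper is not used by the generated parser.
happyDecodeEntrySketch :: Prelude.Int -> Prelude.Int -> Prelude.Int
happyDecodeEntrySketch low high =
  let w = (high `Bits.shiftL` 8) Bits..|. low   -- combine the two bytes
  in if Bits.testBit w 15 then w - 65536 else w -- sign-extend bit 15 downwards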
#if __GLASGOW_HASKELL__ > 706 #define LT(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.<# m)) :: Prelude.Bool) #define GTE(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.>=# m)) :: Prelude.Bool) #define EQ(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.==# m)) :: Prelude.Bool) #else #define LT(n,m) (n Happy_GHC_Exts.<# m) #define GTE(n,m) (n Happy_GHC_Exts.>=# m) #define EQ(n,m) (n Happy_GHC_Exts.==# m) #endif data Happy_IntList = HappyCons Happy_GHC_Exts.Int# Happy_IntList infixr 9 `HappyStk` data HappyStk a = HappyStk a (HappyStk a) ----------------------------------------------------------------------------- -- starting the parse happyParse start_state = happyNewToken start_state notHappyAtAll notHappyAtAll ----------------------------------------------------------------------------- -- Accepting the parse -- If the current token is ERROR_TOK, it means we've just accepted a partial -- parse (a %partial parser). We must ignore the saved token on the top of -- the stack in this case. happyAccept 0# tk st sts (_ `HappyStk` ans `HappyStk` _) = happyReturn1 ans happyAccept j tk st sts (HappyStk ans _) = (happyTcHack j (happyTcHack st)) (happyReturn1 ans) ----------------------------------------------------------------------------- -- Arrays only: do the next action happyDoAction i tk st = {- nothing -} case action of 0# -> {- nothing -} happyFail (happyExpListPerState ((Happy_GHC_Exts.I# (st)) :: Prelude.Int)) i tk st -1# -> {- nothing -} happyAccept i tk st n | LT(n,(0# :: Happy_GHC_Exts.Int#)) -> {- nothing -} (happyReduceArr Happy_Data_Array.! rule) i tk st where rule = (Happy_GHC_Exts.I# ((Happy_GHC_Exts.negateInt# ((n Happy_GHC_Exts.+# (1# :: Happy_GHC_Exts.Int#)))))) n -> {- nothing -} happyShift new_state i tk st where new_state = (n Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) where off = happyAdjustOffset (indexShortOffAddr happyActOffsets st) off_i = (off Happy_GHC_Exts.+# i) check = if GTE(off_i,(0# :: Happy_GHC_Exts.Int#)) then EQ(indexShortOffAddr happyCheck off_i, i) else Prelude.False action | check = indexShortOffAddr happyTable off_i | Prelude.otherwise = indexShortOffAddr happyDefActions st indexShortOffAddr (HappyA# arr) off = Happy_GHC_Exts.narrow16Int# i where i = Happy_GHC_Exts.word2Int# (Happy_GHC_Exts.or# (Happy_GHC_Exts.uncheckedShiftL# high 8#) low) high = Happy_GHC_Exts.int2Word# (Happy_GHC_Exts.ord# (Happy_GHC_Exts.indexCharOffAddr# arr (off' Happy_GHC_Exts.+# 1#))) low = Happy_GHC_Exts.int2Word# (Happy_GHC_Exts.ord# (Happy_GHC_Exts.indexCharOffAddr# arr off')) off' = off Happy_GHC_Exts.*# 2# {-# INLINE happyLt #-} happyLt x y = LT(x,y) readArrayBit arr bit = Bits.testBit (Happy_GHC_Exts.I# (indexShortOffAddr arr ((unbox_int bit) `Happy_GHC_Exts.iShiftRA#` 4#))) (bit `Prelude.mod` 16) where unbox_int (Happy_GHC_Exts.I# x) = x data HappyAddr = HappyA# Happy_GHC_Exts.Addr# ----------------------------------------------------------------------------- -- HappyState data type (not arrays) ----------------------------------------------------------------------------- -- Shifting a token happyShift new_state 0# tk st sts stk@(x `HappyStk` _) = let i = (case Happy_GHC_Exts.unsafeCoerce# x of { (Happy_GHC_Exts.I# (i)) -> i }) in -- trace "shifting the error token" $ happyDoAction i tk new_state (HappyCons (st) (sts)) (stk) happyShift new_state i tk st sts stk = happyNewToken new_state (HappyCons (st) (sts)) ((happyInTok (tk))`HappyStk`stk) -- happyReduce is specialised for the common cases. 
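-- (The specialised happySpecReduce_0..3 below pattern-match the top of the
-- value stack directly, avoiding the happyDrop state-stack traversal that
-- the general happyReduce performs for longer right-hand sides.)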
happySpecReduce_0 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_0 nt fn j tk st@((action)) sts stk = happyGoto nt j tk st (HappyCons (st) (sts)) (fn `HappyStk` stk) happySpecReduce_1 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_1 nt fn j tk _ sts@((HappyCons (st@(action)) (_))) (v1`HappyStk`stk') = let r = fn v1 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happySpecReduce_2 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_2 nt fn j tk _ (HappyCons (_) (sts@((HappyCons (st@(action)) (_))))) (v1`HappyStk`v2`HappyStk`stk') = let r = fn v1 v2 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happySpecReduce_3 i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happySpecReduce_3 nt fn j tk _ (HappyCons (_) ((HappyCons (_) (sts@((HappyCons (st@(action)) (_))))))) (v1`HappyStk`v2`HappyStk`v3`HappyStk`stk') = let r = fn v1 v2 v3 in happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk')) happyReduce k i fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happyReduce k nt fn j tk st sts stk = case happyDrop (k Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) sts of sts1@((HappyCons (st1@(action)) (_))) -> let r = fn stk in -- it doesn't hurt to always seq here... happyDoSeq r (happyGoto nt j tk st1 sts1 r) happyMonadReduce k nt fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happyMonadReduce k nt fn j tk st sts stk = case happyDrop k (HappyCons (st) (sts)) of sts1@((HappyCons (st1@(action)) (_))) -> let drop_stk = happyDropStk k stk in happyThen1 (fn stk tk) (\r -> happyGoto nt j tk st1 sts1 (r `HappyStk` drop_stk)) happyMonad2Reduce k nt fn 0# tk st sts stk = happyFail [] 0# tk st sts stk happyMonad2Reduce k nt fn j tk st sts stk = case happyDrop k (HappyCons (st) (sts)) of sts1@((HappyCons (st1@(action)) (_))) -> let drop_stk = happyDropStk k stk off = happyAdjustOffset (indexShortOffAddr happyGotoOffsets st1) off_i = (off Happy_GHC_Exts.+# nt) new_state = indexShortOffAddr happyTable off_i in happyThen1 (fn stk tk) (\r -> happyNewToken new_state sts1 (r `HappyStk` drop_stk)) happyDrop 0# l = l happyDrop n (HappyCons (_) (t)) = happyDrop (n Happy_GHC_Exts.-# (1# :: Happy_GHC_Exts.Int#)) t happyDropStk 0# l = l happyDropStk n (x `HappyStk` xs) = happyDropStk (n Happy_GHC_Exts.-# (1#::Happy_GHC_Exts.Int#)) xs ----------------------------------------------------------------------------- -- Moving to a new state after a reduction happyGoto nt j tk st = {- nothing -} happyDoAction j tk new_state where off = happyAdjustOffset (indexShortOffAddr happyGotoOffsets st) off_i = (off Happy_GHC_Exts.+# nt) new_state = indexShortOffAddr happyTable off_i ----------------------------------------------------------------------------- -- Error recovery (ERROR_TOK is the error token) -- parse error if we are in recovery and we fail again happyFail explist 0# tk old_st _ stk@(x `HappyStk` _) = let i = (case Happy_GHC_Exts.unsafeCoerce# x of { (Happy_GHC_Exts.I# (i)) -> i }) in -- trace "failing" $ happyError_ explist i tk {- We don't need state discarding for our restricted implementation of "error". In fact, it can cause some bogus parses, so I've disabled it for now --SDM -- discard a state happyFail ERROR_TOK tk old_st CONS(HAPPYSTATE(action),sts) (saved_tok `HappyStk` _ `HappyStk` stk) = -- trace ("discarding state, depth " ++ show (length stk)) $ DO_ACTION(action,ERROR_TOK,tk,sts,(saved_tok`HappyStk`stk)) -} -- Enter error recovery: generate an error token, -- save the old token and carry on. 
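-- (Concretely: the failing token's number is pushed onto the value stack and
-- the action for ERROR_TOK (0#) is run in the current state; if the parser
-- fails again while ERROR_TOK is the lookahead, the first happyFail clause
-- above reports the error via happyError_.)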
happyFail explist i tk (action) sts stk = -- trace "entering error recovery" $ happyDoAction 0# tk action sts ((Happy_GHC_Exts.unsafeCoerce# (Happy_GHC_Exts.I# (i))) `HappyStk` stk) -- Internal happy errors: notHappyAtAll :: a notHappyAtAll = Prelude.error "Internal Happy error\n" ----------------------------------------------------------------------------- -- Hack to get the typechecker to accept our action functions happyTcHack :: Happy_GHC_Exts.Int# -> a -> a happyTcHack x y = y {-# INLINE happyTcHack #-} ----------------------------------------------------------------------------- -- Seq-ing. If the --strict flag is given, then Happy emits -- happySeq = happyDoSeq -- otherwise it emits -- happySeq = happyDontSeq happyDoSeq, happyDontSeq :: a -> b -> b happyDoSeq a b = a `Prelude.seq` b happyDontSeq a b = b ----------------------------------------------------------------------------- -- Don't inline any functions from the template. GHC has a nasty habit -- of deciding to inline happyGoto everywhere, which increases the size of -- the generated parser quite a bit. {-# NOINLINE happyDoAction #-} {-# NOINLINE happyTable #-} {-# NOINLINE happyCheck #-} {-# NOINLINE happyActOffsets #-} {-# NOINLINE happyGotoOffsets #-} {-# NOINLINE happyDefActions #-} {-# NOINLINE happyShift #-} {-# NOINLINE happySpecReduce_0 #-} {-# NOINLINE happySpecReduce_1 #-} {-# NOINLINE happySpecReduce_2 #-} {-# NOINLINE happySpecReduce_3 #-} {-# NOINLINE happyReduce #-} {-# NOINLINE happyMonadReduce #-} {-# NOINLINE happyGoto #-} {-# NOINLINE happyFail #-} -- end of Happy Template. happy-1.20.1.1/src/PrettyGrammar.hs0000644000000000000000000000426007346545000015167 0ustar0000000000000000{-# LANGUAGE CPP #-} module PrettyGrammar where #if MIN_VERSION_base(4,11,0) import Prelude hiding ((<>)) #endif import AbsSyn render :: Doc -> String render = maybe "" ($ "") ppAbsSyn :: AbsSyn -> Doc ppAbsSyn (AbsSyn _ ds rs _) = vsep (vcat (map ppDirective ds) : map ppRule rs) ppDirective :: Directive a -> Doc ppDirective dir = case dir of TokenNonassoc xs -> prec "%nonassoc" xs TokenRight xs -> prec "%right" xs TokenLeft xs -> prec "%left" xs _ -> empty where prec x xs = text x <+> hsep (map text xs) ppRule :: Rule -> Doc ppRule (Rule name _ prods _) = text name $$ vcat (zipWith (<+>) starts (map ppProd prods)) where starts = text " :" : repeat (text " |") ppProd :: Prod -> Doc ppProd (Prod ts _ _ p) = psDoc <+> ppPrec p where psDoc = if null ts then text "{- empty -}" else hsep (map ppTerm ts) ppPrec :: Prec -> Doc ppPrec PrecNone = empty ppPrec PrecShift = text "%shift" ppPrec (PrecId x) = text "%prec" <+> text x ppTerm :: Term -> Doc ppTerm (App x ts) = text x <> ppTuple (map ppTerm ts) ppTuple :: [Doc] -> Doc ppTuple [] = empty ppTuple xs = parens (hsep (punctuate comma xs)) -------------------------------------------------------------------------------- -- Pretty printing combinator type Doc = Maybe ShowS empty :: Doc empty = Nothing punctuate :: Doc -> [Doc] -> [Doc] punctuate _ [] = [] punctuate _ [x] = [x] punctuate sep (x : xs) = (x <> sep) : punctuate sep xs comma :: Doc comma = char ',' char :: Char -> Doc char x = Just (showChar x) text :: String -> Doc text x = if null x then Nothing else Just (showString x) (<+>) :: Doc -> Doc -> Doc Nothing <+> y = y x <+> Nothing = x x <+> y = x <> char ' ' <> y (<>) :: Doc -> Doc -> Doc Nothing <> y = y x <> Nothing = x Just x <> Just y = Just (x . y) ($$) :: Doc -> Doc -> Doc Nothing $$ y = y x $$ Nothing = x x $$ y = x <> char '\n' <> y hsep :: [Doc] -> Doc hsep = hcat . 
punctuate (char ' ') vcat :: [Doc] -> Doc vcat = foldr ($$) empty vsep :: [Doc] -> Doc vsep = vcat . punctuate (char '\n') parens :: Doc -> Doc parens x = char '(' <> x <> char ')' hcat :: [Doc] -> Doc hcat = foldr (<>) empty happy-1.20.1.1/src/ProduceCode.lhs0000644000000000000000000016501607346545000014750 0ustar0000000000000000----------------------------------------------------------------------------- The code generator. (c) 1993-2001 Andy Gill, Simon Marlow ----------------------------------------------------------------------------- > module ProduceCode (produceParser) where > import Paths_happy ( version ) > import Data.Version ( showVersion ) > import Grammar > import Target ( Target(..) ) > import GenUtils ( mapDollarDollar, str, char, nl, strspace, > interleave, interleave', maybestr, > brack, brack' ) > import Data.Maybe ( isJust, isNothing, fromMaybe ) > import Data.Char > import Data.List > import Control.Monad ( forM_ ) > import Control.Monad.ST > import Data.Bits ( setBit ) > import Data.Array.ST ( STUArray ) > import Data.Array.Unboxed ( UArray ) > import Data.Array.MArray > import Data.Array.IArray %----------------------------------------------------------------------------- Produce the complete output file. > produceParser :: Grammar -- grammar info > -> ActionTable -- action table > -> GotoTable -- goto table > -> String -- stuff to go at the top > -> Maybe String -- module header > -> Maybe String -- module trailer > -> Target -- type of code required > -> Bool -- use coercions > -> Bool -- use ghc extensions > -> Bool -- strict parser > -> String > produceParser (Grammar > { productions = prods > , non_terminals = nonterms > , terminals = terms > , types = nt_types > , first_nonterm = first_nonterm' > , eof_term = eof > , first_term = fst_term > , token_names = token_names' > , lexer = lexer' > , imported_identity = imported_identity' > , monad = (use_monad,monad_context,monad_tycon,monad_then,monad_return) > , token_specs = token_rep > , token_type = token_type' > , starts = starts' > , error_handler = error_handler' > , error_sig = error_sig' > , attributetype = attributetype' > , attributes = attributes' > }) > action goto top_options module_header module_trailer > target coerce ghc strict > = ( top_opts > . maybestr module_header . nl > . str comment > -- comment goes *after* the module header, so that we > -- don't screw up any OPTIONS pragmas in the header. > . produceAbsSynDecl . nl > . produceTypes > . produceExpListPerState > . produceActionTable target > . produceReductions > . produceTokenConverter . nl > . produceIdentityStuff > . produceMonadStuff > . produceEntries > . produceStrict strict > . produceAttributes attributes' attributetype' . nl > . maybestr module_trailer . nl > ) "" > where > n_starts = length starts' > token = brack token_type' > > nowarn_opts = str "{-# OPTIONS_GHC -w #-}" . nl > -- XXX Happy-generated code is full of warnings. Some are easy to > -- fix, others not so easy, and others would require GHC version > -- #ifdefs. For now I'm just disabling all of them. > > partTySigs_opts = ifGeGhc710 (str "{-# OPTIONS_GHC -XPartialTypeSignatures #-}" . 
nl) > > intMaybeHash | ghc = str "Happy_GHC_Exts.Int#" > | otherwise = str "Prelude.Int" > > -- Parsing monad and its constraints > pty = str monad_tycon > pcont = str monad_context > > -- If GHC is enabled, wrap the content in a CPP ifdef that includes the > -- content and tests whether the GHC version is >= 7.10.3 > ifGeGhc710 :: (String -> String) -> String -> String > ifGeGhc710 content | ghc = str "#if __GLASGOW_HASKELL__ >= 710" . nl > . content > . str "#endif" . nl > | otherwise = id > > n_missing_types = length (filter isNothing (elems nt_types)) > happyAbsSyn = str "(HappyAbsSyn " . str wild_tyvars . str ")" > where wild_tyvars = unwords (replicate n_missing_types "_") > > -- This decides how to include (if at all) a type signature > -- See > filterTypeSig :: (String -> String) -> String -> String > filterTypeSig content | n_missing_types == 0 = content > | otherwise = ifGeGhc710 content > > top_opts = > nowarn_opts > . (case top_options of > "" -> str "" > _ -> str (unwords [ "{-# OPTIONS" > , top_options > , "#-}" > ]) . nl) > . partTySigs_opts %----------------------------------------------------------------------------- Make the abstract syntax type declaration, of the form: data HappyAbsSyn a t1 .. tn = HappyTerminal a | HappyAbsSyn1 t1 ... | HappyAbsSynn tn > produceAbsSynDecl If we're using coercions, we need to generate the injections etc. data HappyAbsSyn ti tj tk ... = HappyAbsSyn (where ti, tj, tk are type variables for the non-terminals which don't have type signatures). newtype HappyWrap = HappyWrap ti happyIn :: ti -> HappyAbsSyn ti tj tk ... happyIn x = unsafeCoerce# (HappyWrap x) {-# INLINE happyIn #-} happyOut :: HappyAbsSyn ti tj tk ... -> tn happyOut x = unsafeCoerce# x {-# INLINE happyOut #-} > | coerce > = let > happy_item = str "HappyAbsSyn " . str_tyvars > bhappy_item = brack' happy_item > > inject n ty > = (case ty of > Nothing -> id > Just tystr -> str "newtype " . mkHappyWrap n . str " = " . mkHappyWrap n . strspace . brack tystr . nl) > . mkHappyIn n . str " :: " . typeParam n ty > . str " -> " . bhappy_item . char '\n' > . mkHappyIn n . str " x = Happy_GHC_Exts.unsafeCoerce#" . strspace > . mkHappyWrapCon ty n (str "x") > . nl > . str "{-# INLINE " . mkHappyIn n . str " #-}" > > extract n ty > = mkHappyOut n . str " :: " . bhappy_item > . str " -> " . typeParamOut n ty . char '\n' > . mkHappyOut n . str " x = Happy_GHC_Exts.unsafeCoerce# x\n" > . str "{-# INLINE " . mkHappyOut n . str " #-}" > in > str "newtype " . happy_item . str " = HappyAbsSyn HappyAny\n" -- see NOTE below > . interleave "\n" (map str > [ "#if __GLASGOW_HASKELL__ >= 607", > "type HappyAny = Happy_GHC_Exts.Any", > "#else", > "type HappyAny = forall a . a", > "#endif" ]) > . interleave "\n" > [ inject n ty . nl . extract n ty | (n,ty) <- assocs nt_types ] > -- token injector > . str "happyInTok :: " . token . str " -> " . bhappy_item > . str "\nhappyInTok x = Happy_GHC_Exts.unsafeCoerce# x\n{-# INLINE happyInTok #-}\n" > -- token extractor > . str "happyOutTok :: " . bhappy_item . str " -> " . token > . str "\nhappyOutTok x = Happy_GHC_Exts.unsafeCoerce# x\n{-# INLINE happyOutTok #-}\n" > . str "\n" NOTE: in the coerce case we always coerce all the semantic values to HappyAbsSyn which is declared to be a synonym for Any. This is the type that GHC officially knows nothing about - it's the same type used to implement Dynamic. (in GHC 6.6 and older, Any didn't exist, so we use the closest approximation namely forall a . a). 
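(For a concrete instance of this scheme, see the bootstrapped src/Parser.hs earlier in this distribution: there HappyAbsSyn is a newtype around HappyAny, and e.g. happyIn5/happyOut5 coerce ([Rule]) values into and out of it.)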
It's vital that GHC doesn't know anything about this type, because it will use any knowledge it has to optimise, and if the knowledge is false then the optimisation may also be false. Previously we used (() -> ()) as the type here, but this led to bogus optimisations (see GHC ticket #1616). Also, note that we must use a newtype instead of just a type synonym, because otherwise the type arguments to the HappyAbsSyn type constructor will lose information. See happy/tests/bug001 for an example where this matters. ... Otherwise, output the declaration in full... > | otherwise > = str "data HappyAbsSyn " . str_tyvars > . str "\n\t= HappyTerminal " . token > . str "\n\t| HappyErrorToken Prelude.Int\n" > . interleave "\n" > [ str "\t| " . makeAbsSynCon n . strspace . typeParam n ty > | (n, ty) <- assocs nt_types, > (nt_types_index ! n) == n] > where all_tyvars = [ 't':show n | (n, Nothing) <- assocs nt_types ] > str_tyvars = str (unwords all_tyvars) %----------------------------------------------------------------------------- Type declarations of the form: type HappyReduction a b = .... action_0, action_1 :: Int -> HappyReduction a b reduction_1, ... :: HappyReduction a b These are only generated if types for *all* rules are given (and not for array-based parsers -- types aren't as important there). > produceTypes > | target == TargetArrayBased = id > | all isJust (elems nt_types) = > happyReductionDefinition . str "\n\n" > . interleave' ",\n " > [ mkActionName i | (i,_action') <- zip [ 0 :: Int .. ] > (assocs action) ] > . str " :: " . str monad_context . str " => " > . intMaybeHash . str " -> " . happyReductionValue . str "\n\n" > . interleave' ",\n " > [ mkReduceFun i | > (i,_action) <- zip [ n_starts :: Int .. ] > (drop n_starts prods) ] > . str " :: " . str monad_context . str " => " > . happyReductionValue . str "\n\n" > | otherwise = id > where tokens = > case lexer' of > Nothing -> char '[' . token . str "] -> " > Just _ -> id > happyReductionDefinition = > str "{- to allow type-synonyms as our monads (likely\n" > . str " - with explicitly-specified bind and return)\n" > . str " - in Haskell98, it seems that with\n" > . str " - /type M a = .../, then /(HappyReduction M)/\n" > . str " - is not allowed. But Happy is a\n" > . str " - code-generator that can just substitute it.\n" > . str "type HappyReduction m = " > . happyReduction (str "m") > . str "\n-}" > happyReductionValue = > str "({-" > . str "HappyReduction " > . brack monad_tycon > . str " = -}" > . happyReduction (brack monad_tycon) > . str ")" > happyReduction m = > str "\n\t " > . intMaybeHash > . str " \n\t-> " . token > . str "\n\t-> HappyState " > . token > . str " (HappyStk HappyAbsSyn -> " . tokens . result > . str ")\n\t" > . str "-> [HappyState " > . token > . str " (HappyStk HappyAbsSyn -> " . tokens . result > . str ")] \n\t-> HappyStk HappyAbsSyn \n\t-> " > . tokens > . result > where result = m . str " HappyAbsSyn" %----------------------------------------------------------------------------- Next, the reduction functions. Each one has the following form: happyReduce_n_m = happyReduce n m reduction where { reduction ( (HappyAbsSynX | HappyTerminal) happy_var_1 : .. (HappyAbsSynX | HappyTerminal) happy_var_q : happyRest) = HappyAbsSynY ( <> ) : happyRest ; reduction _ _ = notHappyAtAll n m where n is the non-terminal number, and m is the rule number. NOTES on monad productions. 
These look like happyReduce_275 = happyMonadReduce 0# 119# happyReduction_275 happyReduction_275 (happyRest) = happyThen (code) (\r -> happyReturn (HappyAbsSyn r)) why can't we pass the HappyAbsSyn constructor to happyMonadReduce and save duplicating the happyThen/happyReturn in each monad production? Because this would require happyMonadReduce to be polymorphic in the result type of the monadic action, and since in array-based parsers the whole thing is one recursive group, we'd need a type signature on happyMonadReduce to get polymorphic recursion. Sigh. > produceReductions = > interleave "\n\n" > (zipWith produceReduction (drop n_starts prods) [ n_starts .. ]) > produceReduction (Production nt toks (code,vars_used) _) i > | is_monad_prod && (use_monad || imported_identity') > = mkReductionHdr (showInt lt) monad_reduce > . char '(' . interleave " `HappyStk`\n\t" tokPatterns > . str "happyRest) tk\n\t = happyThen (" > . str "(" > . tokLets (char '(' . str code' . char ')') > . str ")" > . (if monad_pass_token then str " tk" else id) > . str "\n\t) (\\r -> happyReturn (" . this_absSynCon . str " r))" > | specReduceFun lt > = mkReductionHdr id ("happySpecReduce_" ++ show lt) > . interleave "\n\t" tokPatterns > . str " = " > . tokLets ( > this_absSynCon . str "\n\t\t " > . char '(' . str code' . str "\n\t)" > ) > . (if coerce || null toks || null vars_used then > id > else > nl . reductionFun . strspace > . interleave " " (replicate (length toks) (str "_")) > . str " = notHappyAtAll ") > | otherwise > = mkReductionHdr (showInt lt) "happyReduce" > . char '(' . interleave " `HappyStk`\n\t" tokPatterns > . str "happyRest)\n\t = " > . tokLets > ( this_absSynCon . str "\n\t\t " > . char '(' . str code'. str "\n\t) `HappyStk` happyRest" > ) > where > (code', is_monad_prod, monad_pass_token, monad_reduce) > = case code of > '%':'%':code1 -> (code1, True, True, "happyMonad2Reduce") > '%':'^':code1 -> (code1, True, True, "happyMonadReduce") > '%':code1 -> (code1, True, False, "happyMonadReduce") > _ -> (code, False, False, "") > -- adjust the nonterminal number for the array-based parser > -- so that nonterminals start at zero. > adjusted_nt | target == TargetArrayBased = nt - first_nonterm' > | otherwise = nt > > mkReductionHdr lt' s = > let tysig = case lexer' of > Nothing -> id > _ | target == TargetArrayBased -> > mkReduceFun i . str " :: " . pcont > . str " => " . intMaybeHash > . str " -> " . str token_type' > . str " -> " . intMaybeHash > . str " -> Happy_IntList -> HappyStk " > . happyAbsSyn . str " -> " > . pty . str " " . happyAbsSyn . str "\n" > | otherwise -> id in > filterTypeSig tysig . mkReduceFun i . str " = " > . str s . strspace . lt' . strspace . showInt adjusted_nt > . strspace . reductionFun . nl > . reductionFun . strspace > > reductionFun = str "happyReduction_" . shows i > > tokPatterns > | coerce = reverse (map mkDummyVar [1 .. length toks]) > | otherwise = reverse (zipWith tokPattern [1..] toks) > > tokPattern n _ | n `notElem` vars_used = char '_' > tokPattern n t | t >= firstStartTok && t < fst_term > = if coerce > then mkHappyWrapCon (nt_types ! t) t (mkHappyVar n) > else brack' ( > makeAbsSynCon t . str " " . mkHappyVar n > ) > tokPattern n t > = if coerce > then mkHappyTerminalVar n t > else str "(HappyTerminal " > . mkHappyTerminalVar n t > . char ')' > > tokLets code'' > | coerce && not (null cases) > = interleave "\n\t" cases > . code'' . str (replicate (length cases) '}') > | otherwise = code'' > > cases = [ str "case " . extract t . strspace . mkDummyVar n > . 
str " of { " . tokPattern n t . str " -> " > | (n,t) <- zip [1..] toks, > n `elem` vars_used ] > > extract t | t >= firstStartTok && t < fst_term = mkHappyOut t > | otherwise = str "happyOutTok" > > lt = length toks > this_absSynCon | coerce = mkHappyIn nt > | otherwise = makeAbsSynCon nt %----------------------------------------------------------------------------- The token conversion function. > produceTokenConverter > = case lexer' of { > > Nothing -> > str "happyNewToken action sts stk [] =\n\t" > . eofAction "notHappyAtAll" > . str " []\n\n" > . str "happyNewToken action sts stk (tk:tks) =\n\t" > . str "let cont i = " . doAction . str " sts stk tks in\n\t" > . str "case tk of {\n\t" > . interleave ";\n\t" (map doToken token_rep) > . str "_ -> happyError' ((tk:tks), [])\n\t" > . str "}\n\n" > . str "happyError_ explist " . eofTok . str " tk tks = happyError' (tks, explist)\n" > . str "happyError_ explist _ tk tks = happyError' ((tk:tks), explist)\n"; > -- when the token is EOF, tk == _|_ (notHappyAtAll) > -- so we must not pass it to happyError' > Just (lexer'',eof') -> > case (target, ghc) of > (TargetHaskell, True) -> > str "happyNewToken :: " . pcont . str " => " > . str "(Happy_GHC_Exts.Int#\n" > . str " -> Happy_GHC_Exts.Int#\n" > . str " -> " . token . str "\n" > . str " -> HappyState " . token . str " (t -> " > . pty . str " a)\n" > . str " -> [HappyState " . token . str " (t -> " > . pty . str " a)]\n" > . str " -> t\n" > . str " -> " . pty . str " a)\n" > . str " -> [HappyState " . token . str " (t -> " > . pty . str " a)]\n" > . str " -> t\n" > . str " -> " . pty . str " a\n" > _ -> id > . str "happyNewToken action sts stk\n\t= " > . str lexer'' > . str "(\\tk -> " > . str "\n\tlet cont i = " > . doAction > . str " sts stk in\n\t" > . str "case tk of {\n\t" > . str (eof' ++ " -> ") > . eofAction "tk" . str ";\n\t" > . interleave ";\n\t" (map doToken token_rep) > . str "_ -> happyError' (tk, [])\n\t" > . str "})\n\n" > . str "happyError_ explist " . eofTok . str " tk = happyError' (tk, explist)\n" > . str "happyError_ explist _ tk = happyError' (tk, explist)\n"; > -- superfluous pattern match needed to force happyError_ to > -- have the correct type. > } > where > eofAction tk = > (case target of > TargetArrayBased -> > str "happyDoAction " . eofTok . strspace . str tk . str " action" > _ -> str "action " . eofTok . strspace . eofTok > . strspace . str tk . str " (HappyState action)") > . str " sts stk" > eofTok = showInt (tokIndex eof) > > doAction = case target of > TargetArrayBased -> str "happyDoAction i tk action" > _ -> str "action i i tk (HappyState action)" > > doToken (i,tok) > = str (removeDollarDollar tok) > . str " -> cont " > . showInt (tokIndex i) Use a variable rather than '_' to replace '$$', so we can use it on the left hand side of '@'. > removeDollarDollar xs = case mapDollarDollar xs of > Nothing -> xs > Just fn -> fn "happy_dollar_dollar" > mkHappyTerminalVar :: Int -> Int -> String -> String > mkHappyTerminalVar i t = > case tok_str_fn of > Nothing -> pat > Just fn -> brack (fn (pat [])) > where > tok_str_fn = case lookup t token_rep of > Nothing -> Nothing > Just str' -> mapDollarDollar str' > pat = mkHappyVar i > tokIndex > = case target of > TargetHaskell -> id > TargetArrayBased -> \i -> i - n_nonterminals - n_starts - 2 > -- tokens adjusted to start at zero, see ARRAY_NOTES %----------------------------------------------------------------------------- Action Tables. 
Here we do a bit of trickery and replace the normal default action (failure) for each state with at least one reduction action. For each such state, we pick one reduction action to be the default action. This should make the code smaller without affecting the speed. It changes the semantics for errors, however; errors could be detected in a different state now (but they'll still be detected at the same point in the token stream). Further notes on default cases: Default reductions are important when error recovery is considered: we don't allow reductions whilst in error recovery, so we'd like the parser to automatically reduce down to a state where the error token can be shifted before entering error recovery. This is achieved by using default reductions wherever possible. One case to consider is: State 345 con -> conid . (rule 186) qconid -> conid . (rule 212) error reduce using rule 212 '{' reduce using rule 186 etc. We should make reduce_212 the default reduction here. So the rules become: * if there is a production error -> reduce_n then make reduce_n the default action. * if there is a non-reduce action for the error token, the default action for this state must be "fail". * otherwise pick the most popular reduction in this state for the default. * if there are no reduce actions in this state, then the default action remains 'enter error recovery'. This gives us an invariant: there won't ever be a production of the type 'error -> reduce_n' explicitly in the grammar, which means that whenever an unexpected token occurs, either the parser will reduce straight back to a state where the error token can be shifted, or if none exists, we'll get a parse error. In theory, we won't need the machinery to discard states in the parser... > produceActionTable TargetHaskell > = foldr (.) id (map (produceStateFunction goto) (assocs action)) > > produceActionTable TargetArrayBased > = produceActionArray > . produceReduceArray > . str "happy_n_terms = " . shows n_terminals . str " :: Prelude.Int\n" > . str "happy_n_nonterms = " . shows n_nonterminals . str " :: Prelude.Int\n\n" > > produceExpListPerState > = produceExpListArray > . str "{-# NOINLINE happyExpListPerState #-}\n" > . str "happyExpListPerState st =\n" > . str " token_strs_expected\n" > . str " where token_strs = " . str (show $ elems token_names') . str "\n" > . str " bit_start = st Prelude.* " . str (show nr_tokens) . str "\n" > . str " bit_end = (st Prelude.+ 1) Prelude.* " . str (show nr_tokens) . str "\n" > . str " read_bit = readArrayBit happyExpList\n" > . str " bits = Prelude.map read_bit [bit_start..bit_end Prelude.- 1]\n" > . str " bits_indexed = Prelude.zip bits [0.." > . str (show (nr_tokens - 1)) . str "]\n" > . str " token_strs_expected = Prelude.concatMap f bits_indexed\n" > . str " f (Prelude.False, _) = []\n" > . str " f (Prelude.True, nr) = [token_strs Prelude.!! nr]\n" > . str "\n" > where (first_token, last_token) = bounds token_names' > nr_tokens = last_token - first_token + 1 > > produceStateFunction goto' (state, acts) > = foldr (.) id (map produceActions assocs_acts) > . foldr (.) id (map produceGotos (assocs gotos)) > . mkActionName state > . (if ghc > then str " x = happyTcHack x " > else str " _ = ") > . mkAction default_act > . (case default_act of > LR'Fail -> callHappyExpListPerState > LR'MustFail -> callHappyExpListPerState > _ -> str "") > . str "\n\n" > > where gotos = goto' ! state > > callHappyExpListPerState = str " (happyExpListPerState " > . str (show state) . 
str ")" > > produceActions (_, LR'Fail{-'-}) = id > produceActions (t, action'@(LR'Reduce{-'-} _ _)) > | action' == default_act = id > | otherwise = producePossiblyFailingAction t action' > produceActions (t, action') > = producePossiblyFailingAction t action' > > producePossiblyFailingAction t action' > = actionFunction t > . mkAction action' > . (case action' of > LR'Fail -> str " []" > LR'MustFail -> str " []" > _ -> str "") > . str "\n" > > produceGotos (t, Goto i) > = actionFunction t > . str "happyGoto " . mkActionName i . str "\n" > produceGotos (_, NoGoto) = id > > actionFunction t > = mkActionName state . strspace > . ('(' :) . showInt t > . str ") = " > > default_act = getDefault assocs_acts > > assocs_acts = assocs acts action array indexed by (terminal * last_state) + state > produceActionArray > | ghc > = str "happyActOffsets :: HappyAddr\n" > . str "happyActOffsets = HappyA# \"" --" > . str (checkedHexChars min_off act_offs) > . str "\"#\n\n" --" > > . str "happyGotoOffsets :: HappyAddr\n" > . str "happyGotoOffsets = HappyA# \"" --" > . str (checkedHexChars min_off goto_offs) > . str "\"#\n\n" --" > > . str "happyAdjustOffset :: Happy_GHC_Exts.Int# -> Happy_GHC_Exts.Int#\n" > . str "happyAdjustOffset off = " > . (if length table < 32768 > then str "off" > else str "if happyLt off (" . shows min_off . str "# :: Happy_GHC_Exts.Int#)" > . str " then off Happy_GHC_Exts.+# 65536#" > . str " else off") > . str "\n\n" --" > > . str "happyDefActions :: HappyAddr\n" > . str "happyDefActions = HappyA# \"" --" > . str (hexChars defaults) > . str "\"#\n\n" --" > > . str "happyCheck :: HappyAddr\n" > . str "happyCheck = HappyA# \"" --" > . str (hexChars check) > . str "\"#\n\n" --" > > . str "happyTable :: HappyAddr\n" > . str "happyTable = HappyA# \"" --" > . str (hexChars table) > . str "\"#\n\n" --" > | otherwise > = str "happyActOffsets :: Happy_Data_Array.Array Prelude.Int Prelude.Int\n" > . str "happyActOffsets = Happy_Data_Array.listArray (0," > . shows n_states . str ") ([" > . interleave' "," (map shows act_offs) > . str "\n\t])\n\n" > > . str "happyGotoOffsets :: Happy_Data_Array.Array Prelude.Int Prelude.Int\n" > . str "happyGotoOffsets = Happy_Data_Array.listArray (0," > . shows n_states . str ") ([" > . interleave' "," (map shows goto_offs) > . str "\n\t])\n\n" > > . str "happyAdjustOffset :: Prelude.Int -> Prelude.Int\n" > . str "happyAdjustOffset = Prelude.id\n\n" > > . str "happyDefActions :: Happy_Data_Array.Array Prelude.Int Prelude.Int\n" > . str "happyDefActions = Happy_Data_Array.listArray (0," > . shows n_states . str ") ([" > . interleave' "," (map shows defaults) > . str "\n\t])\n\n" > > . str "happyCheck :: Happy_Data_Array.Array Prelude.Int Prelude.Int\n" > . str "happyCheck = Happy_Data_Array.listArray (0," > . shows table_size . str ") ([" > . interleave' "," (map shows check) > . str "\n\t])\n\n" > > . str "happyTable :: Happy_Data_Array.Array Prelude.Int Prelude.Int\n" > . str "happyTable = Happy_Data_Array.listArray (0," > . shows table_size . str ") ([" > . interleave' "," (map shows table) > . str "\n\t])\n\n" > produceExpListArray > | ghc > = str "happyExpList :: HappyAddr\n" > . str "happyExpList = HappyA# \"" --" > . str (hexChars explist) > . str "\"#\n\n" --" > | otherwise > = str "happyExpList :: Happy_Data_Array.Array Prelude.Int Prelude.Int\n" > . str "happyExpList = Happy_Data_Array.listArray (0," > . shows table_size . str ") ([" > . interleave' "," (map shows explist) > . 
str "\n\t])\n\n" > (_, last_state) = bounds action > n_states = last_state + 1 > n_terminals = length terms > n_nonterminals = length nonterms - n_starts -- lose %starts > > (act_offs,goto_offs,table,defaults,check,explist,min_off) > = mkTables action goto first_nonterm' fst_term > n_terminals n_nonterminals n_starts (bounds token_names') > > table_size = length table - 1 > > produceReduceArray > = {- str "happyReduceArr :: Array Int a\n" -} > str "happyReduceArr = Happy_Data_Array.array (" > . shows (n_starts :: Int) -- omit the %start reductions > . str ", " > . shows n_rules > . str ") [\n" > . interleave' ",\n" (map reduceArrElem [n_starts..n_rules]) > . str "\n\t]\n\n" > n_rules = length prods - 1 :: Int > showInt i | ghc = shows i . showChar '#' > | otherwise = shows i This lets examples like: data HappyAbsSyn t1 = HappyTerminal ( HaskToken ) | HappyAbsSyn1 ( HaskExp ) | HappyAbsSyn2 ( HaskExp ) | HappyAbsSyn3 t1 *share* the defintion for ( HaskExp ) data HappyAbsSyn t1 = HappyTerminal ( HaskToken ) | HappyAbsSyn1 ( HaskExp ) | HappyAbsSyn3 t1 ... cuting down on the work that the type checker has to do. Note, this *could* introduce lack of polymophism, for types that have alphas in them. Maybe we should outlaw them inside { } > nt_types_index :: Array Int Int > nt_types_index = array (bounds nt_types) > [ (a, fn a b) | (a, b) <- assocs nt_types ] > where > fn n Nothing = n > fn _ (Just a) = fromMaybe (error "can't find an item in list") (lookup a assoc_list) > assoc_list = [ (b,a) | (a, Just b) <- assocs nt_types ] > makeAbsSynCon = mkAbsSynCon nt_types_index > produceIdentityStuff | use_monad = id > | imported_identity' = > str "type HappyIdentity = Identity\n" > . str "happyIdentity = Identity\n" > . str "happyRunIdentity = runIdentity\n\n" > | otherwise = > str "newtype HappyIdentity a = HappyIdentity a\n" > . str "happyIdentity = HappyIdentity\n" > . str "happyRunIdentity (HappyIdentity a) = a\n\n" > . str "instance Prelude.Functor HappyIdentity where\n" > . str " fmap f (HappyIdentity a) = HappyIdentity (f a)\n\n" > . str "instance Applicative HappyIdentity where\n" > . str " pure = HappyIdentity\n" > . str " (<*>) = ap\n" > . str "instance Prelude.Monad HappyIdentity where\n" > . str " return = pure\n" > . str " (HappyIdentity p) >>= q = q p\n\n" MonadStuff: - with no %monad or %lexer: happyThen :: () => HappyIdentity a -> (a -> HappyIdentity b) -> HappyIdentity b happyReturn :: () => a -> HappyIdentity a happyThen1 m k tks = happyThen m (\a -> k a tks) happyReturn1 = \a tks -> happyReturn a - with %monad: happyThen :: CONTEXT => P a -> (a -> P b) -> P b happyReturn :: CONTEXT => a -> P a happyThen1 m k tks = happyThen m (\a -> k a tks) happyReturn1 = \a tks -> happyReturn a - with %monad & %lexer: happyThen :: CONTEXT => P a -> (a -> P b) -> P b happyReturn :: CONTEXT => a -> P a happyThen1 = happyThen happyReturn1 = happyReturn > produceMonadStuff = > str "happyThen :: " . pcont . str " => " . pty > . str " a -> (a -> " . pty > . str " b) -> " . pty . str " b\n" > . str "happyThen = " . brack monad_then . nl > . str "happyReturn :: " . pcont . str " => a -> " . pty . str " a\n" > . str "happyReturn = " . brack monad_return . nl > . case lexer' of > Nothing -> > str "happyThen1 m k tks = (" . str monad_then > . str ") m (\\a -> k a tks)\n" > . str "happyReturn1 :: " . pcont . str " => a -> b -> " . pty . str " a\n" > . str "happyReturn1 = \\a tks -> " . brack monad_return > . str " a\n" > . str "happyError' :: " . str monad_context . str " => ([" > . token > . 
str "], [Prelude.String]) -> " > . str monad_tycon > . str " a\n" > . str "happyError' = " > . str (if use_monad then "" else "HappyIdentity Prelude.. ") > . errorHandler . str "\n" > _ -> > let > happyParseSig > | target == TargetArrayBased = > str "happyParse :: " . pcont . str " => " . intMaybeHash > . str " -> " . pty . str " " . happyAbsSyn . str "\n" > . str "\n" > | otherwise = id > newTokenSig > | target == TargetArrayBased = > str "happyNewToken :: " . pcont . str " => " . intMaybeHash > . str " -> Happy_IntList -> HappyStk " . happyAbsSyn > . str " -> " . pty . str " " . happyAbsSyn . str"\n" > . str "\n" > | otherwise = id > doActionSig > | target == TargetArrayBased = > str "happyDoAction :: " . pcont . str " => " . intMaybeHash > . str " -> " . str token_type' . str " -> " . intMaybeHash > . str " -> Happy_IntList -> HappyStk " . happyAbsSyn > . str " -> " . pty . str " " . happyAbsSyn . str "\n" > . str "\n" > | otherwise = id > reduceArrSig > | target == TargetArrayBased = > str "happyReduceArr :: " . pcont > . str " => Happy_Data_Array.Array Prelude.Int (" . intMaybeHash > . str " -> " . str token_type' . str " -> " . intMaybeHash > . str " -> Happy_IntList -> HappyStk " . happyAbsSyn > . str " -> " . pty . str " " . happyAbsSyn . str ")\n" > . str "\n" > | otherwise = id in > filterTypeSig (happyParseSig . newTokenSig . doActionSig . reduceArrSig) > . str "happyThen1 :: " . pcont . str " => " . pty > . str " a -> (a -> " . pty > . str " b) -> " . pty . str " b\n" > . str "happyThen1 = happyThen\n" > . str "happyReturn1 :: " . pcont . str " => a -> " . pty . str " a\n" > . str "happyReturn1 = happyReturn\n" > . str "happyError' :: " . str monad_context . str " => (" > . token . str ", [Prelude.String]) -> " > . str monad_tycon > . str " a\n" > . str "happyError' tk = " > . str (if use_monad then "" else "HappyIdentity ") > . errorHandler . str " tk\n" An error handler specified with %error is passed the current token when used with %lexer, but happyError (the old way but kept for compatibility) is not passed the current token. Also, the %errorhandlertype directive determins the API of the provided function. > errorHandler = > case error_handler' of > Just h -> case error_sig' of > ErrorHandlerTypeExpList -> str h > ErrorHandlerTypeDefault -> str "(\\(tokens, _) -> " . str h . str " tokens)" > Nothing -> case lexer' of > Nothing -> str "(\\(tokens, _) -> happyError tokens)" > Just _ -> str "(\\(tokens, explist) -> happyError)" > reduceArrElem n > = str "\t(" . shows n . str " , " > . str "happyReduce_" . shows n . char ')' ----------------------------------------------------------------------------- -- Produce the parser entry and exit points > produceEntries > = interleave "\n\n" (map produceEntry (zip starts' [0..])) > . if null attributes' then id else produceAttrEntries starts' > produceEntry :: ((String, t0, Int, t1), Int) -> String -> String > produceEntry ((name, _start_nonterm, accept_nonterm, _partial), no) > = (if null attributes' then str name else str "do_" . str name) > . maybe_tks > . str " = " > . str unmonad > . str "happySomeParser where\n" > . str " happySomeParser = happyThen (happyParse " > . case target of > TargetHaskell -> str "action_" . shows no > TargetArrayBased > | ghc -> shows no . str "#" > | otherwise -> shows no > . maybe_tks > . str ") " > . brack' (if coerce > then str "\\x -> happyReturn (let {" . mkHappyWrapCon (nt_types ! accept_nonterm) accept_nonterm (str "x'") > . str " = " . mkHappyOut accept_nonterm . 
str " x} in x')" > else str "\\x -> case x of {HappyAbsSyn" > . shows (nt_types_index ! accept_nonterm) > . str " z -> happyReturn z; _other -> notHappyAtAll }" > ) > where > maybe_tks | isNothing lexer' = str " tks" > | otherwise = id > unmonad | use_monad = "" > | otherwise = "happyRunIdentity " > produceAttrEntries starts'' > = interleave "\n\n" (map f starts'') > where > f = case (use_monad,lexer') of > (True,Just _) -> \(name,_,_,_) -> monadAndLexerAE name > (True,Nothing) -> \(name,_,_,_) -> monadAE name > (False,Just _) -> error "attribute grammars not supported for non-monadic parsers with %lexer" > (False,Nothing)-> \(name,_,_,_) -> regularAE name > > defaultAttr = fst (head attributes') > > monadAndLexerAE name > = str name . str " = " > . str "do { " > . str "f <- do_" . str name . str "; " > . str "let { (conds,attrs) = f happyEmptyAttrs } in do { " > . str "Prelude.sequence_ conds; " > . str "Prelude.return (". str defaultAttr . str " attrs) }}" > monadAE name > = str name . str " toks = " > . str "do { " > . str "f <- do_" . str name . str " toks; " > . str "let { (conds,attrs) = f happyEmptyAttrs } in do { " > . str "Prelude.sequence_ conds; " > . str "Prelude.return (". str defaultAttr . str " attrs) }}" > regularAE name > = str name . str " toks = " > . str "let { " > . str "f = do_" . str name . str " toks; " > . str "(conds,attrs) = f happyEmptyAttrs; " > . str "x = Prelude.foldr Prelude.seq attrs conds; " > . str "} in (". str defaultAttr . str " x)" ---------------------------------------------------------------------------- -- Produce attributes declaration for attribute grammars > produceAttributes :: [(String, String)] -> String -> String -> String > produceAttributes [] _ = id > produceAttributes attrs attributeType > = str "data " . attrHeader . str " = HappyAttributes {" . attributes' . str "}" . nl > . str "happyEmptyAttrs = HappyAttributes {" . attrsErrors . str "}" . nl > where attributes' = foldl1 (\x y -> x . str ", " . y) $ map formatAttribute attrs > formatAttribute (ident,typ) = str ident . str " :: " . str typ > attrsErrors = foldl1 (\x y -> x . str ", " . y) $ map attrError attrs > attrError (ident,_) = str ident . str " = Prelude.error \"invalid reference to attribute '" . str ident . str "'\"" > attrHeader = > case attributeType of > [] -> str "HappyAttributes" > _ -> str attributeType ----------------------------------------------------------------------------- -- Strict or non-strict parser > produceStrict :: Bool -> String -> String > produceStrict strict > | strict = str "happySeq = happyDoSeq\n\n" > | otherwise = str "happySeq = happyDontSeq\n\n" ----------------------------------------------------------------------------- Replace all the $n variables with happy_vars, and return a list of all the vars used in this piece of code. > actionVal :: LRAction -> Int > actionVal (LR'Shift state _) = state + 1 > actionVal (LR'Reduce rule _) = -(rule + 1) > actionVal LR'Accept = -1 > actionVal (LR'Multiple _ a) = actionVal a > actionVal LR'Fail = 0 > actionVal LR'MustFail = 0 > mkAction :: LRAction -> String -> String > mkAction (LR'Shift i _) = str "happyShift " . mkActionName i > mkAction LR'Accept = str "happyAccept" > mkAction LR'Fail = str "happyFail" > mkAction LR'MustFail = str "happyFail" > mkAction (LR'Reduce i _) = str "happyReduce_" . shows i > mkAction (LR'Multiple _ a) = mkAction a > mkActionName :: Int -> String -> String > mkActionName i = str "action_" . shows i See notes under "Action Tables" above for some subtleties in this function. 
> getDefault :: [(Name, LRAction)] -> LRAction > getDefault actions = > -- pick out the action for the error token, if any > case [ act | (e, act) <- actions, e == errorTok ] of > > -- use error reduction as the default action, if there is one. > act@(LR'Reduce _ _) : _ -> act > act@(LR'Multiple _ (LR'Reduce _ _)) : _ -> act > > -- if the error token is shifted or otherwise, don't generate > -- a default action. This is *important*! > (act : _) | act /= LR'Fail -> LR'Fail > > -- no error actions, pick a reduce to be the default. > _ -> case reduces of > [] -> LR'Fail > (act:_) -> act -- pick the first one we see for now > > where reduces > = [ act | (_, act@(LR'Reduce _ _)) <- actions ] > ++ [ act | (_, LR'Multiple _ act@(LR'Reduce _ _)) <- actions ] ----------------------------------------------------------------------------- -- Generate packed parsing tables. -- happyActOff ! state -- Offset within happyTable of actions for state -- happyGotoOff ! state -- Offset within happyTable of gotos for state -- happyTable -- Combined action/goto table -- happyDefAction ! state -- Default action for state -- happyCheck -- Says whether a happyTable entry is valid for the current -- token (if not, the state's default action is used) -- the table is laid out such that the action for a given state & token -- can be found by: -- -- off = happyActOff ! state -- off_i = off + token -- check | off_i >= 0 = (happyCheck ! off_i) == token -- | otherwise = False -- action | check = happyTable ! off_i -- | otherwise = happyDefAction ! state -- figure out the default action for each state. This will leave some -- states with no *real* actions left. -- for each state with one or more real actions, sort states by -- width/spread of tokens with real actions, then by number of -- elements with actions, so we get the widest/densest states -- first. (I guess the rationale here is that we can use the -- thin/sparse states to fill in the holes later, and also we -- have to do less searching for the more complicated cases). -- try to pair up states with identical sets of real actions. -- try to fit the actions into the check table, using the ordering -- from above. 
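As a worked instance of that lookup scheme, here is a hedged, self-contained sketch; the array contents are invented purely for demonstration (the real contents come from mkTables below), and the names actOff/defAct/checkT/tableT/lookupAct are this sketch's own:

    import Data.Array

    actOff, defAct :: Array Int Int
    actOff = listArray (0, 1) [3, 0]    -- per-state offsets (invented)
    defAct = listArray (0, 1) [-1, -2]  -- per-state default actions (invented)

    checkT, tableT :: Array Int Int
    checkT = listArray (0, 9) [-1, -1, -1, -1, 1, -1, -1, -1, -1, -1]
    tableT = listArray (0, 9) [ 0,  0,  0,  0, 7,  0,  0,  0,  0,  0]

    lookupAct :: Int -> Int -> Int
    lookupAct state token
      | off_i >= 0 && checkT ! off_i == token = tableT ! off_i  -- real entry
      | otherwise                             = defAct ! state  -- default
      where off_i = (actOff ! state) + token

    -- lookupAct 0 1 == 7  (offset 3 + token 1 hits a checked entry)
    -- lookupAct 1 1 == -2 (check fails, so state 1's default is used)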
> mkTables > :: ActionTable -> GotoTable -> Name -> Int -> Int -> Int -> Int -> (Int, Int) -> > ( [Int] -- happyActOffsets > , [Int] -- happyGotoOffsets > , [Int] -- happyTable > , [Int] -- happyDefAction > , [Int] -- happyCheck > , [Int] -- happyExpList > , Int -- happyMinOffset > ) > > mkTables action goto first_nonterm' fst_term > n_terminals n_nonterminals n_starts > token_names_bound > > = ( elems act_offs > , elems goto_offs > , take max_off (elems table) > , def_actions > , take max_off (elems check) > , elems explist > , min_off > ) > where > > (table,check,act_offs,goto_offs,explist,min_off,max_off) > = runST (genTables (length actions) > max_token token_names_bound > sorted_actions explist_actions) > > -- the maximum token number used in the parser > max_token = max n_terminals (n_starts+n_nonterminals) - 1 > > def_actions = map (\(_,_,def,_,_,_) -> def) actions > > actions :: [TableEntry] > actions = > [ (ActionEntry, > state, > actionVal default_act, > if null acts'' then 0 > else fst (last acts'') - fst (head acts''), > length acts'', > acts'') > | (state, acts) <- assocs action, > let (err:_dummy:vec) = assocs acts > vec' = drop (n_starts+n_nonterminals) vec > acts' = filter notFail (err:vec') > default_act = getDefault acts' > acts'' = mkActVals acts' default_act > ] > > explist_actions :: [(Int, [Int])] > explist_actions = [ (state, concatMap f $ assocs acts) > | (state, acts) <- assocs action ] > where > f (t, LR'Shift _ _ ) = [t - fst token_names_bound] > f (_, _) = [] > > -- adjust terminals by -(fst_term+1), so they start at 1 (error is 0). > -- (see ARRAY_NOTES) > adjust token | token == errorTok = 0 > | otherwise = token - fst_term + 1 > > mkActVals assocs' default_act = > [ (adjust token, actionVal act) > | (token, act) <- assocs' > , act /= default_act ] > > gotos :: [TableEntry] > gotos = [ (GotoEntry, > state, 0, > if null goto_vals then 0 > else fst (last goto_vals) - fst (head goto_vals), > length goto_vals, > goto_vals > ) > | (state, goto_arr) <- assocs goto, > let goto_vals = mkGotoVals (assocs goto_arr) > ] > > -- adjust nonterminals by -first_nonterm', so they start at zero > -- (see ARRAY_NOTES) > mkGotoVals assocs' = > [ (token - first_nonterm', i) | (token, Goto i) <- assocs' ] > > sorted_actions = sortBy (flip cmp_state) (actions ++ gotos) > cmp_state (_,_,_,width1,tally1,_) (_,_,_,width2,tally2,_) > | width1 < width2 = LT > | width1 == width2 = compare tally1 tally2 > | otherwise = GT > data ActionOrGoto = ActionEntry | GotoEntry > type TableEntry = ( ActionOrGoto > , Int {-stateno-} > , Int {-default-} > , Int {-width-} > , Int {-tally-} > , [(Int,Int)] > ) > genTables > :: Int -- number of actions > -> Int -- maximum token no. 
> -> (Int, Int) -- token names bounds > -> [TableEntry] -- entries for the table > -> [(Int, [Int])] -- expected tokens lists > -> ST s ( UArray Int Int -- table > , UArray Int Int -- check > , UArray Int Int -- action offsets > , UArray Int Int -- goto offsets > , UArray Int Int -- expected tokens list > , Int -- lowest offset in table > , Int -- highest offset in table > ) > > genTables n_actions max_token token_names_bound entries explist = do > > table <- newArray (0, mAX_TABLE_SIZE) 0 > check <- newArray (0, mAX_TABLE_SIZE) (-1) > act_offs <- newArray (0, n_actions) 0 > goto_offs <- newArray (0, n_actions) 0 > off_arr <- newArray (-max_token, mAX_TABLE_SIZE) 0 > exp_array <- newArray (0, (n_actions * n_token_names + 15) `div` 16) 0 > > (min_off,max_off) <- genTables' table check act_offs goto_offs off_arr exp_array entries > explist max_token n_token_names > > table' <- freeze table > check' <- freeze check > act_offs' <- freeze act_offs > goto_offs' <- freeze goto_offs > exp_array' <- freeze exp_array > return (table',check',act_offs',goto_offs',exp_array',min_off,max_off+1) > where > n_states = n_actions - 1 > mAX_TABLE_SIZE = n_states * (max_token + 1) > (first_token, last') = token_names_bound > n_token_names = last' - first_token + 1 > genTables' > :: STUArray s Int Int -- table > -> STUArray s Int Int -- check > -> STUArray s Int Int -- action offsets > -> STUArray s Int Int -- goto offsets > -> STUArray s Int Int -- offset array > -> STUArray s Int Int -- expected token list > -> [TableEntry] -- entries for the table > -> [(Int, [Int])] -- expected tokens lists > -> Int -- maximum token no. > -> Int -- number of token names > -> ST s (Int,Int) -- lowest and highest offsets in table > > genTables' table check act_offs goto_offs off_arr exp_array entries > explist max_token n_token_names > = fill_exp_array >> fit_all entries 0 0 1 > where > > fit_all [] min_off max_off _ = return (min_off, max_off) > fit_all (s:ss) min_off max_off fst_zero = do > (off, new_min_off, new_max_off, new_fst_zero) <- fit s min_off max_off fst_zero > ss' <- same_states s ss off > writeArray off_arr off 1 > fit_all ss' new_min_off new_max_off new_fst_zero > > fill_exp_array = > forM_ explist $ \(state, tokens) -> > forM_ tokens $ \token -> do > let bit_nr = state * n_token_names + token > let word_nr = bit_nr `div` 16 > let word_offset = bit_nr `mod` 16 > x <- readArray exp_array word_nr > writeArray exp_array word_nr (setBit x word_offset) > > -- try to merge identical states. We only try the next state(s) > -- in the list, but the list is kind-of sorted so we shouldn't > -- miss too many. > same_states _ [] _ = return [] > same_states s@(_,_,_,_,_,acts) ss@((e,no,_,_,_,acts'):ss') off > | acts == acts' = do writeArray (which_off e) no off > same_states s ss' off > | otherwise = return ss > > which_off ActionEntry = act_offs > which_off GotoEntry = goto_offs > > -- fit a vector into the table. Return the offset of the vector, > -- the maximum offset used in the table, and the offset of the first > -- entry in the table (used to speed up the lookups a bit). > fit (_,_,_,_,_,[]) min_off max_off fst_zero = return (0,min_off,max_off,fst_zero) > > fit (act_or_goto, state_no, _deflt, _, _, state@((t,_):_)) > min_off max_off fst_zero = do > -- start at offset 1 in the table: all the empty states > -- (states with just a default reduction) are mapped to > -- offset zero. 
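> -- Illustration: the search starts at fst_zero - t, i.e. the offset that > -- would drop this state's first real action (on token t) into the first > -- slot known to be free; findFreeOffset then probes rightwards from > -- there, one offset at a time. 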
> off <- findFreeOffset (-t+fst_zero) check off_arr state > let new_min_off | furthest_left < min_off = furthest_left > | otherwise = min_off > new_max_off | furthest_right > max_off = furthest_right > | otherwise = max_off > furthest_left = off > furthest_right = off + max_token > > -- trace ("fit: state " ++ show state_no ++ ", off " ++ show off ++ ", elems " ++ show state) $ do > > writeArray (which_off act_or_goto) state_no off > addState off table check state > new_fst_zero <- findFstFreeSlot check fst_zero > return (off, new_min_off, new_max_off, new_fst_zero) When looking for a free offset in the table, we use the 'check' table rather than the main table. The check table starts off with (-1) in every slot, because that's the only thing that doesn't overlap with any tokens (non-terminals start at 0, terminals start at 1). Because we use 0 for LR'MustFail as well as LR'Fail, we can't check for free offsets in the main table because we can't tell whether a slot is free or not. > -- Find a valid offset in the table for this state. > findFreeOffset :: Int -> STUArray s Int Int -> STUArray s Int Int -> [(Int, Int)] -> ST s Int > findFreeOffset off table off_arr state = do > -- offset 0 isn't allowed > if off == 0 then try_next else do > > -- don't use an offset we've used before > b <- readArray off_arr off > if b /= 0 then try_next else do > > -- check whether the actions for this state fit in the table > ok <- fits off state table > if not ok then try_next else return off > where > try_next = findFreeOffset (off+1) table off_arr state > fits :: Int -> [(Int,Int)] -> STUArray s Int Int -> ST s Bool > fits _ [] _ = return True > fits off ((t,_):rest) table = do > i <- readArray table (off+t) > if i /= -1 then return False > else fits off rest table > addState :: Int -> STUArray s Int Int -> STUArray s Int Int -> [(Int, Int)] > -> ST s () > addState _ _ _ [] = return () > addState off table check ((t,val):state) = do > writeArray table (off+t) val > writeArray check (off+t) t > addState off table check state > notFail :: (Int, LRAction) -> Bool > notFail (_, LR'Fail) = False > notFail _ = True > findFstFreeSlot :: STUArray s Int Int -> Int -> ST s Int > findFstFreeSlot table n = do > i <- readArray table n > if i == -1 then return n > else findFstFreeSlot table (n+1) ----------------------------------------------------------------------------- -- Misc. > comment :: String > comment = > "-- parser produced by Happy Version " ++ showVersion version ++ "\n\n" > mkAbsSynCon :: Array Int Int -> Int -> String -> String > mkAbsSynCon fx t = str "HappyAbsSyn" . shows (fx ! t) > mkHappyVar, mkReduceFun, mkDummyVar :: Int -> String -> String > mkHappyVar n = str "happy_var_" . shows n > mkReduceFun n = str "happyReduce_" . shows n > mkDummyVar n = str "happy_x_" . shows n > mkHappyWrap :: Int -> String -> String > mkHappyWrap n = str "HappyWrap" . shows n > mkHappyWrapCon :: Maybe a -> Int -> (String -> String) -> String -> String > mkHappyWrapCon Nothing _ s = s > mkHappyWrapCon (Just _) n s = brack' (mkHappyWrap n . strspace . s) > mkHappyIn, mkHappyOut :: Int -> String -> String > mkHappyIn n = str "happyIn" . shows n > mkHappyOut n = str "happyOut" . shows n > typeParam, typeParamOut :: Int -> Maybe String -> ShowS > typeParam n Nothing = char 't' . shows n > typeParam _ (Just ty) = brack ty > typeParamOut n Nothing = char 't' . 
shows n > typeParamOut n (Just _) = mkHappyWrap n > specReduceFun :: Int -> Bool > specReduceFun = (<= 3) ----------------------------------------------------------------------------- -- Convert an integer to a 16-bit number encoded in \xNN\xNN format suitable -- for placing in a string. > hexChars :: [Int] -> String > hexChars = concatMap hexChar > hexChar :: Int -> String > hexChar i | i < 0 = hexChar (i + 65536) > hexChar i = toHex (i `mod` 256) ++ toHex (i `div` 256) > toHex :: Int -> String > toHex i = ['\\','x', hexDig (i `div` 16), hexDig (i `mod` 16)] > hexDig :: Int -> Char > hexDig i | i <= 9 = chr (i + ord '0') > | otherwise = chr (i - 10 + ord 'a') This guards against integers that are so large as to (when converted using 'hexChar') wrap around the maximum value of 16-bit numbers and then end up larger than an expected minimum value. > checkedHexChars :: Int -> [Int] -> String > checkedHexChars minValue = concatMap hexChar' > where hexChar' i | checkHexChar minValue i = hexChar i > | otherwise = error "grammar does not fit in 16-bit representation that is used with '--ghc'" > checkHexChar :: Int -> Int -> Bool > checkHexChar minValue i = i <= 32767 || i - 65536 < minValue happy-1.20.1.1/src/ProduceGLRCode.lhs0000644000000000000000000006554007346545000015316 0ustar0000000000000000Module for producing GLR (Tomita) parsing code. This module is designed as an extension to the Haskell parser generator Happy. (c) University of Durham, Ben Medlock 2001 -- initial code, for structure parsing (c) University of Durham, Paul Callaghan 2004 -- extension to semantic rules, and various optimisations %----------------------------------------------------------------------------- > module ProduceGLRCode ( produceGLRParser > , DecodeOption(..) > , FilterOption(..) > , GhcExts(..) > , Options > ) where > import Paths_happy ( version ) > import GenUtils ( mapDollarDollar ) > import GenUtils ( str, char, nl, brack, brack', interleave, maybestr ) > import Grammar > import Data.Array > import Data.Char ( isSpace, isAlphaNum ) > import Data.List ( nub, (\\), sort, find, tails ) > import Data.Version ( showVersion ) %----------------------------------------------------------------------------- File and Function Names > base_template, lib_template :: String -> String > base_template td = td ++ "/GLR_Base" -- NB Happy uses / too > lib_template td = td ++ "/GLR_Lib" -- Windows accepts this? --- prefix for production names, to avoid name clashes > prefix :: String > prefix = "G_" %----------------------------------------------------------------------------- This type represents choice of decoding style for the result > data DecodeOption > = TreeDecode > | LabelDecode --- This type represents whether filtering done or not > data FilterOption > = NoFiltering > | UseFiltering --- This type represents whether GHC extensions are used or not - extra values are imports and ghc options reqd > data GhcExts > = NoGhcExts > | UseGhcExts String String -- imports and options --- this is where the exts matter > show_st :: GhcExts -> {-State-}Int -> String > show_st UseGhcExts{} = (++"#") . 
show > show_st NoGhcExts = show --- > type DebugMode = Bool > type Options = (DecodeOption, FilterOption, GhcExts) %----------------------------------------------------------------------------- Main exported function > produceGLRParser > :: FilePath -- Output file name > -> String -- Templates directory > -> ActionTable -- LR tables > -> GotoTable -- LR tables > -> Maybe String -- Module header > -> Maybe String -- User-defined stuff (token DT, lexer etc.) > -> (DebugMode,Options) -- selecting code-gen style > -> Grammar -- Happy Grammar > -> IO () > produceGLRParser outfilename template_dir action goto header trailer options g > = do > let basename = takeWhile (/='.') outfilename > let tbls = (action,goto) > (parseName,_,_,_) <- case starts g of > [s] -> return s > s:_ -> do > putStrLn "GLR-Happy doesn't support multiple start points (yet)" > putStrLn "Defaulting to first start point." > return s > [] -> error "produceGLRParser: []" > mkFiles basename tbls parseName template_dir header trailer options g %----------------------------------------------------------------------------- "mkFiles" generates the files containing the Tomita parsing code. It produces two files - one for the data (small template), and one for the driver and data strs (large template). > mkFiles :: FilePath -- Root of Output file name > -> (ActionTable > ,GotoTable) -- LR tables > -> String -- Start parse function name > -> String -- Templates directory > -> Maybe String -- Module header > -> Maybe String -- User-defined stuff (token DT, lexer etc.) > -> (DebugMode,Options) -- selecting code-gen style > -> Grammar -- Happy Grammar > -> IO () > > mkFiles basename tables start templdir header trailer (debug,options) g > = do > let debug_ext = if debug then "-debug" else "" > let (ext,imps,opts) = case ghcExts_opt of > UseGhcExts is os -> ("-ghc", is, os) > _ -> ("", "", "") > base <- readFile (base_template templdir) > --writeFile (basename ++ ".si") (unlines $ map show sem_info) > writeFile (basename ++ "Data.hs") (content base opts $ "") > lib <- readFile (lib_template templdir ++ ext ++ debug_ext) > writeFile (basename ++ ".hs") (lib_content imps opts lib) > where > (_,_,ghcExts_opt) = options Extract the module name from the given module declaration, if it exists. > m_mod_decl = find isModKW . zip [0..] . tails . (' ':) =<< header > isModKW (_, c0:'m':'o':'d':'u':'l':'e':c1:_) = not (validIDChar c0 || validIDChar c1) > isModKW _ = False > validIDChar c = isAlphaNum c || c `elem` "_'" > validModNameChar c = validIDChar c || c == '.' > data_mod = mod_name ++ "Data" > mod_name = case m_mod_decl of > Just (_, md) -> takeWhile validModNameChar (dropWhile (not . validModNameChar) (drop 8 md)) Or use a default based upon the filename (original behaviour). > Nothing -> reverse . takeWhile (`notElem` "\\/") $ reverse basename Remove the module declaration from the header so that the remainder of the header can be used in the generated code. > header_sans_mod = flip (maybe header) m_mod_decl $ \ (mi, _) -> do > hdr <- header Extract the string that comes before the module declaration... > let (before, mod_decl) = splitAt mi hdr > let isWhereKW (c0:'w':'h':'e':'r':'e':c1:_) = not (validIDChar c0 || validIDChar c1) > isWhereKW _ = False > let where_after = dropWhile (not . isWhereKW) . tails . (++ "\n") $ mod_decl > let after = drop 6 . concat . take 1 $ where_after ...and combine it with the string that comes after the 'where' keyword. 
> return $ before ++ "\n" ++ after > (sem_def, sem_info) = mkGSemType options g > table_text = mkTbls tables sem_info (ghcExts_opt) g > header_parts = fmap (span (\x -> take 3 (dropWhile isSpace x) == "{-#") > . lines) > header_sans_mod > -- Split off initial options, if they are present > -- Assume these options ONLY related to code which is in > -- parser tail or in sem. rules > content base_defs opts > = str ("{-# OPTIONS " ++ opts ++ " #-}") .nl > . str (unlines $ maybe [] fst header_parts) .nl > . nl > . str (comment "data") .nl .nl > . str ("module " ++ data_mod ++ " where") .nl > . nl > . maybestr (fmap (unlines.snd) header_parts) .nl > . nl > . str base_defs .nl > . nl > . let count_nls = length . filter (=='\n') > pre_trailer = maybe 0 count_nls header_sans_mod -- check fmt below > + count_nls base_defs > + 10 -- for the other stuff > post_trailer = pre_trailer + maybe 0 count_nls trailer + 4 > in > str ("{-# LINE " ++ show pre_trailer ++ " " > ++ show (basename ++ "Data.hs") ++ "#-}") > -- This should show a location in basename.y -- but Happy > -- doesn't pass this info through. But we still avoid being > -- told a location in GLR_Base! > . nl > . nl > . maybestr trailer > .nl > .nl > . str ("{-# LINE " ++ show post_trailer ++ " " > ++ show (basename ++ "Data.hs") ++ "#-}") > . nl > . nl > . mkGSymbols g .nl > . nl > . sem_def .nl > . nl > . mkSemObjects options (monad_sub g) sem_info .nl > . nl > . mkDecodeUtils options (monad_sub g) sem_info .nl > . nl > . user_def_token_code (token_type g) .nl > . nl > . table_text > lib_content imps opts lib_text > = let (pre,_drop_me : post) = break (== "fakeimport DATA") $ lines lib_text > in > unlines [ "{-# OPTIONS " ++ opts ++ " #-}\n" > , comment "driver" ++ "\n" > , "module " ++ mod_name ++ "(" > , case lexer g of > Nothing -> "" > Just (lf,_) -> " " ++ lf ++ "," > , " " ++ start > , "" > , unlines pre > , imps > , "import " ++ data_mod > , start ++ " = glr_parse " > , "use_filtering = " ++ show use_filtering > , "top_symbol = " ++ prefix ++ start_prod > , unlines post > ] > start_prod = token_names g ! (let (_,_,i,_) = head $ starts g in i) > use_filtering = case options of (_, UseFiltering,_) -> True > _ -> False > comment :: String -> String > comment which > = "-- parser (" ++ which ++ ") produced by Happy (GLR) Version " ++ > showVersion version > user_def_token_code :: String -> String -> String > user_def_token_code tokenType > = str "type UserDefTok = " . str tokenType . nl > . str "instance TreeDecode " . brack tokenType . str " where" . nl > . str " decode_b f (Branch (SemTok t) []) = [happy_return t]" . nl > . str "instance LabelDecode " . brack tokenType . str " where" . nl > . str " unpack (SemTok t) = t" . nl %----------------------------------------------------------------------------- Formats the tables as code. > mkTbls :: (ActionTable -- Action table from Happy > ,GotoTable) -- Goto table from Happy > -> SemInfo -- info about production mapping > -> GhcExts -- Use unboxed values? > -> Grammar -- Happy Grammar > -> ShowS > > mkTbls (action,goto) sem_info exts g > = let gsMap = mkGSymMap g > semfn_map = mk_semfn_map sem_info > in > writeActionTbl action gsMap (semfn_map !) exts g > . writeGotoTbl goto gsMap exts %----------------------------------------------------------------------------- Create a mapping of Happy grammar symbol integers to the data representation that will be used for them in the GLR parser. 
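For example (symbol numbers invented for illustration): for a grammar with a non-terminal exp and a terminal declared as plus { '+' }, the mapping contains entries along the lines of

    [ (12, "G_exp"), (20, "HappyTok ('+')"), (25, "HappyEOF") ]

i.e. non-terminals get the G_ prefix, terminals are wrapped in HappyTok using their declared pattern, and the internal EOF terminal becomes HappyEOF.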
> mkGSymMap :: Grammar -> [(Name,String)] > mkGSymMap g > = [ -- (errorTok, prefix ++ "Error") > ] > ++ [ (i, prefix ++ (token_names g) ! i) > | i <- user_non_terminals g ] -- Non-terminals > ++ [ (i, "HappyTok (" ++ mkMatch tok ++ ")") > | (i,tok) <- token_specs g ] -- Tokens (terminals) > ++ [(eof_term g,"HappyEOF")] -- EOF symbol (internal terminal) > where > mkMatch tok = case mapDollarDollar tok of > Nothing -> tok > Just fn -> fn "_" > toGSym :: [(Int, String)] -> Int -> String > toGSym gsMap i > = case lookup i gsMap of > Nothing -> error $ "No representation for symbol " ++ show i > Just g -> g %----------------------------------------------------------------------------- Take the ActionTable from Happy and turn it into a String representing a function that can be included as the action table in the GLR parser. It also shares identical reduction values as CAFs > writeActionTbl > :: ActionTable -> [(Int,String)] -> (Name->String) > -> GhcExts -> Grammar -> ShowS > writeActionTbl acTbl gsMap semfn_map exts g > = interleave "\n" > $ map str > $ mkLines ++ [errorLine] ++ mkReductions > where > name = "action" > mkLines = concatMap (mkState) (assocs acTbl) > errorLine = name ++ " _ _ = Error" > mkState (i,arr) > = filter (/="") $ map (mkLine i) (assocs arr) > > mkLine state (symInt,action) > | symInt == errorTok -- skip error productions > = "" -- NB see ProduceCode's handling of these > | otherwise > = case action of > LR'Fail -> "" > LR'MustFail -> "" > _ -> unwords [ startLine , mkAct action ] > where > startLine > = unwords [ name , show_st exts state, "(" , getTok , ") =" ] > getTok = let tok = toGSym gsMap symInt > in case mapDollarDollar tok of > Nothing -> tok > Just f -> f "_" > mkAct act > = case act of > LR'Shift newSt _ -> "Shift " ++ show newSt ++ " []" > LR'Reduce r _ -> "Reduce " ++ "[" ++ mkRed r ++ "]" > LR'Accept -> "Accept" > LR'Multiple rs (LR'Shift st _) > -> "Shift " ++ show st ++ " " ++ mkReds rs > LR'Multiple rs r@(LR'Reduce{}) > -> "Reduce " ++ mkReds (r:rs) > _ -> error "writeActionTbl/mkAct: Unhandled case" > where > mkReds rs = "[" ++ tail (concat [ "," ++ mkRed r | LR'Reduce r _ <- rs ]) ++ "]" > mkRed r = "red_" ++ show r > mkReductions = [ mkRedDefn p > | p@(_, Production n _ _ _) <- zip [0..] $ productions g > , n `notElem` start_productions g ] > mkRedDefn (r, Production lhs_id rhs_ids (_code,_dollar_vars) _) > = mkRed r ++ " = ("++ lhs ++ "," ++ show arity ++ " :: Int," ++ sem ++")" > where > lhs = toGSym gsMap $ lhs_id > arity = length rhs_ids > sem = semfn_map r %----------------------------------------------------------------------------- Do the same with the Happy goto table. > writeGotoTbl :: GotoTable -> [(Int,String)] -> GhcExts -> ShowS > writeGotoTbl goTbl gsMap exts > = interleave "\n" (map str $ filter (not.null) mkLines) > . str errorLine . nl > where > name = "goto" > errorLine = "goto _ _ = " ++ show_st exts (negate 1) > mkLines = map mkState (assocs goTbl) > > mkState (i,arr) > = unlines $ filter (/="") $ map (mkLine i) (assocs arr) > > mkLine state (ntInt,goto) > = case goto of > NoGoto -> "" > Goto st -> unwords [ startLine , show_st exts st ] > where > startLine > = unwords [ name , show_st exts state, getGSym , "=" ] > getGSym = toGSym gsMap ntInt %----------------------------------------------------------------------------- Create the 'GSymbol' ADT for the symbols in the grammar > mkGSymbols :: Grammar -> ShowS > mkGSymbols g > = str dec > . str eof > . str tok > . interleave "\n" [ str " | " . str prefix . str sym . 
str " " > | sym <- syms ] > . str der > -- ++ eq_inst > -- ++ ord_inst > where > dec = "data GSymbol" > eof = " = HappyEOF" > tok = " | HappyTok {-!Int-} (" ++ token_type g ++ ")" > der = " deriving (Show,Eq,Ord)" > syms = [ token_names g ! i | i <- user_non_terminals g ] NOTES: Was considering avoiding use of Eq/Ord over tokens, but this then means hand-coding the Eq/Ord classes since we're over-riding the usual order except in one case. maybe possible to form a union and do some juggling, but this isn't that easy, eg input type of "action". plus, issues about how token info gets into TreeDecode sem values - which might be tricky to arrange. <> eq_inst = "instance Eq GSymbol where" <> : " HappyTok i _ == HappyTok j _ = i == j" <> : [ " i == j = fromEnum i == fromEnum j" %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Semantic actions on rules. These are stored in a union type "GSem", and the semantic values are held on the branches created at the appropriate reduction. "GSem" type has one constructor per distinct type of semantic action and pattern of child usage. %----------------------------------------------------------------------------- Creating a type for storing semantic rules - also collects information on code structure and constructor names, for use in later stages. > type SemInfo > = [(String, String, [Int], [((Int,Int), ([(Int,String)],String), [Int])])] > mkGSemType :: Options -> Grammar -> (ShowS, SemInfo) > mkGSemType (TreeDecode,_,_) g > = (def, map snd syms) > where > mtype s = case monad_sub g of > Nothing -> s > Just (ty,_,_) -> ty ++ ' ' : brack s "" > def = str "data GSem" . nl > . str " = NoSem" . nl > . str (" | SemTok (" ++ token_type g ++ ")") . nl > . interleave "\n" [ str " | " . str sym . str " " > | sym <- map fst syms ] > . str "instance Show GSem where" . nl > . interleave "\n" [ str " show " . str c . str "{} = " . str (show c) > | (_,c,_,_) <- map snd syms ] > syms = [ (c_name ++ " (" ++ ty ++ ")", (rty, c_name, mask, prod_info)) > | (i,this@(mask,args,rty)) <- zip [0..] (nub $ map fst info) > -- find unique types (plus mask) > , let c_name = "Sem_" ++ show i > , let mrty = mtype rty > , let ty = foldr (\l r -> l ++ " -> " ++ r) mrty args > , let code_info = [ j_code | (that, j_code) <- info, this == that ] > , let prod_info = [ ((i,k), code, js) > | (k,code) <- zip [0..] (nub $ map snd code_info) > , let js = [ j | (j,code2) <- code_info > , code == code2 ] > ] > -- collect specific info about productions with this type > ] > info = [ ((var_mask, args, i_ty), (j,(ts_pats,code))) > | i <- user_non_terminals g > , let i_ty = typeOf i > , j <- lookupProdsOfName g i -- all prod numbers > , let Production _ ts (raw_code,dollar_vars) _ = lookupProdNo g j > , let var_mask = map (\x -> x - 1) vars_used > where vars_used = sort $ nub dollar_vars > , let args = [ typeOf $ ts !! v | v <- var_mask ] > , let code | all isSpace raw_code = "()" > | otherwise = raw_code > , let ts_pats = [ (k+1,c) | k <- var_mask > , (t,c) <- token_specs g > , ts !! k == t ] > ] > typeOf n | n `elem` terminals g = token_type g > | otherwise = case types g ! n of > Nothing -> "()" -- default > Just t -> t > -- NB expects that such labels are Showable > mkGSemType (LabelDecode,_,_) g > = (def, map snd syms) > where > def = str "data GSem" . nl > . str " = NoSem" . nl > . str (" | SemTok (" ++ token_type g ++ ")") > . interleave "\n" [ str " | " . str sym . str " " > | sym <- map fst syms ] > . str " deriving (Show)" . 
nl > syms = [ (c_name ++ " (" ++ ty ++ ")", (ty, c_name, mask, prod_info)) > | (i,this@(mask,ty)) <- zip [0..] (nub $ map fst info) > -- find unique types > , let c_name = "Sem_" ++ show i > , let code_info = [ j_code | (that, j_code) <- info, this == that ] > , let prod_info = [ ((i,k), code, js) > | (k,code) <- zip [0..] (nub $ map snd code_info) > , let js = [ j | (j,code2) <- code_info > , code == code2 ] > ] > -- collect specific info about productions with this type > ] > info = [ ((var_mask,i_ty), (j,(ts_pats,code))) > | i <- user_non_terminals g > , let i_ty = typeOf i > , j <- lookupProdsOfName g i -- all prod numbers > , let Production _ ts (code,dollar_vars) _ = lookupProdNo g j > , let var_mask = map (\x -> x - 1) vars_used > where vars_used = sort $ nub dollar_vars > , let ts_pats = [ (k+1,c) | k <- var_mask > , (t,c) <- token_specs g > , ts !! k == t ] > ] > typeOf n = case types g ! n of > Nothing -> "()" -- default > Just t -> t %--------------------------------------- Creates the appropriate semantic values. - for label-decode, these are the code, but abstracted over the child indices - for tree-decode, these are the code abstracted over the children's values > mkSemObjects :: Options -> MonadInfo -> SemInfo -> ShowS > mkSemObjects (LabelDecode,filter_opt,_) _ sem_info > = interleave "\n" > $ [ str (mkSemFn_Name ij) > . str (" ns@(" ++ pat ++ "happy_rest) = ") > . str (" Branch (" ++ c_name ++ " (" ++ code ++ ")) ") > . str (nodes filter_opt) > | (_ty, c_name, mask, prod_info) <- sem_info > , (ij, (pats,code), _ps) <- prod_info > , let pat | null mask = "" > | otherwise = concatMap (\v -> mk_tok_binder pats (v+1) ++ ":") > [0..maximum mask] > , let nodes NoFiltering = "ns" > nodes UseFiltering = "(" ++ foldr (\l -> mkHappyVar (l+1) . showChar ':') "[])" mask > ] > where > mk_tok_binder pats v > = mk_binder (\s -> "(_,_,HappyTok (" ++ s ++ "))") pats v "" > mkSemObjects (TreeDecode,filter_opt,_) monad_info sem_info > = interleave "\n" > $ [ str (mkSemFn_Name ij) > . str (" ns@(" ++ pat ++ "happy_rest) = ") > . str (" Branch (" ++ c_name ++ " (" ++ sem ++ ")) ") > . str (nodes filter_opt) > | (_ty, c_name, mask, prod_info) <- sem_info > , (ij, (pats,code), _) <- prod_info > , let indent c = init $ unlines $ map (replicate 4 ' '++) $ lines c > , let mcode = case monad_info of > Nothing -> code > Just (_,_,rtn) -> case code of > '%':code' -> "\n" ++ indent code' > _ -> rtn ++ " (" ++ code ++ ")" > , let sem = foldr (\v t -> mk_lambda pats (v + 1) "" ++ t) mcode mask > , let pat | null mask = "" > | otherwise = concatMap (\v -> mkHappyVar (v+1) ":") > [0..maximum mask] > , let nodes NoFiltering = "ns" > nodes UseFiltering = "(" ++ foldr (\l -> mkHappyVar (l+1) . showChar ':') "[])" mask > ] > mk_lambda :: [(Int, String)] -> Int -> String -> String > mk_lambda pats v > = (\s -> "\\" ++ s ++ " -> ") . mk_binder id pats v > mk_binder :: (String -> String) -> [(Int, String)] -> Int -> String -> String > mk_binder wrap pats v > = case lookup v pats of > Nothing -> mkHappyVar v > Just p -> case mapDollarDollar p of > Nothing -> wrap . mkHappyVar v . showChar '@' . brack p > Just fn -> wrap . brack' (fn . 
mkHappyVar v) --- standardise the naming scheme > mkSemFn_Name :: (Int, Int) -> String > mkSemFn_Name (i,j) = "semfn_" ++ show i ++ "_" ++ show j --- maps production name to the underlying (possibly shared) semantic function > mk_semfn_map :: SemInfo -> Array Name String > mk_semfn_map sem_info > = array (0,maximum $ map fst prod_map) prod_map > where > prod_map = [ (p, mkSemFn_Name ij) > | (_,_,_,pi') <- sem_info, (ij,_,ps) <- pi', p <- ps ] %----------------------------------------------------------------------------- Create default decoding functions Idea is that sem rules are stored as functions in the AbsSyn names, and only unpacked when needed. Using classes here to manage the unpacking. > mkDecodeUtils :: Options -> MonadInfo -> SemInfo -> ShowS > mkDecodeUtils (TreeDecode,filter_opt,_) monad_info seminfo > = interleave "\n" > $ map str (monad_defs monad_info) > ++ map mk_inst ty_cs > where > ty_cs = [ (ty, [ (c_name, mask) > | (ty2, c_name, mask, _j_vs) <- seminfo > , ty2 == ty > ]) > | ty <- nub [ ty | (ty,_,_,_) <- seminfo ] > ] -- group by same type > mk_inst (ty, cs_vs) > = str ("instance TreeDecode (" ++ ty ++ ") where ") . nl > . interleave "\n" > [ str " " > . str ("decode_b f (Branch (" ++ c_name ++ " s)") > . str (" (" ++ var_pat ++ ")) = ") > . cross_prod monad_info "s" (nodes filter_opt) > | (c_name, vs) <- cs_vs > , let vars = [ "b_" ++ show n | n <- var_range filter_opt vs ] > , let var_pat = foldr (\l r -> l ++ ":" ++ r) "_" vars > , let nodes NoFiltering = [ vars !! n | n <- vs ] > nodes UseFiltering = vars > ] > var_range _ [] = [] > var_range NoFiltering vs = [0 .. maximum vs ] > var_range UseFiltering vs = [0 .. length vs - 1] > cross_prod Nothing s_var nodes > = cross_prod_ (char '[' . str s_var . char ']') > (map str nodes) > cross_prod (Just (_,_,rtn)) s_var nodes > = str "map happy_join $ " > . cross_prod_ (char '[' . str rtn . char ' ' . str s_var . char ']') > (map str nodes) > cross_prod_ = foldl (\s a -> brack' > $ str "cross_fn" > . char ' ' . s > . str " $ decode f " > . a) > mkDecodeUtils (LabelDecode,_,_) monad_info seminfo > = interleave "\n" > $ map str > $ monad_defs monad_info ++ concatMap (mk_inst) ty_cs > where > ty_cs = [ (ty, [ (c_name, mask) > | (ty2, c_name, mask, _) <- seminfo > , ty2 == ty > ]) > | ty <- nub [ ty | (ty,_,_,_) <- seminfo ] > ] -- group by same type > mk_inst (ty, cns) > = ("instance LabelDecode (" ++ ty ++ ") where ") > : [ " unpack (" ++ c_name ++ " s) = s" > | (c_name, _mask) <- cns ] --- This selects the info used for monadic parser generation > type MonadInfo = Maybe (String,String,String) > monad_sub :: Grammar -> MonadInfo > monad_sub g > = case monad g of > (True, _, ty,bd,ret) -> Just (ty,bd,ret) > _ -> Nothing > -- TMP: only use monad info if it was user-declared, and ignore ctxt > -- TMP: otherwise default to non-monadic code > -- TMP: (NB not sure of consequences of monads-everywhere yet) --- form the various monad-related defs. > monad_defs :: MonadInfo -> [String] > monad_defs Nothing > = [ "type Decode_Result a = a" > , "happy_ap = ($)" > , "happy_return = id"] > monad_defs (Just (ty,tn,rtn)) > = [ "happy_join x = (" ++ tn ++ ") x id" > , "happy_ap f a = (" ++ tn ++ ") f (\\f -> (" ++ tn ++ ") a (\\a -> " ++ rtn ++ "(f a)))" > , "type Decode_Result a = " ++ brack ty " a" > , "happy_return = " ++ rtn ++ " :: a -> Decode_Result a" > ] %----------------------------------------------------------------------------- Util Functions --- remove Happy-generated start symbols. 
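Each %name directive makes Happy add a synthetic start production wrapping
the requested entry non-terminal, and those start symbols live in
non_terminals alongside the user's own. The GLR backend only wants to
generate GSymbol constructors and semantic machinery for the grammar as
written, so we subtract the start productions again. As a sketch, a
hypothetical grammar with two entry points

    %name parseExpr Expr
    %name parseTerm Term

gets two generated start symbols, one per entry point; user_non_terminals
below removes both, leaving just Expr, Term and the other user-written
non-terminals.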
> user_non_terminals :: Grammar -> [Name] > user_non_terminals g > = non_terminals g \\ start_productions g > start_productions :: Grammar -> [Name] > start_productions g = [ s | (_,s,_,_) <- starts g ] --- > mkHappyVar :: Int -> String -> String > mkHappyVar n = str "happy_var_" . shows n happy-1.20.1.1/src/Target.lhs0000644000000000000000000000061607346545000013774 0ustar0000000000000000----------------------------------------------------------------------------- The target data type. (c) 1993-2001 Andy Gill, Simon Marlow ----------------------------------------------------------------------------- > module Target (Target(..)) where > data Target > = TargetHaskell -- functions and things > | TargetArrayBased -- arrays > deriving Eq happy-1.20.1.1/test.hs0000644000000000000000000000016607346545000012562 0ustar0000000000000000import System.Process (system) import System.Exit (exitWith) main = system "make -k -C tests clean all" >>= exitWith happy-1.20.1.1/tests/0000755000000000000000000000000007346545000012406 5ustar0000000000000000happy-1.20.1.1/tests/AttrGrammar001.y0000644000000000000000000000242707346545000015247 0ustar0000000000000000{ import Control.Monad (unless) } %tokentype { Char } %token a { 'a' } %token b { 'b' } %token c { 'c' } %attributetype { Attrs a } %attribute value { a } %attribute len { Int } %name parse abcstring %monad { Maybe } %% abcstring : alist blist clist { $$ = $1 ++ $2 ++ $3 ; $2.len = $1.len ; $3.len = $1.len } alist : a alist { $$ = $1 : $> ; $$.len = $>.len + 1 } | { $$ = []; $$.len = 0 } blist : b blist { $$ = $1 : $> ; $>.len = $$.len - 1 } | { $$ = [] ; where failUnless ($$.len == 0) "blist wrong length" } clist : c clist { $$ = $1 : $> ; $>.len = $$.len - 1 } | { $$ = [] ; where failUnless ($$.len == 0) "clist wrong length" } { happyError = error "parse error" failUnless b msg = unless b (fail msg) main = case parse "" of { Just _ -> case parse "abc" of { Just _ -> case parse "aaaabbbbcccc" of { Just _ -> case parse "abbcc" of { Nothing -> case parse "aabcc" of { Nothing -> case parse "aabbc" of { Nothing -> putStrLn "Test works"; _ -> quit } ; _ -> quit }; _ -> quit }; _ -> quit } ; _ -> quit }; _ -> quit } quit = putStrLn "Test failed" } happy-1.20.1.1/tests/AttrGrammar002.y0000644000000000000000000000227107346545000015245 0ustar0000000000000000 %tokentype { Char } %token minus { '-' } %token plus { '+' } %token one { '1' } %token zero { '0' } %attributetype { Attrs } %attribute value { Integer } %attribute pos { Int } %name parse start %monad { Maybe } %% start : num { $$ = $1 } num : bits { $$ = $1 ; $1.pos = 0 } | plus bits { $$ = $2 ; $2.pos = 0 } | minus bits { $$ = negate $2; $2.pos = 0 } bits : bit { $$ = $1 ; $1.pos = $$.pos } | bits bit { $$ = $1 + $2 ; $1.pos = $$.pos + 1 ; $2.pos = $$.pos } bit : zero { $$ = 0 } | one { $$ = 2^($$.pos) } { happyError msg = fail $ "parse error: "++msg main = case parse "" of { Nothing -> case parse "abc" of { Nothing -> case parse "0" of { Just 0 -> case parse "1" of { Just 1 -> case parse "101" of { Just 5 -> case parse "111" of { Just 7 -> case parse "10001" of { Just 17 -> putStrLn "Test worked"; _ -> quit }; _ -> quit }; _ -> quit }; _ -> quit }; _ -> quit }; _ -> quit }; _ -> quit } quit = putStrLn "Test Failed" } happy-1.20.1.1/tests/Makefile0000644000000000000000000000612607346545000014053 0ustar0000000000000000# NOTE: `cabal test` will take care to build the local `happy` # executable and place it into $PATH for us to pick up. 
# # If it doesn't look like the alex binary in $PATH comes from the # build tree, then we'll fall back to pointing to # ../dist/build/alex/alex to support running tests via "runghc # Setup.hs test". # HAPPY=$(shell which happy) ifeq "$(filter $(dir $(shell pwd))%,$(HAPPY))" "" HAPPY=../dist/build/happy/happy endif HC = ghc HC_OPTS=-Wall -Werror .PRECIOUS: %.n.hs %.g.hs %.o %.exe %.bin ifeq "$(TARGETPLATFORM)" "i386-unknown-mingw32" HS_PROG_EXT = .exe else HS_PROG_EXT = .bin endif TESTS = Test.ly TestMulti.ly TestPrecedence.ly bug001.ly \ monad001.y monad002.ly precedence001.ly precedence002.y \ bogus-token.y bug002.y Partial.ly \ issue91.y issue93.y issue94.y issue95.y \ AttrGrammar001.y AttrGrammar002.y \ test_rules.y monaderror.y monaderror-explist.y \ typeclass_monad001.y typeclass_monad002.ly typeclass_monad_lexer.y \ rank2.y shift01.y ERROR_TESTS = error001.y # NOTE: `cabal` will set the `happy_datadir` env-var accordingly before invoking the test-suite #TEST_HAPPY_OPTS = --strict --template=.. TEST_HAPPY_OPTS = --strict %.n.hs : %.ly $(HAPPY) $(TEST_HAPPY_OPTS) $< -o $@ %.a.hs : %.ly $(HAPPY) $(TEST_HAPPY_OPTS) -a $< -o $@ %.g.hs : %.ly $(HAPPY) $(TEST_HAPPY_OPTS) -g $< -o $@ %.gc.hs : %.ly $(HAPPY) $(TEST_HAPPY_OPTS) -gc $< -o $@ %.ag.hs : %.ly $(HAPPY) $(TEST_HAPPY_OPTS) -ag $< -o $@ %.agc.hs : %.ly $(HAPPY) $(TEST_HAPPY_OPTS) -agc $< -o $@ %.n.hs : %.y $(HAPPY) $(TEST_HAPPY_OPTS) $< -o $@ %.a.hs : %.y $(HAPPY) $(TEST_HAPPY_OPTS) -a $< -o $@ %.g.hs : %.y $(HAPPY) $(TEST_HAPPY_OPTS) -g $< -o $@ %.gc.hs : %.y $(HAPPY) $(TEST_HAPPY_OPTS) -gc $< -o $@ %.ag.hs : %.y $(HAPPY) $(TEST_HAPPY_OPTS) -ag $< -o $@ %.agc.hs : %.y $(HAPPY) $(TEST_HAPPY_OPTS) -agc $< -o $@ CLEAN_FILES += *.n.hs *.a.hs *.g.hs *.gc.hs *.ag.hs *.agc.hs *.info *.hi *.bin *.exe *.o *.run.stdout *.run.stderr $(SANDBOX_CONFIG) ALL_TEST_HS = $(shell echo $(TESTS) | sed -e 's/\([^\. ]*\)\.\(l\)\{0,1\}y/\1.n.hs \1.a.hs \1.g.hs \1.gc.hs \1.ag.hs \1.agc.hs/g') ALL_TESTS = $(patsubst %.hs, %.run, $(ALL_TEST_HS)) CHECK_ERROR_TESTS = $(patsubst %, check.%, $(ERROR_TESTS)) HC_OPTS += -fforce-recomp .PRECIOUS: %.hs %.o %.bin %.$(HS_PROG_EXT) %.run : %$(HS_PROG_EXT) @echo "--> Checking $<..." ./$< check.%.y : %.y @echo "--> Checking $<..." $(HAPPY) $(TEST_HAPPY_OPTS) $< 1>$*.run.stdout 2>$*.run.stderr || true @diff -u --ignore-all-space $*.stdout $*.run.stdout @diff -u --ignore-all-space $*.stderr $*.run.stderr %$(HS_PROG_EXT) : %.hs $(HC) $(HC_OPTS) $($*_LD_OPTS) $< -o $@ all :: $(SANDBOX_CONFIG) $(CHECK_ERROR_TESTS) $(ALL_TESTS) check-todo:: $(HAPPY) $(TEST_HAPPY_OPTS) -ad Test.ly $(HC) Test.hs -o happy_test ./happy_test -rm -f ./happy_test $(HAPPY) $(TEST_HAPPY_OPTS) -agd Test.ly $(HC) Test.hs -o happy_test ./happy_test -rm -f ./happy_test $(HAPPY) $(TEST_HAPPY_OPTS) -agcd Test.ly $(HC) Test.hs -o happy_test ./happy_test -rm -f ./happy_test .PHONY: clean clean: $(RM) $(CLEAN_FILES) cabal.sandbox.config: cabal sandbox init --sandbox=../.cabal-sandbox happy-1.20.1.1/tests/Partial.ly0000644000000000000000000001013007346545000014343 0ustar0000000000000000This is a simple test for happy. First thing to declare is the name of your parser, and the type of the tokens the parser reads. > { > import Data.Char > } > %name calc Exp > %partial term Term > %tokentype { Token } The parser will be of type [Token] -> ?, where ? is determined by the production rules. 
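The interesting directive above is %partial: besides the ordinary calc
entry point, Happy generates a second parser, term, which returns as soon
as a complete Term has been recognised rather than insisting that the
whole token stream parses. The tests at the bottom of this file rely on
exactly that behaviour, e.g. (informally)

    runTerm "1 + 2 * 3"  ==>  Factor (Int 1)    -- stops before the '+'

whereas runCalc on the same input must consume every token.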
Now we declare all the possible tokens: > %token > let { TokenLet } > in { TokenIn } > int { TokenInt $$ } > var { TokenVar $$ } > '=' { TokenEq } > '+' { TokenPlus } > '-' { TokenMinus } > '*' { TokenTimes } > '/' { TokenDiv } > '(' { TokenOB } > ')' { TokenCB } The *new* system. %token let ( let ) in ( in ) int ( digit+ ) var ( {alpha}{alphanum}+ ) '=' ( = ) '+' ( + ) '-' ( - ) '*' ( * ) '/' ( / ) '(' ( \( ) ')' ( \) ) %whitespace ( {space}|{tab} ) %newline ( {newline} ) The left hand side are the names of the terminals or tokens, and the right hand side is how to pattern match them. Like yacc, we include %% here, for no real reason. > %% Now we have the production rules. > Exp :: { Exp } > Exp : let var '=' Exp in Exp { Let $2 $4 $6 } > | Exp1 { Exp1 $1 } > > Exp1 :: { Exp1 } > Exp1 : Exp1 '+' Term { Plus $1 $3 } > | Exp1 '-' Term { Minus $1 $3 } > | Term { Term $1 } > > Term :: { Term } > Term : Term '*' Factor { Times $1 $3 } > | Term '/' Factor { Div $1 $3 } > | Factor { Factor $1 } > > Factor :: { Factor } > Factor : int { Int $1 } > | var { Var $1 } > | '(' Exp ')' { Brack $2 } We are simply returning the parsed data structure ! Now we need some extra code, to support this parser, and make in complete: > { All parsers must declair this function, which is called when an error is detected. Note that currently we do no error recovery. > happyError tks = error "Parse error" Now we declare the datastructure that we are parsing. > data Exp = Let String Exp Exp | Exp1 Exp1 deriving Show > data Exp1 = Plus Exp1 Term | Minus Exp1 Term | Term Term deriving Show > data Term = Times Term Factor | Div Term Factor | Factor Factor deriving Show > data Factor = Int Int | Var String | Brack Exp deriving Show The datastructure for the tokens... > data Token > = TokenLet > | TokenIn > | TokenInt Int > | TokenVar String > | TokenEq > | TokenPlus > | TokenMinus > | TokenTimes > | TokenDiv > | TokenOB > | TokenCB .. and a simple lexer that returns this datastructure. > lexer :: String -> [Token] > lexer [] = [] > lexer (c:cs) > | isSpace c = lexer cs > | isAlpha c = lexVar (c:cs) > | isDigit c = lexNum (c:cs) > lexer ('=':cs) = TokenEq : lexer cs > lexer ('+':cs) = TokenPlus : lexer cs > lexer ('-':cs) = TokenMinus : lexer cs > lexer ('*':cs) = TokenTimes : lexer cs > lexer ('/':cs) = TokenDiv : lexer cs > lexer ('(':cs) = TokenOB : lexer cs > lexer (')':cs) = TokenCB : lexer cs > lexNum cs = TokenInt (read num) : lexer rest > where (num,rest) = span isDigit cs > lexVar cs = > case span isAlpha cs of > ("let",rest) -> TokenLet : lexer rest > ("in",rest) -> TokenIn : lexer rest > (var,rest) -> TokenVar var : lexer rest To run the program, call this in gofer, or use some code to print it. > runCalc :: String -> Exp > runCalc = calc . lexer > runTerm :: String -> Term > runTerm = term . lexer Here we test our parser. 
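(The calc cases require the whole input to parse; the term cases only
inspect the prefix recognised by the %partial parser. On "1*2*3" all
three factors belong to a single Term, so the whole product comes back,
while on "1*2+3" only the leading product is a Term and the '+' is left
unconsumed.)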
> main = case runCalc "1 + 2 + 3" of { > (Exp1 (Plus (Plus (Term (Factor (Int 1))) (Factor (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 * 2 + 3" of { > (Exp1 (Plus (Term (Times (Factor (Int 1)) (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 + 2 * 3" of { > (Exp1 (Plus (Term (Factor (Int 1))) (Times (Factor (Int 2)) (Int 3)))) -> > case runCalc "let x = 2 in x * (x - 2)" of { > (Let "x" (Exp1 (Term (Factor (Int 2)))) (Exp1 (Term (Times (Factor (Var "x")) (Brack (Exp1 (Minus (Term (Factor (Var "x"))) (Factor (Int 2))))))))) -> > case runTerm "1 + 2 * 3" of { > Factor (Int 1) -> > case runTerm "1*2+3" of { > Times (Factor (Int 1)) (Int 2) -> > case runTerm "1*2*3" of { > Times (Times (Factor (Int 1)) (Int 2)) (Int 3) -> > print "Test works\n"; > _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } > quit = print "Test failed\n" > } happy-1.20.1.1/tests/Test.ly0000644000000000000000000000733607346545000013704 0ustar0000000000000000This is a simple test for happy. First thing to declare is the name of your parser, and the type of the tokens the parser reads. > { > import Data.Char > } > %name calc > %tokentype { Token } The parser will be of type [Token] -> ?, where ? is determined by the production rules. Now we declare all the possible tokens: > %token > let { TokenLet } > in { TokenIn } > int { TokenInt $$ } > var { TokenVar $$ } > '=' { TokenEq } > '+' { TokenPlus } > '-' { TokenMinus } > '*' { TokenTimes } > '/' { TokenDiv } > '(' { TokenOB } > ')' { TokenCB } The *new* system. %token let ( let ) in ( in ) int ( digit+ ) var ( {alpha}{alphanum}+ ) '=' ( = ) '+' ( + ) '-' ( - ) '*' ( * ) '/' ( / ) '(' ( \( ) ')' ( \) ) %whitespace ( {space}|{tab} ) %newline ( {newline} ) The left hand side are the names of the terminals or tokens, and the right hand side is how to pattern match them. Like yacc, we include %% here, for no real reason. > %% Now we have the production rules. > Exp :: { Exp } > Exp : let var '=' Exp in Exp { Let $2 $4 $6 } > | Exp1 { Exp1 $1 } > > Exp1 :: { Exp1 } > Exp1 : Exp1 '+' Term { Plus $1 $3 } > | Exp1 '-' Term { Minus $1 $3 } > | Term { Term $1 } > > Term :: { Term } > Term : Term '*' Factor { Times $1 $3 } > | Term '/' Factor { Div $1 $3 } > | Factor { Factor $1 } > > Factor :: { Factor } > Factor : int { Int $1 } > | var { Var $1 } > | '(' Exp ')' { Brack $2 } We are simply returning the parsed data structure ! Now we need some extra code, to support this parser, and make in complete: > { All parsers must declair this function, which is called when an error is detected. Note that currently we do no error recovery. > happyError tks = error "Parse error" Now we declare the datastructure that we are parsing. > data Exp = Let String Exp Exp | Exp1 Exp1 > data Exp1 = Plus Exp1 Term | Minus Exp1 Term | Term Term > data Term = Times Term Factor | Div Term Factor | Factor Factor > data Factor = Int Int | Var String | Brack Exp The datastructure for the tokens... > data Token > = TokenLet > | TokenIn > | TokenInt Int > | TokenVar String > | TokenEq > | TokenPlus > | TokenMinus > | TokenTimes > | TokenDiv > | TokenOB > | TokenCB .. and a simple lexer that returns this datastructure. 
> lexer :: String -> [Token] > lexer [] = [] > lexer (c:cs) > | isSpace c = lexer cs > | isAlpha c = lexVar (c:cs) > | isDigit c = lexNum (c:cs) > lexer ('=':cs) = TokenEq : lexer cs > lexer ('+':cs) = TokenPlus : lexer cs > lexer ('-':cs) = TokenMinus : lexer cs > lexer ('*':cs) = TokenTimes : lexer cs > lexer ('/':cs) = TokenDiv : lexer cs > lexer ('(':cs) = TokenOB : lexer cs > lexer (')':cs) = TokenCB : lexer cs > lexNum cs = TokenInt (read num) : lexer rest > where (num,rest) = span isDigit cs > lexVar cs = > case span isAlpha cs of > ("let",rest) -> TokenLet : lexer rest > ("in",rest) -> TokenIn : lexer rest > (var,rest) -> TokenVar var : lexer rest To run the program, call this in gofer, or use some code to print it. > runCalc :: String -> Exp > runCalc = calc . lexer Here we test our parser. > main = case runCalc "1 + 2 + 3" of { > (Exp1 (Plus (Plus (Term (Factor (Int 1))) (Factor (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 * 2 + 3" of { > (Exp1 (Plus (Term (Times (Factor (Int 1)) (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 + 2 * 3" of { > (Exp1 (Plus (Term (Factor (Int 1))) (Times (Factor (Int 2)) (Int 3)))) -> > case runCalc "let x = 2 in x * (x - 2)" of { > (Let "x" (Exp1 (Term (Factor (Int 2)))) (Exp1 (Term (Times (Factor (Var "x")) (Brack (Exp1 (Minus (Term (Factor (Var "x"))) (Factor (Int 2))))))))) -> print "Test works\n"; > _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } > quit = print "Test failed\n" > } happy-1.20.1.1/tests/TestMulti.ly0000644000000000000000000000775207346545000014721 0ustar0000000000000000This is a simple test for happy. First thing to declare is the name of your parser, and the type of the tokens the parser reads. > { > import Data.Char > } > %name calcExp Exp > %name calcTerm Term > %tokentype { Token } The parser will be of type [Token] -> ?, where ? is determined by the production rules. Now we declare all the possible tokens: > %token > let { TokenLet } > in { TokenIn } > int { TokenInt $$ } > var { TokenVar $$ } > '=' { TokenEq } > '+' { TokenPlus } > '-' { TokenMinus } > '*' { TokenTimes } > '/' { TokenDiv } > '(' { TokenOB } > ')' { TokenCB } The *new* system. %token let ( let ) in ( in ) int ( digit+ ) var ( {alpha}{alphanum}+ ) '=' ( = ) '+' ( + ) '-' ( - ) '*' ( * ) '/' ( / ) '(' ( \( ) ')' ( \) ) %whitespace ( {space}|{tab} ) %newline ( {newline} ) The left hand side are the names of the terminals or tokens, and the right hand side is how to pattern match them. Like yacc, we include %% here, for no real reason. > %% Now we have the production rules. > Exp :: { Exp } > Exp : let var '=' Exp in Exp { Let $2 $4 $6 } > | Exp1 { Exp1 $1 } > > Exp1 :: { Exp1 } > Exp1 : Exp1 '+' Term { Plus $1 $3 } > | Exp1 '-' Term { Minus $1 $3 } > | Term { Term $1 } > > Term :: { Term } > Term : Term '*' Factor { Times $1 $3 } > | Term '/' Factor { Div $1 $3 } > | Factor { Factor $1 } > > Factor :: { Factor } > Factor : int { Int $1 } > | var { Var $1 } > | '(' Exp ')' { Brack $2 } We are simply returning the parsed data structure ! Now we need some extra code, to support this parser, and make in complete: > { All parsers must declair this function, which is called when an error is detected. Note that currently we do no error recovery. > happyError tks = error "Parse error" Now we declare the datastructure that we are parsing. 
> data Exp = Let String Exp Exp | Exp1 Exp1 > data Exp1 = Plus Exp1 Term | Minus Exp1 Term | Term Term > data Term = Times Term Factor | Div Term Factor | Factor Factor > data Factor = Int Int | Var String | Brack Exp The datastructure for the tokens... > data Token > = TokenLet > | TokenIn > | TokenInt Int > | TokenVar String > | TokenEq > | TokenPlus > | TokenMinus > | TokenTimes > | TokenDiv > | TokenOB > | TokenCB .. and a simple lexer that returns this datastructure. > lexer :: String -> [Token] > lexer [] = [] > lexer (c:cs) > | isSpace c = lexer cs > | isAlpha c = lexVar (c:cs) > | isDigit c = lexNum (c:cs) > lexer ('=':cs) = TokenEq : lexer cs > lexer ('+':cs) = TokenPlus : lexer cs > lexer ('-':cs) = TokenMinus : lexer cs > lexer ('*':cs) = TokenTimes : lexer cs > lexer ('/':cs) = TokenDiv : lexer cs > lexer ('(':cs) = TokenOB : lexer cs > lexer (')':cs) = TokenCB : lexer cs > lexNum cs = TokenInt (read num) : lexer rest > where (num,rest) = span isDigit cs > lexVar cs = > case span isAlpha cs of > ("let",rest) -> TokenLet : lexer rest > ("in",rest) -> TokenIn : lexer rest > (var,rest) -> TokenVar var : lexer rest To run the program, call this in gofer, or use some code to print it. > runCalcExp :: String -> Exp > runCalcExp = calcExp . lexer > runCalcTerm :: String -> Term > runCalcTerm = calcTerm . lexer Here we test our parser. > main = case runCalcExp "1 + 2 + 3" of { > (Exp1 (Plus (Plus (Term (Factor (Int 1))) (Factor (Int 2))) (Factor (Int 3)))) -> > case runCalcExp "1 * 2 + 3" of { > (Exp1 (Plus (Term (Times (Factor (Int 1)) (Int 2))) (Factor (Int 3)))) -> > case runCalcExp "1 + 2 * 3" of { > (Exp1 (Plus (Term (Factor (Int 1))) (Times (Factor (Int 2)) (Int 3)))) -> > case runCalcExp "let x = 2 in x * (x - 2)" of { > (Let "x" (Exp1 (Term (Factor (Int 2)))) (Exp1 (Term (Times (Factor (Var "x")) (Brack (Exp1 (Minus (Term (Factor (Var "x"))) (Factor (Int 2))))))))) -> > > case runCalcTerm "2 * (3 + 1)" of { > (Times (Factor (Int 2)) (Brack (Exp1 (Plus (Term (Factor (Int 3))) (Factor (Int 1)))))) -> print "Test works\n"; > _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } > quit = print "Test failed\n" > } happy-1.20.1.1/tests/TestPrecedence.ly0000644000000000000000000000663007346545000015656 0ustar0000000000000000This is a simple test for happy using operator precedence. First thing to declare is the name of your parser, and the type of the tokens the parser reads. > { > import Data.Char > } > %name calc > %tokentype { Token } The parser will be of type [Token] -> ?, where ? is determined by the production rules. Now we declare all the possible tokens: > %token > let { TokenLet } > in { TokenIn } > int { TokenInt $$ } > var { TokenVar $$ } > '=' { TokenEq } > '>' { TokenGreater } > '<' { TokenLess } > '+' { TokenPlus } > '-' { TokenMinus } > '*' { TokenTimes } > '/' { TokenDiv } > '(' { TokenOB } > ')' { TokenCB } > UMINUS { TokenFoo } > %nonassoc '>' '<' > %left '+' '-' > %left '*' '/' > %left UMINUS > %% > Exp :: { Exp } > Exp : let var '=' Exp in Exp { Let $2 $4 $6 } > | Exp '>' Exp { Greater $1 $3 } > | Exp '<' Exp { Less $1 $3 } > | Exp '+' Exp { Plus $1 $3 } > | Exp '-' Exp { Minus $1 $3 } > | Exp '*' Exp { Times $1 $3 } > | Exp '/' Exp { Div $1 $3 } > | '-' Exp %prec UMINUS { Uminus $2 } > | '(' Exp ')' { Brack $2 } > | int { Int $1 } > | var { Var $1 } We are simply returning the parsed data structure ! 
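A note on the precedence declarations: UMINUS is a dummy terminal (the
lexer never produces TokenFoo); it exists only to carry a precedence
level. Because it is declared last, it binds tightest, and the rule
'-' Exp %prec UMINUS borrows that level, so negation binds more tightly
than any binary operator. The second test below checks this, informally

    - 1 * - 2 + 3  ==>  ((-1) * (-2)) + 3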
Now we need some extra code, to support this parser, and make in complete: > { All parsers must declair this function, which is called when an error is detected. Note that currently we do no error recovery. > happyError tks = error "Parse error" Now we declare the datastructure that we are parsing. > data Exp > = Let String Exp Exp > | Greater Exp Exp > | Less Exp Exp > | Plus Exp Exp > | Minus Exp Exp > | Times Exp Exp > | Div Exp Exp > | Uminus Exp > | Brack Exp > | Int Int > | Var String > deriving Show The datastructure for the tokens... > data Token > = TokenLet > | TokenIn > | TokenInt Int > | TokenVar String > | TokenEq > | TokenGreater > | TokenLess > | TokenPlus > | TokenMinus > | TokenTimes > | TokenDiv > | TokenOB > | TokenCB > | TokenFoo .. and a simple lexer that returns this datastructure. > lexer :: String -> [Token] > lexer [] = [] > lexer (c:cs) > | isSpace c = lexer cs > | isAlpha c = lexVar (c:cs) > | isDigit c = lexNum (c:cs) > lexer ('=':cs) = TokenEq : lexer cs > lexer ('>':cs) = TokenGreater : lexer cs > lexer ('<':cs) = TokenLess : lexer cs > lexer ('+':cs) = TokenPlus : lexer cs > lexer ('-':cs) = TokenMinus : lexer cs > lexer ('*':cs) = TokenTimes : lexer cs > lexer ('/':cs) = TokenDiv : lexer cs > lexer ('(':cs) = TokenOB : lexer cs > lexer (')':cs) = TokenCB : lexer cs > lexNum cs = TokenInt (read num) : lexer rest > where (num,rest) = span isDigit cs > lexVar cs = > case span isAlpha cs of > ("let",rest) -> TokenLet : lexer rest > ("in",rest) -> TokenIn : lexer rest > (var,rest) -> TokenVar var : lexer rest To run the program, call this in gofer, or use some code to print it. > runCalc :: String -> Exp > runCalc = calc . lexer Here we test our parser. > main = case runCalc "let x = 1 in let y = 2 in x * y + x / y" of { > (Let "x" (Int 1) (Let "y" (Int 2) (Plus (Times (Var "x") (Var "y")) (Div (Var "x") (Var "y"))))) -> > case runCalc "- 1 * - 2 + 3" of { > (Plus (Times (Uminus (Int 1)) (Uminus (Int 2))) (Int 3)) -> > case runCalc "- - - 1 + 2 * 3 - 4" of { > (Minus (Plus (Uminus (Uminus (Uminus (Int 1)))) (Times (Int 2) (Int 3))) (Int 4)) -> > print "Test works\n"; > _ -> quit } ; _ -> quit } ; _ -> quit } > > quit = print "Test failed\n"; > > } happy-1.20.1.1/tests/bogus-token.y0000644000000000000000000000051507346545000015036 0ustar0000000000000000{ module Main where import Control.Exception as Exception } %tokentype { Token } %token A { A } %name parse %% parse : A { () } { data Token = A | B test1 = parse [B] main = do Exception.try (print test1 >> fail "Test failed.") :: IO (Either ErrorCall ()) putStrLn "Test worked" happyError = error "parse error" } happy-1.20.1.1/tests/bug001.ly0000644000000000000000000000062207346545000013752 0ustar0000000000000000> %name parse > %tokentype { Token } > %token Int { TokenInt } > %% > Expr :: { Int } > Expr : Term { $1 } The constant in the next rule would be defaulted to Integer, but it is forced to Int by the type signature of Expr above. This test exposed a bug in the unsafeCoerce method. 
> Term : Int { 42 } > { > main = print (parse [TokenInt]) > > data Token = TokenInt > > happyError = error "" > } happy-1.20.1.1/tests/bug002.y0000644000000000000000000000034707346545000013603 0ustar0000000000000000{ module Main where } %name parser %token foo { 1 } %tokentype { Int } %% -- two productions for the same non-terminal should work Foo : {- empty -} { () } Foo : Foo foo { () } { main = return () happyError = undefined } happy-1.20.1.1/tests/error001.stderr0000644000000000000000000000024007346545000015201 0ustar0000000000000000error001.y: Multiple rules for 'foo' error001.y: 8: unknown identifier ''a'' error001.y: 10: unknown identifier ''a'' error001.y: 11: unknown identifier ''b'' happy-1.20.1.1/tests/error001.stdout0000644000000000000000000000000007346545000015212 0ustar0000000000000000happy-1.20.1.1/tests/error001.y0000644000000000000000000000012007346545000014143 0ustar0000000000000000%name foo %tokentype { Token } %% foo : 'a' { } bar : 'a' { } foo : 'b' { } happy-1.20.1.1/tests/issue91.y0000644000000000000000000000110007346545000014072 0ustar0000000000000000-- See for more information %name parse prod %tokentype { Tok } %monad { P } { bindP } { returnP } %error { error "parse error" } %lexer { lexer } { EOF } %token IDENT { Identifier $$ } %% prod :: { () } : IDENT { () } { data Tok = EOF | Identifier String type P a = String -> (a, String) bindP :: P a -> (a -> P b) -> P b bindP p f s = let (x,s') = p s in f x s' returnP :: a -> P a returnP = (,) lexer :: (Tok -> P a) -> P a lexer cont s = cont (case s of { "" -> EOF; _ -> Identifier s }) "" main = pure () } happy-1.20.1.1/tests/issue93.y0000644000000000000000000020731607346545000014115 0ustar0000000000000000-- See for more information -- This is an example of a grammar that has more than 2^15 entries in `happyTable` (39817). { import System.Exit import Data.Char } %name parseLit lit %name parseAttr export_attribute %name parseTy export_ty %name parsePat pat %name parseStmt stmt %name parseExpr expr %name parseItem mod_item %name parseSourceFileContents source_file %name parseBlock export_block %name parseImplItem impl_item %name parseTraitItem trait_item %name parseTt token_tree %name parseTokenStream token_stream %name parseTyParam ty_param %name parseLifetimeDef lifetime_def %name parseWhereClause where_clause %name parseGenerics generics %tokentype { Token } %lexer { lexNonSpace `bindP` } { Eof } %monad { P } { bindP } { returnP } %error { parseError } %expect 0 %token '=' { Equal } '<' { Less } '>' { Greater } '!' { Exclamation } '~' { Tilde } '+' { Plus } '-' { Minus } '*' { Star } '/' { Slash } '%' { Percent } '^' { Caret } '&' { Ampersand } '|' { Pipe } '@' { At } '...' { DotDotDot } '..' { DotDot } '.' { Dot } ',' { Comma } ';' { Semicolon } '::' { ModSep } ':' { Colon } '->' { RArrow } '<-' { LArrow } '=>' { FatArrow } '#' { Pound } '$' { Dollar } '?' { Question } '#!' 
{ Shebang } '||' { PipePipe } '&&' { AmpersandAmpersand } '>=' { GreaterEqual } '>>=' { GreaterGreaterEqual } '<<' { LessLess } '>>' { GreaterGreater } '==' { EqualEqual } '!=' { NotEqual } '<=' { LessEqual } '<<=' { LessLessEqual } '-=' { MinusEqual } '&=' { AmpersandEqual } '|=' { PipeEqual } '+=' { PlusEqual } '*=' { StarEqual } '/=' { SlashEqual } '^=' { CaretEqual } '%=' { PercentEqual } '(' { OpenParen } '[' { OpenBracket } '{' { OpenBrace } ')' { CloseParen } ']' { CloseBracket } '}' { CloseBrace } byte { ByteTok{} } char { CharTok{} } int { IntegerTok{} } float { FloatTok{} } str { StrTok{} } byteStr { ByteStrTok{} } rawStr { StrRawTok{} } rawByteStr { ByteStrRawTok{} } as { IdentTok "as" } box { IdentTok "box" } break { IdentTok "break" } const { IdentTok "const" } continue { IdentTok "continue" } crate { IdentTok "crate" } else { IdentTok "else" } enum { IdentTok "enum" } extern { IdentTok "extern" } false { IdentTok "false" } fn { IdentTok "fn" } for { IdentTok "for" } if { IdentTok "if" } impl { IdentTok "impl" } in { IdentTok "in" } let { IdentTok "let" } loop { IdentTok "loop" } match { IdentTok "match" } mod { IdentTok "mod" } move { IdentTok "move" } mut { IdentTok "mut" } pub { IdentTok "pub" } ref { IdentTok "ref" } return { IdentTok "return" } Self { IdentTok "Self" } self { IdentTok "self" } static { IdentTok "static" } struct { IdentTok "struct" } super { IdentTok "super" } trait { IdentTok "trait" } true { IdentTok "true" } type { IdentTok "type" } unsafe { IdentTok "unsafe" } use { IdentTok "use" } where { IdentTok "where" } while { IdentTok "while" } do { IdentTok "do" } abstract { IdentTok "abstract" } alignof { IdentTok "alignof" } become { IdentTok "become" } final { IdentTok "final" } macro { IdentTok "macro" } offsetof { IdentTok "offsetof" } override { IdentTok "override" } priv { IdentTok "priv" } proc { IdentTok "proc" } pure { IdentTok "pure" } sizeof { IdentTok "sizeof" } typeof { IdentTok "typeof" } unsized { IdentTok "unsized" } virtual { IdentTok "virtual" } yield { IdentTok "yield" } default { IdentTok "default" } union { IdentTok "union" } catch { IdentTok "catch" } outerDoc { OuterDoc } innerDoc { InnerDoc } IDENT { IdentTok{} } '_' { Underscore } LIFETIME { LifetimeTok _ } ntItem { Interpolated 0 } ntBlock { Interpolated 1 } ntStmt { Interpolated 2 } ntPat { Interpolated 3 } ntExpr { Interpolated 4 } ntTy { Interpolated 5 } ntIdent { Interpolated 6 } ntPath { Interpolated 7 } ntTT { Interpolated 8 } ntArm { Interpolated 9 } ntImplItem { Interpolated 10 } ntTraitItem { Interpolated 11 } ntGenerics { Interpolated 12 } ntWhereClause { Interpolated 13 } ntArg { Interpolated 14 } ntLit { Interpolated 15 } %nonassoc SEG %nonassoc mut DEF EQ '::' %nonassoc IDENT ntIdent default union catch self %nonassoc box return break continue IMPLTRAIT LAMBDA %right '=' '>>=' '<<=' '-=' '+=' '*=' '/=' '^=' '|=' '&=' '%=' %right '<-' %nonassoc SINGLERNG %nonassoc INFIXRNG %nonassoc POSTFIXRNG %nonassoc PREFIXRNG %nonassoc '..' '...' %left '||' %left '&&' %left '==' '!=' '<' '>' '<=' '>=' %left '|' %left '^' %left '&' %left '<<' '>>' %left '+' '-' %left '*' '/' '%' %nonassoc ':' as %nonassoc UNARY %nonassoc FIELD VIS PATH WHERE NOSEMI %nonassoc '?' '.' %nonassoc '{' ntBlock '[' '(' '!' 
';' %% ident :: { Int } : ntIdent { 0 } | union { 1 } | default { 2 } | catch { 3 } | IDENT { 4 } gt :: { Int } : {- empty -} { 5 } some(p) :: { Int } : some(p) p { 6 } | p { 7 } many(p) :: { Int } : some(p) { 8 } | {- empty -} { 9 } sep_by1(p,sep) :: { Int } : sep_by1(p,sep) sep p { 10 } | p { 11 } sep_by(p,sep) :: { Int } : sep_by1(p,sep) { 12 } | {- empty -} { 13 } sep_by1T(p,sep) :: { Int } : sep_by1(p,sep) sep { 14 } | sep_by1(p,sep) { 15 } sep_byT(p,sep) :: { Int } : sep_by1T(p,sep) { 16 } | {- empty -} { 17 } source_file :: { Int } : inner_attrs many(mod_item) { 18 } | many(mod_item) { 19 } outer_attribute :: { Int } : '#' '[' mod_path token_stream ']' { 20 } | outerDoc { 21 } inner_attribute :: { Int } : '#' '!' '[' mod_path token_stream ']' { 22 } | '#!' '[' mod_path token_stream ']' { 23 } | innerDoc { 24 } inner_attrs :: { Int } : inner_attrs inner_attribute { 25 } | inner_attribute { 26 } lit :: { Int } : ntLit { 27 } | byte { 28 } | char { 29 } | int { 30 } | float { 31 } | true { 32 } | false { 33 } | string { 34 } string :: { Int } : str { 35 } | rawStr { 36 } | byteStr { 37 } | rawByteStr { 38 } qual_path(segs) :: { Int } : '<' qual_path_suf(segs) { 39 } | lt_ty_qual_path as ty_path '>' '::' segs { 40 } qual_path_suf(segs) :: { Int } : ty '>' '::' segs { 41 } | ty as ty_path '>' '::' segs { 42 } lt_ty_qual_path :: { Int } : '<<' qual_path_suf(path_segments_without_colons) { 43 } generic_values :: { Int } : '<' sep_by1(lifetime,',') ',' sep_by1T(ty,',') gt '>' { 45 } | '<' sep_by1(lifetime,',') ',' sep_by1T(binding,',') gt '>' { 46 } | '<' sep_by1T(lifetime,',') gt '>' { 47 } | '<' sep_by1(ty,',') ',' sep_by1T(binding,',') gt '>' { 48 } | '<' sep_by1T(ty,',') gt '>' { 49 } | '<' sep_by1T(binding,',') gt '>' { 50 } | '<' gt '>' { 51 } | lt_ty_qual_path ',' sep_by1T(ty,',') gt '>' { 53 } | lt_ty_qual_path ',' sep_by1T(binding,',') gt '>' { 54 } | lt_ty_qual_path gt '>' { 55 } binding :: { Int } : ident '=' ty { 56 } ty_path :: { Int } : ntPath { 57 } | path_segments_without_colons { 58 } | '::' path_segments_without_colons { 59 } ty_qual_path :: { Int } : qual_path(path_segments_without_colons) { 60 } path_segments_without_colons :: { Int } : sep_by1(path_segment_without_colons, '::') %prec SEG { 61 } path_segment_without_colons :: { Int } : self_or_ident path_parameter1 { 62 } path_parameter1 :: { Int } : generic_values { 63 } | '(' sep_byT(ty,',') ')' { 64 } | '(' sep_byT(ty,',') ')' '->' ty_no_plus { 65 } | {- empty -} %prec IDENT { 66 } expr_path :: { Int } : ntPath { 67 } | path_segments_with_colons { 68 } | '::' path_segments_with_colons { 69 } expr_qual_path :: { Int } : qual_path(path_segments_with_colons) { 70 } path_segments_with_colons :: { Int } : self_or_ident { 71 } | path_segments_with_colons '::' self_or_ident { 72 } | path_segments_with_colons '::' generic_values { 73 } mod_path :: { Int } : ntPath { 74 } | self_or_ident { 75 } | '::' self_or_ident { 76 } | mod_path '::' ident { 77 } lifetime :: { Int } : LIFETIME { 78 } trait_ref :: { Int } : ty_path { 79 } ty :: { Int } : ty_no_plus { 80 } | poly_trait_ref_mod_bound '+' sep_by1T(ty_param_bound_mod,'+') { 81 } ty_no_plus :: { Int } : ntTy { 82 } | no_for_ty { 83 } | for_ty_no_plus { 84 } ty_prim :: { Int } : no_for_ty_prim { 85 } | for_ty_no_plus { 86 } | poly_trait_ref_mod_bound '+' sep_by1T(ty_param_bound_mod,'+') { 87 } no_for_ty :: { Int } : no_for_ty_prim { 88 } | '(' ')' { 89 } | '(' ty ')' { 90 } | '(' ty ',' ')' { 91 } | '(' ty ',' sep_by1T(ty,',') ')' { 92 } | ty_qual_path { 93 } no_for_ty_prim :: { 
Int } : '_' { 94 } | '!' { 95 } | '[' ty ']' { 96 } | '*' ty_no_plus { 97 } | '*' const ty_no_plus { 98 } | '*' mut ty_no_plus { 99 } | '&' ty_no_plus { 100 } | '&' lifetime ty_no_plus { 101 } | '&' mut ty_no_plus { 102 } | '&' lifetime mut ty_no_plus { 103 } | '&&' ty_no_plus { 104 } | '&&' lifetime ty_no_plus { 105 } | '&&' mut ty_no_plus { 106 } | '&&' lifetime mut ty_no_plus { 107 } | ty_path %prec PATH { 108 } | ty_mac { 109 } | unsafe extern abi fn fn_decl(arg_general) { 110 } | unsafe fn fn_decl(arg_general) { 111 } | extern abi fn fn_decl(arg_general) { 112 } | fn fn_decl(arg_general) { 113 } | typeof '(' expr ')' { 114 } | '[' ty ';' expr ']' { 115 } | '?' trait_ref { 116 } | '?' for_lts trait_ref { 117 } for_ty_no_plus :: { Int } : for_lts unsafe extern abi fn fn_decl(arg_general) { 118 } | for_lts unsafe fn fn_decl(arg_general) { 119 } | for_lts extern abi fn fn_decl(arg_general) { 120 } | for_lts fn fn_decl(arg_general) { 121 } | for_lts trait_ref { 122 } impl_ty :: { Int } : impl sep_by1(ty_param_bound_mod,'+') %prec IMPLTRAIT { 123 } lifetime_mut :: { Int } : lifetime mut { 124 } | lifetime { 125 } | mut { 126 } | {- empty -} { 127 } fn_decl(arg) :: { Int } : '(' sep_by1(arg,',') ',' '...' ')' ret_ty { 128 } | '(' sep_byT(arg,',') ')' ret_ty { 129 } fn_decl_with_self_general :: { Int } : '(' arg_self_general ',' sep_byT(arg_general,',') ')' ret_ty { 130 } | '(' arg_self_general ')' ret_ty { 131 } | '(' ')' ret_ty { 132 } fn_decl_with_self_named :: { Int } : '(' arg_self_named ',' sep_by1(arg_named,',') ',' ')' ret_ty { 133 } | '(' arg_self_named ',' sep_by1(arg_named,',') ')' ret_ty { 134 } | '(' arg_self_named ',' ')' ret_ty { 135 } | '(' arg_self_named ')' ret_ty { 136 } | fn_decl(arg_named) { 137 } ty_param_bound :: { Int } : lifetime { 138 } | poly_trait_ref { 139 } poly_trait_ref_mod_bound :: { Int } : poly_trait_ref { 140 } | '?' poly_trait_ref { 141 } ty_param_bound_mod :: { Int } : ty_param_bound { 142 } | '?' poly_trait_ref { 143 } abi :: { Int } : str { 144 } | {- empty -} { 145 } ret_ty :: { Int } : '->' ty_no_plus { 146 } | '->' impl_ty { 147 } | {- empty -} { 148 } poly_trait_ref :: { Int } : trait_ref { 149 } | for_lts trait_ref { 150 } for_lts :: { Int } : for '<' sep_byT(lifetime_def,',') '>' { 151 } lifetime_def :: { Int } : many(outer_attribute) lifetime ':' sep_by1T(lifetime,'+') { 152 } | many(outer_attribute) lifetime { 153 } arg_named :: { Int } : ntArg { 154 } | pat ':' ty { 155 } arg_general :: { Int } : ntArg { 156 } | ty { 157 } | '_' ':' ty { 158 } | ident ':' ty { 159 } | mut ident ':' ty { 160 } | '&' '_' ':' ty { 161 } | '&' ident ':' ty { 162 } | '&&' '_' ':' ty { 163 } | '&&' ident ':' ty { 164 } arg_self_general :: { Int } : mut self { 165 } | self ':' ty { 166 } | mut self ':' ty { 167 } | arg_general { 168 } arg_self_named :: { Int } : self { 169 } | mut self { 170 } | '&' self { 171 } | '&' lifetime self { 172 } | '&' mut self { 173 } | '&' lifetime mut self { 174 } | self ':' ty { 175 } | mut self ':' ty { 176 } lambda_arg :: { Int } : ntArg { 177 } | pat ':' ty { 178 } | pat { 179 } pat :: { Int } : ntPat { 180 } | '_' { 181 } | '&' mut pat { 182 } | '&' pat { 183 } | '&&' mut pat { 184 } | '&&' pat { 185 } | lit_expr { 186 } | '-' lit_expr { 187 } | box pat { 188 } | binding_mode1 ident '@' pat { 189 } | binding_mode1 ident { 190 } | ident '@' pat { 191 } | expr_path { 192 } | expr_qual_path { 193 } | lit_or_path '...' lit_or_path { 194 } | expr_path '{' '..' 
'}' { 195 } | expr_path '{' pat_fields '}' { 196 } | expr_path '(' pat_tup ')' { 197 } | expr_mac { 198 } | '[' pat_slice ']' { 199 } | '(' pat_tup ')' { 200 } pat_tup :: { Int } : sep_by1(pat,',') ',' '..' ',' sep_by1(pat,',') { 201 } | sep_by1(pat,',') ',' '..' ',' sep_by1(pat,',') ',' { 202 } | sep_by1(pat,',') ',' '..' { 203 } | sep_by1(pat,',') { 204 } | sep_by1(pat,',') ',' { 205 } | '..' ',' sep_by1(pat,',') { 206 } | '..' ',' sep_by1(pat,',') ',' { 207 } | '..' { 208 } | {- empty -} { 209 } pat_slice :: { Int } : sep_by1(pat,',') ',' '..' ',' sep_by1T(pat,',') { 210 } | sep_by1(pat,',') ',' '..' { 211 } | sep_by1(pat,',') '..' ',' sep_by1T(pat,',') { 212 } | sep_by1(pat,',') '..' { 213 } | sep_by1T(pat,',') { 214 } | '..' ',' sep_by1T(pat,',') { 215 } | '..' { 216 } | {- empty -} { 217 } lit_or_path :: { Int } : expr_path { 218 } | expr_qual_path { 219 } | '-' lit_expr { 220 } | lit_expr { 221 } pat_fields :: { Int } : sep_byT(pat_field,',') { 222 } | sep_by1(pat_field,',') ',' '..' { 223 } pat_field :: { Int } : binding_mode ident { 224 } | box binding_mode ident { 225 } | binding_mode ident ':' pat { 226 } binding_mode1 :: { Int } : ref mut { 227 } | ref { 228 } | mut { 229 } binding_mode :: { Int } : binding_mode1 { 230 } | {- empty -} { 231 } gen_expression(lhs,rhs,rhs2) :: { Int } : ntExpr { 232 } | lit_expr { 233 } | '[' sep_byT(expr,',') ']' { 234 } | '[' inner_attrs sep_byT(expr,',') ']' { 235 } | '[' expr ';' expr ']' { 236 } | expr_mac { 237 } | expr_path %prec PATH { 238 } | expr_qual_path { 239 } | '*' rhs %prec UNARY { 240 } | '!' rhs %prec UNARY { 241 } | '-' rhs %prec UNARY { 242 } | '&' rhs %prec UNARY { 243 } | '&' mut rhs %prec UNARY { 244 } | '&&' rhs %prec UNARY { 245 } | '&&' mut rhs %prec UNARY { 246 } | box rhs %prec UNARY { 247 } | left_gen_expression(lhs,rhs,rhs2) { 248 } | '..' rhs2 %prec PREFIXRNG { 249 } | '...' rhs2 %prec PREFIXRNG { 250 } | '..' %prec SINGLERNG { 251 } | '...' %prec SINGLERNG { 252 } | return { 253 } | return rhs { 254 } | continue { 255 } | continue lifetime { 256 } | break { 257 } | break rhs { 258 } | break lifetime { 259 } | break lifetime rhs %prec break { 260 } | move lambda_args rhs %prec LAMBDA { 261 } | lambda_args rhs %prec LAMBDA { 262 } left_gen_expression(lhs,rhs,rhs2) :: { Int } : postfix_blockexpr(lhs) { 263 } | lhs '[' expr ']' { 264 } | lhs '(' sep_byT(expr,',') ')' { 265 } | lhs ':' ty_no_plus { 266 } | lhs as ty_no_plus { 267 } | lhs '*' rhs { 268 } | lhs '/' rhs { 269 } | lhs '%' rhs { 270 } | lhs '+' rhs { 271 } | lhs '-' rhs { 272 } | lhs '<<' rhs { 273 } | lhs '>>' rhs { 274 } | lhs '&' rhs { 275 } | lhs '^' rhs { 276 } | lhs '|' rhs { 277 } | lhs '==' rhs { 278 } | lhs '!=' rhs { 279 } | lhs '<' rhs { 280 } | lhs '>' rhs { 281 } | lhs '<=' rhs { 282 } | lhs '>=' rhs { 283 } | lhs '&&' rhs { 284 } | lhs '||' rhs { 285 } | lhs '..' %prec POSTFIXRNG { 286 } | lhs '...' %prec POSTFIXRNG { 287 } | lhs '..' rhs2 %prec INFIXRNG { 288 } | lhs '...' rhs2 %prec INFIXRNG { 289 } | lhs '<-' rhs { 290 } | lhs '=' rhs { 291 } | lhs '>>=' rhs { 292 } | lhs '<<=' rhs { 293 } | lhs '-=' rhs { 294 } | lhs '+=' rhs { 295 } | lhs '*=' rhs { 296 } | lhs '/=' rhs { 297 } | lhs '^=' rhs { 298 } | lhs '|=' rhs { 299 } | lhs '&=' rhs { 300 } | lhs '%=' rhs { 301 } postfix_blockexpr(lhs) :: { Int } : lhs '?' { 302 } | lhs '.' ident %prec FIELD { 303 } | lhs '.' ident '(' sep_byT(expr,',') ')' { 304 } | lhs '.' ident '::' '<' sep_byT(ty,',') '>' '(' sep_byT(expr,',') ')' { 305 } | lhs '.' 
int { 306 } expr :: { Int } : gen_expression(expr,expr,expr) { 307 } | paren_expr { 308 } | struct_expr { 309 } | block_expr { 310 } | lambda_expr_block { 311 } nostruct_expr :: { Int } : gen_expression(nostruct_expr,nostruct_expr,nonstructblock_expr) { 312 } | paren_expr { 313 } | block_expr { 314 } nonstructblock_expr :: { Int } : gen_expression(nonstructblock_expr,nostruct_expr,nonstructblock_expr) { 315 } | paren_expr { 316 } | block_like_expr { 317 } | unsafe inner_attrs_block { 318 } nonblock_expr :: { Int } : gen_expression(nonblock_expr,expr,expr) { 319 } | paren_expr { 320 } | struct_expr { 321 } | lambda_expr_block { 322 } blockpostfix_expr :: { Int } : postfix_blockexpr(block_like_expr) { 323 } | postfix_blockexpr(vis_safety_block) { 324 } | left_gen_expression(blockpostfix_expr,expr,expr) { 325 } lit_expr :: { Int } : lit { 326 } block_expr :: { Int } : block_like_expr { 327 } | inner_attrs_block { 328 } | unsafe inner_attrs_block { 329 } block_like_expr :: { Int } : if_expr { 330 } | loop inner_attrs_block { 331 } | lifetime ':' loop inner_attrs_block { 332 } | for pat in nostruct_expr inner_attrs_block { 333 } | lifetime ':' for pat in nostruct_expr inner_attrs_block { 334 } | while nostruct_expr inner_attrs_block { 335 } | lifetime ':' while nostruct_expr inner_attrs_block { 336 } | while let pat '=' nostruct_expr inner_attrs_block { 337 } | lifetime ':' while let pat '=' nostruct_expr inner_attrs_block { 338 } | match nostruct_expr '{' '}' { 339 } | match nostruct_expr '{' inner_attrs '}' { 340 } | match nostruct_expr '{' arms '}' { 341 } | match nostruct_expr '{' inner_attrs arms '}' { 342 } | expr_path '!' '{' token_stream '}' { 343 } | do catch inner_attrs_block { 344 } if_expr :: { Int } : if nostruct_expr block else_expr { 345 } | if let pat '=' nostruct_expr block else_expr { 346 } else_expr :: { Int } : else block { 347 } | else if_expr { 348 } | {- empty -} { 349 } arms :: { Int } : ntArm { 350 } | ntArm arms { 351 } | many(outer_attribute) sep_by1(pat,'|') arm_guard '=>' expr_arms { 352 } arm_guard :: { Int } : {- empty -} { 353 } | if expr { 354 } comma_arms :: { Int } : {- empty -} { 355 } | ',' { 356 } | ',' arms { 357 } expr_arms :: { Int } : nonblock_expr comma_arms { 358 } | blockpostfix_expr comma_arms { 359 } | vis_safety_block comma_arms { 360 } | vis_safety_block arms { 361 } | block_like_expr comma_arms { 362 } | block_like_expr arms { 363 } paren_expr :: { Int } : '(' ')' { 364 } | '(' inner_attrs ')' { 365 } | '(' expr ')' { 366 } | '(' inner_attrs expr ')' { 367 } | '(' expr ',' ')' { 368 } | '(' inner_attrs expr ',' ')' { 369 } | '(' expr ',' sep_by1T(expr,',') ')' { 370 } | '(' inner_attrs expr ',' sep_by1T(expr,',') ')' { 371 } lambda_expr_block :: { Int } : move lambda_args '->' ty_no_plus block { 372 } | lambda_args '->' ty_no_plus block { 373 } lambda_args :: { Int } : '||' { 374 } | '|' sep_byT(lambda_arg,',') '|' { 375 } struct_expr :: { Int } : expr_path '{' '..' expr '}' { 376 } | expr_path '{' inner_attrs '..' expr '}' { 377 } | expr_path '{' sep_by1(field,',') ',' '..' expr '}' { 378 } | expr_path '{' inner_attrs sep_by1(field,',') ',' '..' 
expr '}' { 379 } | expr_path '{' sep_byT(field,',') '}' { 380 } | expr_path '{' inner_attrs sep_byT(field,',') '}' { 381 } field :: { Int } : ident ':' expr { 382 } | ident { 383 } vis_safety_block :: { Int } : pub_or_inherited safety inner_attrs_block { 384 } vis_union_nonblock_expr :: { Int } : union_expr { 385 } | left_gen_expression(vis_union_nonblock_expr, expr, expr) { 386 } union_expr :: { Int } : pub_or_inherited union { 387 } stmt :: { Int } : ntStmt { 388 } | many(outer_attribute) let pat ':' ty initializer ';' { 389 } | many(outer_attribute) let pat initializer ';' { 390 } | many(outer_attribute) nonblock_expr ';' { 391 } | many(outer_attribute) block_like_expr ';' { 392 } | many(outer_attribute) blockpostfix_expr ';' { 393 } | many(outer_attribute) vis_union_nonblock_expr ';' { 394 } | many(outer_attribute) block_like_expr %prec NOSEMI { 395 } | many(outer_attribute) vis_safety_block ';' { 396 } | many(outer_attribute) vis_safety_block %prec NOSEMI { 397 } | gen_item(pub_or_inherited) { 398 } | many(outer_attribute) expr_path '!' ident '[' token_stream ']' ';' { 399 } | many(outer_attribute) expr_path '!' ident '(' token_stream ')' ';' { 400 } | many(outer_attribute) expr_path '!' ident '{' token_stream '}' { 401 } pub_or_inherited :: { Int } : pub %prec VIS { 402 } | {- empty -} %prec VIS { 403 } stmtOrSemi :: { Int } : ';' { 404 } | stmt { 405 } stmts_possibly_no_semi :: { Int } : stmtOrSemi stmts_possibly_no_semi { 406 } | stmtOrSemi { 407 } | many(outer_attribute) nonblock_expr { 408 } | many(outer_attribute) blockpostfix_expr { 409 } initializer :: { Int } : '=' expr { 410 } | {- empty -} { 411 } block :: { Int } : ntBlock { 412 } | '{' '}' { 413 } | '{' stmts_possibly_no_semi '}' { 414 } inner_attrs_block :: { Int } : block { 415 } | '{' inner_attrs '}' { 416 } | '{' inner_attrs stmts_possibly_no_semi '}' { 417 } gen_item(vis) :: { Int } : many(outer_attribute) vis static ident ':' ty '=' expr ';' { 418 } | many(outer_attribute) vis static mut ident ':' ty '=' expr ';' { 419 } | many(outer_attribute) vis const ident ':' ty '=' expr ';' { 420 } | many(outer_attribute) vis type ident generics where_clause '=' ty ';' { 421 } | many(outer_attribute) vis use view_path ';' { 422 } | many(outer_attribute) vis safety extern crate ident ';' { 423 } | many(outer_attribute) vis safety extern crate ident as ident ';' { 424 } | many(outer_attribute) vis const safety fn ident generics fn_decl(arg_named) where_clause inner_attrs_block { 425 } | many(outer_attribute) vis safety extern abi fn ident generics fn_decl(arg_named) where_clause inner_attrs_block { 426 } | many(outer_attribute) vis safety fn ident generics fn_decl(arg_named) where_clause inner_attrs_block { 427 } | many(outer_attribute) vis mod ident ';' { 428 } | many(outer_attribute) vis mod ident '{' many(mod_item) '}' { 429 } | many(outer_attribute) vis mod ident '{' inner_attrs many(mod_item) '}' { 430 } | many(outer_attribute) vis safety extern abi '{' many(foreign_item) '}' { 431 } | many(outer_attribute) vis safety extern abi '{' inner_attrs many(foreign_item) '}' { 432 } | many(outer_attribute) vis struct ident generics struct_decl_args { 433 } | many(outer_attribute) vis union ident generics struct_decl_args { 434 } | many(outer_attribute) vis enum ident generics where_clause '{' sep_byT(enum_def,',') '}' { 435 } | many(outer_attribute) vis safety trait ident generics where_clause '{' many(trait_item) '}' { 437 } | many(outer_attribute) vis safety impl generics ty_prim where_clause '{' impl_items '}' { 438 } | 
many(outer_attribute) vis default safety impl generics ty_prim where_clause '{' impl_items '}' { 439 } | many(outer_attribute) vis safety impl generics '(' ty_no_plus ')' where_clause '{' impl_items '}' { 440 } | many(outer_attribute) vis default safety impl generics '(' ty_no_plus ')' where_clause '{' impl_items '}' { 441 } | many(outer_attribute) vis safety impl generics '!' trait_ref for ty where_clause '{' impl_items '}' { 442 } | many(outer_attribute) vis default safety impl generics '!' trait_ref for ty where_clause '{' impl_items '}' { 443 } | many(outer_attribute) vis safety impl generics trait_ref for ty where_clause '{' impl_items '}' { 444 } | many(outer_attribute) vis default safety impl generics trait_ref for ty where_clause '{' impl_items '}' { 445 } | many(outer_attribute) vis safety impl generics trait_ref for '..' '{' '}' { 446 } mod_item :: { Int } : ntItem { 447 } | gen_item(vis) { 448 } | many(outer_attribute) expr_path '!' ident '[' token_stream ']' ';' { 449 } | many(outer_attribute) expr_path '!' '[' token_stream ']' ';' { 450 } | many(outer_attribute) expr_path '!' ident '(' token_stream ')' ';' { 451 } | many(outer_attribute) expr_path '!' '(' token_stream ')' ';' { 452 } | many(outer_attribute) expr_path '!' ident '{' token_stream '}' { 453 } | many(outer_attribute) expr_path '!' '{' token_stream '}' { 454 } foreign_item :: { Int } : many(outer_attribute) vis static ident ':' ty ';' { 455 } | many(outer_attribute) vis static mut ident ':' ty ';' { 456 } | many(outer_attribute) vis fn ident generics fn_decl(arg_named) where_clause ';' { 457 } generics :: { Int } : ntGenerics { 458 } | '<' sep_by1(lifetime_def,',') ',' sep_by1T(ty_param,',') gt '>' { 459 } | '<' sep_by1T(lifetime_def,',') gt '>' { 460 } | '<' sep_by1T(ty_param,',') gt '>' { 461 } | '<' gt '>' { 462 } | {- empty -} { 463 } ty_param :: { Int } : many(outer_attribute) ident { 464 } | many(outer_attribute) ident ':' sep_by1T(ty_param_bound_mod,'+') { 465 } | many(outer_attribute) ident '=' ty { 466 } | many(outer_attribute) ident ':' sep_by1T(ty_param_bound_mod,'+') '=' ty { 467 } struct_decl_args :: { Int } : where_clause ';' { 468 } | where_clause '{' sep_byT(struct_decl_field,',') '}' { 469 } | '(' sep_byT(tuple_decl_field,',') ')' where_clause ';' { 470 } struct_decl_field :: { Int } : many(outer_attribute) vis ident ':' ty { 471 } tuple_decl_field :: { Int } : many(outer_attribute) vis ty { 472 } enum_def :: { Int } : many(outer_attribute) ident '{' sep_byT(struct_decl_field,',') '}' { 473 } | many(outer_attribute) ident '(' sep_byT(tuple_decl_field,',') ')' { 474 } | many(outer_attribute) ident initializer { 475 } where_clause :: { Int } : {- empty -} { 476 } | ntWhereClause { 477 } | where sep_by(where_predicate,',') %prec WHERE { 478 } | where sep_by1(where_predicate,',') ',' %prec WHERE { 479 } where_predicate :: { Int } : lifetime { 480 } | lifetime ':' sep_by1T(lifetime,'+') { 481 } | no_for_ty %prec EQ { 482 } | no_for_ty '=' ty { 483 } | no_for_ty ':' sep_by1T(ty_param_bound_mod,'+') { 484 } | for_lts no_for_ty { 485 } | for_lts no_for_ty ':' sep_by1T(ty_param_bound_mod,'+') { 486 } impl_items :: { Int } : many(impl_item) { 487 } | inner_attrs many(impl_item) { 488 } impl_item :: { Int } : many(outer_attribute) vis def type ident '=' ty ';' { 489 } | many(outer_attribute) vis def const ident ':' ty '=' expr ';' { 490 } | many(outer_attribute) def mod_mac { 491 } trait_item :: { Int } : ntTraitItem { 494 } | many(outer_attribute) const ident ':' ty initializer ';' { 495 } | 
many(outer_attribute) mod_mac { 496 } | many(outer_attribute) type ident ';' { 497 } | many(outer_attribute) type ident '=' ty ';' { 498 } | many(outer_attribute) type ident ':' sep_by1T(ty_param_bound_mod,'+') ';' { 499 } safety :: { Int } : {- empty -} { 503 } | unsafe { 504 } ext_abi :: { Int } : {- empty -} { 505 } | extern abi { 506 } vis :: { Int } : {- empty -} %prec VIS { 507 } | pub %prec VIS { 508 } | pub '(' crate ')' { 509 } | pub '(' in mod_path ')' { 510 } | pub '(' super ')' { 511 } | pub '(' self ')' { 512 } def :: { Int } : {- empty -} %prec DEF { 513 } | default { 514 } view_path :: { Int } : '::' sep_by1(self_or_ident,'::') { 515 } | '::' sep_by1(self_or_ident,'::') as ident { 516 } | '::' '*' { 517 } | '::' sep_by1(self_or_ident,'::') '::' '*' { 518 } | '::' sep_by1(self_or_ident,'::') '::' '{' sep_byT(plist,',') '}' { 519 } | '::' '{' sep_byT(plist,',') '}' { 520 } | sep_by1(self_or_ident,'::') { 521 } | sep_by1(self_or_ident,'::') as ident { 522 } | '*' { 523 } | sep_by1(self_or_ident,'::') '::' '*' { 524 } | sep_by1(self_or_ident,'::') '::' '{' sep_byT(plist,',') '}' { 525 } | '{' sep_byT(plist,',') '}' { 526 } self_or_ident :: { Int } : ident { 527 } | self { 528 } | Self { 529 } | super { 530 } plist :: { Int } : self_or_ident { 531 } | self_or_ident as ident { 532 } expr_mac :: { Int } : expr_path '!' '[' token_stream ']' { 533 } | expr_path '!' '(' token_stream ')' { 534 } ty_mac :: { Int } : ty_path '!' '[' token_stream ']' { 535 } | ty_path '!' '{' token_stream '}' { 536 } | ty_path '!' '(' token_stream ')' { 537 } mod_mac :: { Int } : mod_path '!' '[' token_stream ']' ';' { 538 } | mod_path '!' '{' token_stream '}' { 539 } | mod_path '!' '(' token_stream ')' ';' { 540 } token_stream :: { Int } : {- empty -} { 541 } | some(token_tree) { 542 } token_tree :: { Int } : ntTT { 543 } | '(' token_stream ')' { 544 } | '{' token_stream '}' { 545 } | '[' token_stream ']' { 546 } | token { 547 } token :: { Int } : '=' { 548 } | '<' { 549 } | '>' { 550 } | '!' { 551 } | '~' { 552 } | '-' { 553 } | '/' { 554 } | '+' { 555 } | '*' { 556 } | '%' { 557 } | '^' { 558 } | '&' { 559 } | '|' { 560 } | '<<=' { 561 } | '>>=' { 562 } | '-=' { 563 } | '&=' { 564 } | '|=' { 565 } | '+=' { 566 } | '*=' { 567 } | '/=' { 568 } | '^=' { 569 } | '%=' { 571 } | '||' { 572 } | '&&' { 573 } | '==' { 574 } | '!=' { 575 } | '<=' { 576 } | '>=' { 577 } | '<<' { 578 } | '>>' { 579 } | '@' { 580 } | '...' { 581 } | '..' { 582 } | '.' { 583 } | ',' { 584 } | ';' { 585 } | '::' { 586 } | ':' { 587 } | '->' { 588 } | '<-' { 589 } | '=>' { 590 } | '#' { 591 } | '$' { 592 } | '?' { 593 } | '#!' 
{ 594 } | byte { 595 } | char { 596 } | int { 597 } | float { 598 } | str { 599 } | byteStr { 600 } | rawStr { 601 } | rawByteStr { 602 } | as { 603 } | box { 604 } | break { 605 } | const { 606 } | continue { 607 } | crate { 608 } | else { 609 } | enum { 610 } | extern { 611 } | false { 612 } | fn { 613 } | for { 614 } | if { 615 } | impl { 616 } | in { 617 } | let { 618 } | loop { 619 } | match { 620 } | mod { 621 } | move { 622 } | mut { 623 } | pub { 624 } | ref { 625 } | return { 626 } | Self { 627 } | self { 628 } | static { 629 } | struct { 630 } | super { 631 } | trait { 632 } | true { 633 } | type { 634 } | unsafe { 635 } | use { 636 } | where { 637 } | while { 638 } | abstract { 639 } | alignof { 640 } | become { 641 } | do { 642 } | final { 643 } | macro { 644 } | offsetof { 645 } | override { 646 } | priv { 647 } | proc { 648 } | pure { 649 } | sizeof { 650 } | typeof { 651 } | unsized { 652 } | virtual { 653 } | yield { 654 } | default { 655 } | union { 656 } | catch { 657 } | outerDoc { 658 } | innerDoc { 659 } | IDENT { 660 } | '_' { 661 } | LIFETIME { 662 } export_attribute :: { Int } : inner_attribute { 663 } | outer_attribute { 664 } export_block :: { Int } : ntBlock { 665 } | safety '{' '}' { 666 } | safety '{' stmts_possibly_no_semi '}' { 667 } export_ty :: { Int } : ty { 668 } | impl_ty { 669 } { type P a = String -> Either String (a, String) bindP :: P a -> (a -> P b) -> P b bindP p f s = case p s of Left m -> Left m Right (x,s') -> f x s' returnP :: a -> P a returnP x s = Right (x,s) parseError :: Show b => b -> P a parseError b _ = Left ("Syntax error: the symbol `" ++ show b ++ "' does not fit here") data Token = Equal | Less | Greater | Ampersand | Pipe | Exclamation | Tilde | Plus | Minus | Star | Slash | Percent | Caret | GreaterEqual | GreaterGreaterEqual | AmpersandAmpersand | PipePipe | LessLess | GreaterGreater | EqualEqual | NotEqual | LessEqual | LessLessEqual | MinusEqual | AmpersandEqual | PipeEqual | PlusEqual | StarEqual | SlashEqual | CaretEqual | PercentEqual | At | Dot | DotDot | DotDotDot | Comma | Semicolon | Colon | ModSep | RArrow | LArrow | FatArrow | Pound | Dollar | Question | OpenParen | OpenBracket | OpenBrace | CloseParen | CloseBracket | CloseBrace | IdentTok String | Underscore | LifetimeTok String | Space | InnerDoc | OuterDoc | Shebang | Eof | ByteTok String | CharTok String | IntegerTok String | FloatTok String | StrTok String | StrRawTok String | ByteStrTok String | ByteStrRawTok String | Interpolated Int deriving Show -- This is an intentionally simplfied tokenizer lexNonSpace :: P Token lexNonSpace "" = Right (Eof, "") lexNonSpace ('.':cs) = Right (Dot, cs) lexNonSpace ('+':cs) = Right (Plus, cs) lexNonSpace (';':cs) = Right (Semicolon, cs) lexNonSpace (',':cs) = Right (Comma, cs) lexNonSpace ('=':cs) = Right (Equal, cs) lexNonSpace ('{':cs) = Right (OpenBrace, cs) lexNonSpace ('}':cs) = Right (CloseBrace, cs) lexNonSpace ('(':cs) = Right (OpenParen, cs) lexNonSpace (')':cs) = Right (CloseParen, cs) lexNonSpace (c:cs) | isSpace c = lexNonSpace cs | isNumber c = let (tok,cs') = span isNumber (c:cs) in Right (IntegerTok tok, cs') | isAlpha c = let (tok,cs') = span isAlphaNum (c:cs) in Right (IdentTok tok, cs') | otherwise = Left ("Unexpected character: `" ++ [c] ++ "'") main = case parseStmt "union.1 + 2;" of Right (394, "") -> pure () _ -> exitWith (ExitFailure 1) } happy-1.20.1.1/tests/issue94.y0000644000000000000000000000107407346545000014107 0ustar0000000000000000-- See for more information %name parse prod %tokentype { Token } 
%monad { P } { bindP } { returnP } %error { error "parse error" } %lexer { lexer } { EOF } %token IDENT { Identifier $$ } %% prod : IDENT { () } { data Token = EOF | Identifier String type P a = String -> (a, String) bindP :: P a -> (a -> P b) -> P b bindP p f s = let (x,s') = p s in f x s' returnP :: a -> P a returnP = (,) lexer :: (Token -> P a) -> P a lexer cont s = cont (case s of { "" -> EOF; _ -> Identifier s }) "" main = return () } happy-1.20.1.1/tests/issue95.y0000644000000000000000000000112607346545000014106 0ustar0000000000000000-- See issue #95 for more information %name parse prod %tokentype { Token } %monad { P } { bindP } { returnP } %error { error "parse error" } %lexer { lexer } { EOF } %token IDENT { Identifier $$ } %% prod :: { () } : IDENT {%% \_ -> returnP () } { data Token = EOF | Identifier String type P a = String -> (a, String) bindP :: P a -> (a -> P b) -> P b bindP p f s = let (x,s') = p s in f x s' returnP :: a -> P a returnP = (,) lexer :: (Token -> P a) -> P a lexer cont s = cont (case s of { "" -> EOF; _ -> Identifier s }) "" main = pure () } happy-1.20.1.1/tests/monad001.y0000644000000000000000000000372607346545000014127 0ustar0000000000000000-- Testing %monad without %lexer, using the IO monad. { module Main where import System.IO import Data.Char } %name calc %tokentype { Token } %token num { TokenNum $$ } '+' { TokenPlus } '-' { TokenMinus } '*' { TokenTimes } '/' { TokenDiv } '^' { TokenExp } '\n' { TokenEOL } '(' { TokenOB } ')' { TokenCB } %left '-' '+' %left '*' %nonassoc '/' %left NEG -- negation--unary minus %right '^' -- exponentiation %monad { IO } { (>>=) } { return } %% input : {- empty string -} { () } | input line { $1 } line : '\n' { () } | exp '\n' {% hPutStr stdout (show $1) } exp : num { $1 } | exp '+' exp { $1 + $3 } | exp '-' exp { $1 - $3 } | exp '*' exp { $1 * $3 } | exp '/' exp { $1 / $3 } | '-' exp %prec NEG { -$2 } -- | exp '^' exp { $1 ^ $3 } | '(' exp ')' { $2 } { main = do calc (lexer "1 + 2 * 3 / 4\n") {- -- check that non-associative operators can't be used together r <- try (calc (lexer "1 / 2 / 3")) case r of Left e -> return () Right _ -> ioError (userError "fail!") -} data Token = TokenExp | TokenEOL | TokenNum Double | TokenPlus | TokenMinus | TokenTimes | TokenDiv | TokenOB | TokenCB -- and a simple lexer that returns this datastructure. lexer :: String -> [Token] lexer [] = [] lexer ('\n':cs) = TokenEOL : lexer cs lexer (c:cs) | isSpace c = lexer cs | isDigit c = lexNum (c:cs) lexer ('+':cs) = TokenPlus : lexer cs lexer ('-':cs) = TokenMinus : lexer cs lexer ('*':cs) = TokenTimes : lexer cs lexer ('/':cs) = TokenDiv : lexer cs lexer ('^':cs) = TokenExp : lexer cs lexer ('(':cs) = TokenOB : lexer cs lexer (')':cs) = TokenCB : lexer cs lexNum cs = TokenNum (read num) : lexer rest where (num,rest) = span isNum cs isNum c = isDigit c || c == '.' happyError tokens = ioError (userError "parse error") } happy-1.20.1.1/tests/monad002.ly0000644000000000000000000001041507346545000014275 0ustar0000000000000000----------------------------------------------------------------------------- Test for monadic Happy Parsers, Simon Marlow 1996. > { > {-# OPTIONS_GHC -fglasgow-exts #-} > -- -fglasgow-exts required because P is a type synonym, and Happy uses it > -- unsaturated. 
> import Data.Char > } > %name calc > %tokentype { Token } > %monad { P } { thenP } { returnP } > %lexer { lexer } { TokenEOF } > %token > let { TokenLet } > in { TokenIn } > int { TokenInt $$ } > var { TokenVar $$ } > '=' { TokenEq } > '+' { TokenPlus } > '-' { TokenMinus } > '*' { TokenTimes } > '/' { TokenDiv } > '(' { TokenOB } > ')' { TokenCB } > %% > Exp :: {Exp} > : let var '=' Exp in Exp {% \s l -> ParseOk (Let l $2 $4 $6) } > | Exp1 { Exp1 $1 } > > Exp1 :: {Exp1} > : Exp1 '+' Term { Plus $1 $3 } > | Exp1 '-' Term { Minus $1 $3 } > | Term { Term $1 } > > Term :: {Term} > : Term '*' Factor { Times $1 $3 } > | Term '/' Factor { Div $1 $3 } > | Factor { Factor $1 } > > Factor :: {Factor} > : int { Int $1 } > | var { Var $1 } > | '(' Exp ')' { Brack $2 } > { ----------------------------------------------------------------------------- The monad serves three purposes: * it passes the input string around * it passes the current line number around * it deals with success/failure. > data ParseResult a > = ParseOk a > | ParseFail String > type P a = String -> Int -> ParseResult a > thenP :: P a -> (a -> P b) -> P b > m `thenP` k = \s l -> > case m s l of > ParseFail s -> ParseFail s > ParseOk a -> k a s l > returnP :: a -> P a > returnP a = \s l -> ParseOk a ----------------------------------------------------------------------------- Now we declare the datastructure that we are parsing. > data Exp = Let Int String Exp Exp | Exp1 Exp1 > data Exp1 = Plus Exp1 Term | Minus Exp1 Term | Term Term > data Term = Times Term Factor | Div Term Factor | Factor Factor > data Factor = Int Int | Var String | Brack Exp The datastructure for the tokens... > data Token > = TokenLet > | TokenIn > | TokenInt Int > | TokenVar String > | TokenEq > | TokenPlus > | TokenMinus > | TokenTimes > | TokenDiv > | TokenOB > | TokenCB > | TokenEOF .. and a simple lexer that returns this datastructure. > -- lexer :: (Token -> Parse) -> Parse > lexer cont s = case s of > [] -> cont TokenEOF [] > ('\n':cs) -> \line -> lexer cont cs (line+1) > (c:cs) > | isSpace c -> lexer cont cs > | isAlpha c -> lexVar (c:cs) > | isDigit c -> lexNum (c:cs) > ('=':cs) -> cont TokenEq cs > ('+':cs) -> cont TokenPlus cs > ('-':cs) -> cont TokenMinus cs > ('*':cs) -> cont TokenTimes cs > ('/':cs) -> cont TokenDiv cs > ('(':cs) -> cont TokenOB cs > (')':cs) -> cont TokenCB cs > where > lexNum cs = cont (TokenInt (read num)) rest > where (num,rest) = span isDigit cs > lexVar cs = > case span isAlpha cs of > ("let",rest) -> cont TokenLet rest > ("in",rest) -> cont TokenIn rest > (var,rest) -> cont (TokenVar var) rest > runCalc :: String -> Exp > runCalc s = case calc s 1 of > ParseOk e -> e > ParseFail s -> error s ----------------------------------------------------------------------------- The following functions should be defined for all parsers. This is the overall type of the parser. > type Parse = P Exp > calc :: Parse The next function is called when a parse error is detected. It has the same type as the top-level parse function. > -- happyError :: Parse > happyError = \s i -> error ( > "Parse error in line " ++ show (i::Int) ++ "\n") ----------------------------------------------------------------------------- Here we test our parser. 
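For reference, the first case below relies on + grouping to the left: "1 + 2 + 3" should come out as the left-nested tree that the first pattern matches. As a hedged aside (not part of the original test), note that a P action can also be written by hand, since P a is just String -> Int -> ParseResult a:

> -- Sketch only (assumed helper, unused by the tests below):
> -- an action that returns the threaded line number unchanged.
> getLineNo :: P Int
> getLineNo = \_s l -> ParseOk l
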
> main = case runCalc "1 + 2 + 3" of { > (Exp1 (Plus (Plus (Term (Factor (Int 1))) (Factor (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 * 2 + 3" of { > (Exp1 (Plus (Term (Times (Factor (Int 1)) (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 + 2 * 3" of { > (Exp1 (Plus (Term (Factor (Int 1))) (Times (Factor (Int 2)) (Int 3)))) -> > case runCalc "let x = 2 in x * (x - 2)" of { > (Let 1 "x" (Exp1 (Term (Factor (Int 2)))) (Exp1 (Term (Times (Factor (Var "x")) (Brack (Exp1 (Minus (Term (Factor (Var "x"))) (Factor (Int 2))))))))) -> print "Test works\n"; > _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } > quit = print "Test failed\n" > } happy-1.20.1.1/tests/monaderror-explist.y0000644000000000000000000000357007346545000016443 0ustar0000000000000000{ {-# LANGUAGE FunctionalDependencies, FlexibleInstances #-} module Main where import Data.Char import Control.Monad (when) import System.Exit import System.Environment (getProgName) import Data.List (isPrefixOf) } %name parseFoo %tokentype { Token } %errorhandlertype explist %error { handleErrorExpList } %monad { ParseM } { (>>=) } { return } %token 'S' { TokenSucc } 'Z' { TokenZero } 'T' { TokenTest } %% Exp : 'Z' { 0 } | 'T' 'Z' Exp { $3 + 1 } | 'S' Exp { $2 + 1 } { type ParseM a = Either ParseError a data ParseError = ParseError (Maybe (Token, [String])) | StringError String deriving (Eq,Show) instance Error ParseError where strMsg = StringError data Token = TokenSucc | TokenZero | TokenTest deriving (Eq,Show) handleErrorExpList :: ([Token], [String]) -> ParseM a handleErrorExpList ([], _) = throwError $ ParseError Nothing handleErrorExpList (ts, explist) = throwError $ ParseError $ Just $ (head ts, explist) lexer :: String -> [Token] lexer [] = [] lexer (c:cs) | isSpace c = lexer cs | c == 'S' = TokenSucc:(lexer cs) | c == 'Z' = TokenZero:(lexer cs) | c == 'T' = TokenTest:(lexer cs) | otherwise = error "lexer error" main :: IO () main = do test "Z Z" $ Left (ParseError (Just (TokenZero,[]))) test "T S" $ Left (ParseError (Just (TokenSucc,["'Z'"]))) where test inp exp = do putStrLn $ "testing " ++ inp let tokens = lexer inp when (parseFoo tokens /= exp) $ do print (parseFoo tokens) exitWith (ExitFailure 1) --- class Error a where noMsg :: a noMsg = strMsg "" strMsg :: String -> a class Monad m => MonadError e m | m -> e where throwError :: e -> m a instance MonadError e (Either e) where throwError = Left } happy-1.20.1.1/tests/monaderror.y0000644000000000000000000000263507346545000014756 0ustar0000000000000000{ {-# LANGUAGE FunctionalDependencies, FlexibleInstances #-} module Main where import Control.Monad (when) import Data.Char import System.Exit } %name parseFoo %tokentype { Token } %error { handleError } %monad { ParseM } { (>>=) } { return } %token 'S' { TokenSucc } 'Z' { TokenZero } %% Exp : 'Z' { 0 } | 'S' Exp { $2 + 1 } { type ParseM a = Either ParseError a data ParseError = ParseError (Maybe Token) | StringError String deriving (Eq,Show) instance Error ParseError where strMsg = StringError data Token = TokenSucc | TokenZero deriving (Eq,Show) handleError :: [Token] -> ParseM a handleError [] = throwError $ ParseError Nothing handleError ts = throwError $ ParseError $ Just $ head ts lexer :: String -> [Token] lexer [] = [] lexer (c:cs) | isSpace c = lexer cs | c == 'S' = TokenSucc:(lexer cs) | c == 'Z' = TokenZero:(lexer cs) | otherwise = error "lexer error" main :: IO () main = do let tokens = lexer "S S" when (parseFoo tokens /= Left (ParseError Nothing)) $ do print (parseFoo tokens) exitWith (ExitFailure 1) --- class 
Error a where noMsg :: a noMsg = strMsg "" strMsg :: String -> a class Monad m => MonadError e m | m -> e where throwError :: e -> m a instance MonadError e (Either e) where throwError = Left } happy-1.20.1.1/tests/precedence001.ly0000644000000000000000000000307707346545000015301 0ustar0000000000000000This module demonstrates a Happy bug (in version <= 1.10). > { > module Main where > import System.IO > import Control.Exception as Exception > } > > %name parse > > %tokentype { Tok } > %token > '+' { Plus } > '-' { Minus } > int { Num $$ } > > %nonassoc '+' '-' > > %% Ambiguous grammar. > E : E '+' E { Plus' $1 $3 } > | E '-' E { Minus' $1 $3 } > | int { Num' $1 } > { > happyError :: [Tok] -> a > happyError s = error (concatMap show s) > > data Tok = Plus | Minus | Num Int deriving Show > > data Syn = Plus' Syn Syn | Minus' Syn Syn | Num' Int deriving Show All the examples below should fail. None of them does so under Happy v1.8, and only the first one under Happy v1.9 and v1.10. > test1 = parse tokens1 > test2 = parse tokens2 > test3 = parse tokens3 > > tokens1 = [Num 6, Plus, Num 7, Plus, Num 8] > tokens2 = [Num 6, Plus, Num 7, Minus, Num 8] > tokens3 = [Num 6, Minus, Num 7, Minus, Num 8] The generated info files seem correct, so there is probably something wrong with the table generation. These errors only show up when one uses Happy with the -a flag (and only that flag). I know that there's no point in using just that flag, but I happened to be doing so while trying the code out with Hugs. (Hugs didn't like the code generated with GHC extensions, -gac.) > main = do > Exception.try (print test1 >> fail "Test failed.") :: IO (Either ErrorCall ()) > Exception.try (print test2 >> fail "Test failed.") :: IO (Either ErrorCall ()) > Exception.try (print test3 >> fail "Test failed.") :: IO (Either ErrorCall ()) > } happy-1.20.1.1/tests/precedence002.y0000644000000000000000000000132107346545000015114 0ustar0000000000000000-- This module demonstrates a bug in the original 1.11 release of Happy. { module Main where import System.IO import Control.Exception as Exception } %name parse %tokentype { Tok } %token '+' { Plus } '/' { Divide } int { Num $$ } %left '+' %left '*' %nonassoc '/' %% E : E '+' E { Plus' $1 $3 } | E '/' E { Divide' $1 $3 } | int { Num' $1 } { happyError :: [Tok] -> a happyError s = error (concatMap show s) data Tok = Plus | Divide | Num Int deriving Show data Syn = Plus' Syn Syn | Divide' Syn Syn | Num' Int deriving Show -- due to a bug in conflict resolution, this caused a parse error: tokens1 = [Num 6, Divide, Num 7, Plus, Num 8] main = print (parse tokens1) } happy-1.20.1.1/tests/rank2.y0000644000000000000000000000076007346545000013620 0ustar0000000000000000{ {-# LANGUAGE RankNTypes #-} {-# LANGUAGE GADTs #-} module Main where import System.IO import Data.Char } %name calc %tokentype { Token } %token tok { Token } %monad { IO } { (>>=) } { return } %% ib :: { (Int, Double, Bool) } : f n { ($1 $2, $1 $2, $1 True) } f :: { forall a. a -> a } : { id } n :: { forall a. 
Num a => a } : { 5 } { main = calc [] >>= print data Token = Token lexer :: String -> [Token] lexer _ = [] happyError tokens = ioError (userError "parse error") } happy-1.20.1.1/tests/shift01.y0000644000000000000000000000072707346545000014064 0ustar0000000000000000-- Testing the %shift directive { module Main where import System.IO import Data.Char } %expect 0 -- We must resolve the conflicts with %shift %name group_a %tokentype { Token } %token 'A' { A } %% exp : exp 'A' %shift { $1 ++ ",A" } | exp 'A' 'A' { $1 ++ ",2A" } | { "S" } { main = if group_a [A, A, A] == "S,2A,A" then return () else error "bad parse" data Token = A happyError _ = error "parse error" } happy-1.20.1.1/tests/test_rules.y0000644000000000000000000000300207346545000014764 0ustar0000000000000000{ import Control.Monad(when) import System.Exit } %monad { Maybe } { (>>=) } { return } %tokentype { Char } %token 'a' { 'a' } 'b' { 'b' } %name test1 test1 %name test2 test2 %% test1 : sepBy('a','b') { $1 } test2 : endBy('a','b') { $1 } many_rev1(p) : p { [$1] } | many_rev1(p) p { $2 : $1 } many1(p) : many_rev1(p) { reverse $1 } many(p) : many1(p) { $1 } | { [] } optional(p) : p { Just $1 } | { Nothing } sepR(p,q) : p q { $2 } sepL(p,q) : p q { $1 } sepBy1(p,q) : p many(sepR(q,p)) { $1 : $2 } sepBy(p,q) : sepBy1(p,q) { $1 } | { [] } endBy(p,q) : many (sepL(p,q)) { $1 } endBy1(p,q) : many1 (sepL(p,q)) { $1 } { happyError _ = Nothing tests = [ test1 "" == Just "" , test1 "a" == Just "a" , test1 "ab" == Nothing , test1 "aba" == Just "aa" , test1 "abab" == Nothing , test2 "" == Just "" , test2 "a" == Nothing , test2 "ab" == Just "a" , test2 "aba" == Nothing , test2 "abab" == Just "aa" ] main = do let failed = filter (not . snd) (zip [0..] tests) when (not (null failed)) $ do putStrLn ("Failed tests: " ++ show (map fst failed)) exitFailure putStrLn "Tests passed." } happy-1.20.1.1/tests/typeclass_monad001.y0000644000000000000000000000414407346545000016211 0ustar0000000000000000-- Testing %monad without %lexer, using the IO monad. { module Main where import System.IO import Data.Char } %name calc %tokentype { Token } %token num { TokenNum $$ } '+' { TokenPlus } '-' { TokenMinus } '*' { TokenTimes } '/' { TokenDiv } '^' { TokenExp } '\n' { TokenEOL } '(' { TokenOB } ')' { TokenCB } %left '-' '+' %left '*' %nonassoc '/' %left NEG -- negation--unary minus %right '^' -- exponentiation %monad { (MonadIO m) } { m } { (>>=) } { return } %% input : {- empty string -} { () } | input line { $1 } line : '\n' { () } | exp '\n' {% hPutStr stdout (show $1) } exp : num { $1 } | exp '+' exp { $1 + $3 } | exp '-' exp { $1 - $3 } | exp '*' exp { $1 * $3 } | exp '/' exp { $1 / $3 } | '-' exp %prec NEG { -$2 } -- | exp '^' exp { $1 ^ $3 } | '(' exp ')' { $2 } { main = do calc (lexer "1 + 2 * 3 / 4\n") {- -- check that non-associative operators can't be used together r <- try (calc (lexer "1 / 2 / 3")) case r of Left e -> return () Right _ -> ioError (userError "fail!") -} data Token = TokenExp | TokenEOL | TokenNum Double | TokenPlus | TokenMinus | TokenTimes | TokenDiv | TokenOB | TokenCB -- and a simple lexer that returns this datastructure. 
lexer :: String -> [Token] lexer [] = [] lexer ('\n':cs) = TokenEOL : lexer cs lexer (c:cs) | isSpace c = lexer cs | isDigit c = lexNum (c:cs) lexer ('+':cs) = TokenPlus : lexer cs lexer ('-':cs) = TokenMinus : lexer cs lexer ('*':cs) = TokenTimes : lexer cs lexer ('/':cs) = TokenDiv : lexer cs lexer ('^':cs) = TokenExp : lexer cs lexer ('(':cs) = TokenOB : lexer cs lexer (')':cs) = TokenCB : lexer cs lexNum cs = TokenNum (read num) : lexer rest where (num,rest) = span isNum cs isNum c = isDigit c || c == '.' happyError tokens = liftIO (ioError (userError "parse error")) -- vendored in parts of mtl class Monad m => MonadIO m where liftIO :: IO a -> m a instance MonadIO IO where liftIO = id } happy-1.20.1.1/tests/typeclass_monad002.ly0000644000000000000000000001172207346545000016366 0ustar0000000000000000----------------------------------------------------------------------------- Test for monadic Happy Parsers, Simon Marlow 1996. > { > {-# OPTIONS_GHC -fglasgow-exts #-} > -- -fglasgow-exts required because P is a type synonym, and Happy uses it > -- unsaturated. > import Data.Char > } > %name calc > %tokentype { Token } > %monad { (Monad m) } { P m } { thenP } { returnP } > %lexer { lexer } { TokenEOF } > %token > let { TokenLet } > in { TokenIn } > int { TokenInt $$ } > var { TokenVar $$ } > '=' { TokenEq } > '+' { TokenPlus } > '-' { TokenMinus } > '*' { TokenTimes } > '/' { TokenDiv } > '(' { TokenOB } > ')' { TokenCB } > %% > Exp :: {Exp} > : let var '=' Exp in Exp {% \s l -> return (ParseOk (Let l $2 $4 $6)) } > | Exp1 { Exp1 $1 } > > Exp1 :: {Exp1} > : Exp1 '+' Term { Plus $1 $3 } > | Exp1 '-' Term { Minus $1 $3 } > | Term { Term $1 } > > Term :: {Term} > : Term '*' Factor { Times $1 $3 } > | Term '/' Factor { Div $1 $3 } > | Factor { Factor $1 } > > Factor :: {Factor} > : int { Int $1 } > | var { Var $1 } > | '(' Exp ')' { Brack $2 } > { ----------------------------------------------------------------------------- The monad serves three purposes: * it passes the input string around * it passes the current line number around * it deals with success/failure. > data ParseResult a > = ParseOk a > | ParseFail String > type P m a = String -> Int -> m (ParseResult a) > thenP :: Monad m => P m a -> (a -> P m b) -> P m b > m `thenP` k = \s l -> > do > res <- m s l > case res of > ParseFail s -> return (ParseFail s) > ParseOk a -> k a s l > returnP :: Monad m => a -> P m a > returnP a = \s l -> return (ParseOk a) ----------------------------------------------------------------------------- Now we declare the datastructure that we are parsing. > data Exp = Let Int String Exp Exp | Exp1 Exp1 > data Exp1 = Plus Exp1 Term | Minus Exp1 Term | Term Term > data Term = Times Term Factor | Div Term Factor | Factor Factor > data Factor = Int Int | Var String | Brack Exp The datastructure for the tokens... > data Token > = TokenLet > | TokenIn > | TokenInt Int > | TokenVar String > | TokenEq > | TokenPlus > | TokenMinus > | TokenTimes > | TokenDiv > | TokenOB > | TokenCB > | TokenEOF .. and a simple lexer that returns this datastructure. 
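Before the lexer, a hedged aside (not part of the original test): in this variant P m a is String -> Int -> m (ParseResult a), so a hand-written action must also return its result in the underlying monad m.

> -- Sketch only (assumed helper, unused by the test): return the threaded
> -- line number in the underlying monad m.
> getLineNoM :: Monad m => P m Int
> getLineNoM = \_s l -> return (ParseOk l)

The lexer itself:
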
> lexer :: Monad m => (Token -> P m a) -> P m a > lexer cont s = case s of > [] -> cont TokenEOF [] > ('\n':cs) -> \line -> lexer cont cs (line+1) > (c:cs) > | isSpace c -> lexer cont cs > | isAlpha c -> lexVar (c:cs) > | isDigit c -> lexNum (c:cs) > ('=':cs) -> cont TokenEq cs > ('+':cs) -> cont TokenPlus cs > ('-':cs) -> cont TokenMinus cs > ('*':cs) -> cont TokenTimes cs > ('/':cs) -> cont TokenDiv cs > ('(':cs) -> cont TokenOB cs > (')':cs) -> cont TokenCB cs > where > lexNum cs = cont (TokenInt (read num)) rest > where (num,rest) = span isDigit cs > lexVar cs = > case span isAlpha cs of > ("let",rest) -> cont TokenLet rest > ("in",rest) -> cont TokenIn rest > (var,rest) -> cont (TokenVar var) rest > runCalc :: Monad m => String -> m Exp > runCalc s = > do > res <- calc s 1 > case res of > ParseOk e -> return e > ParseFail s -> error s ----------------------------------------------------------------------------- The following functions should be defined for all parsers. This is the overall type of the parser. > type Parse m = P m Exp > calc :: Monad m => Parse m The next function is called when a parse error is detected. It has the same type as the top-level parse function. > happyError :: P m a > happyError = \s i -> error ( > "Parse error in line " ++ show (i::Int) ++ "\n") ----------------------------------------------------------------------------- Here we test our parser. > main = > do > res <- runCalc "1 + 2 + 3" > case res of > (Exp1 (Plus (Plus (Term (Factor (Int 1))) > (Factor (Int 2))) (Factor (Int 3)))) -> > do > res <- runCalc "1 * 2 + 3" > case res of > (Exp1 (Plus (Term (Times (Factor (Int 1)) (Int 2))) > (Factor (Int 3)))) -> > do > res <- runCalc "1 + 2 * 3" > case res of > (Exp1 (Plus (Term (Factor (Int 1))) > (Times (Factor (Int 2)) (Int 3)))) -> > do > res <- runCalc "let x = 2 in x * (x - 2)" > case res of > (Let 1 "x" (Exp1 (Term (Factor (Int 2)))) > (Exp1 (Term (Times (Factor (Var "x")) > (Brack (Exp1 (Minus (Term (Factor (Var "x"))) > (Factor (Int 2))))))))) -> > print "Test works\n" > _ -> quit > _ -> quit > _ -> quit > _ -> quit > quit = print "Test failed\n" > } happy-1.20.1.1/tests/typeclass_monad_lexer.y0000644000000000000000000000764307346545000017176 0ustar0000000000000000{ {-# LANGUAGE FunctionalDependencies, FlexibleInstances, UndecidableInstances #-} import Control.Monad (liftM, ap) import Control.Applicative as A } %name parse exp %tokentype { Token } %error { parseError } %monad { (MonadIO m) } { Parser m } %lexer { lexer } { EOF } %token ID { Id _ } NUM { Num _ } PLUS { Plus } MINUS { Minus } TIMES { Times } LPAREN { LParen } RPAREN { RParen } %% exp :: { AST } : exp PLUS prod { Sum $1 $3 } | prod { $1 } prod :: { AST } : prod TIMES neg { Prod $1 $3 } | neg { $1 } neg :: { AST } : MINUS neg { Neg $2 } | atom { $1 } atom :: { AST } : ID { let Id str = $1 in Var str } | NUM { let Num n = $1 in Lit n } | LPAREN exp RPAREN { $2 } { data Token = Plus | Minus | Times | LParen | RParen | Id String | Num Int | EOF deriving (Eq, Ord, Show) data AST = Sum AST AST | Prod AST AST | Neg AST | Var String | Lit Int deriving (Eq, Ord) type Parser m = ExceptT () (Lexer m) type Lexer m = StateT [Token] m parseError :: MonadIO m => Token -> Parser m a parseError tok = do liftIO (putStrLn ("Parse error at " ++ show tok)) throwError () lexer :: MonadIO m => (Token -> Parser m a) -> Parser m a lexer cont = do toks <- get case toks of [] -> cont EOF first : rest -> do put rest cont first parse :: (MonadIO m) => Parser m AST parser :: (MonadIO m) => [Token] -> m (Maybe AST) 
parser input = let run :: (MonadIO m) => Lexer m (Maybe AST) run = do res <- runExceptT parse case res of Left () -> return Nothing Right ast -> return (Just ast) in do (out, _) <- runStateT run input return out main :: IO () main = let input = [Id "x", Plus, Minus, Num 1, Times, LParen, Num 2, Plus, Id "y", RParen] expected = Sum (Var "x") (Prod (Neg (Lit 1)) (Sum (Lit 2) (Var "y"))) in do res <- parser input case res of Nothing -> print "Test failed\n" Just actual | expected == actual -> print "Test works\n" | otherwise -> print "Test failed\n" -- vendored in parts of mtl class Monad m => MonadIO m where liftIO :: IO a -> m a instance MonadIO IO where liftIO = id class Monad m => MonadState s m | m -> s where put :: s -> m () get :: m s newtype StateT s m a = StateT { runStateT :: s -> m (a, s) } instance Monad m => Functor (StateT s m) where fmap = liftM instance Monad m => A.Applicative (StateT s m) where pure = return (<*>) = ap instance Monad m => Monad (StateT s m) where return x = StateT $ \s -> return (x, s) m >>= k = StateT $ \s0 -> do (x, s1) <- runStateT m s0 runStateT (k x) s1 instance Monad m => MonadState s (StateT s m) where put s = StateT $ \_ -> return ((), s) get = StateT $ \s -> return (s, s) instance MonadIO m => MonadIO (StateT e m) where liftIO m = StateT $ \s -> liftM (\x -> (x, s)) (liftIO m) class Monad m => MonadError e m | m -> e where throwError :: e -> m a newtype ExceptT e m a = ExceptT { runExceptT :: m (Either e a) } instance Monad m => Functor (ExceptT e m) where fmap = liftM instance Monad m => A.Applicative (ExceptT e m) where pure = return (<*>) = ap instance Monad m => Monad (ExceptT e m) where return = ExceptT . return . Right m >>= k = ExceptT $ do x <- runExceptT m case x of Left e -> return (Left e) Right y -> runExceptT (k y) instance MonadState s m => MonadState s (ExceptT e m) where put s = ExceptT (liftM Right (put s)) get = ExceptT (liftM Right get) instance MonadIO m => MonadIO (ExceptT e m) where liftIO = ExceptT . liftM Right . liftIO instance Monad m => MonadError e (ExceptT e m) where throwError = ExceptT . return . Left }