antlr-3.2/antlr3-maven-plugin/src/site/site.xml
antlr-3.2/antlr3-maven-plugin/src/site/apt/examples/simple.apt

Simple configuration

 If your grammar files are organized into the default locations as described in the
 {{{../index.html}introduction}}, then configuring the pom.xml file for your project
 is as simple as adding this to it:

+--
 org.antlr antlr3-maven-plugin 3.1.3-1 antlr ...
+--

 When the mvn command is executed, all grammar files under <<>>, except any import
 grammars under <<>>, will be analyzed and converted to Java source code in the
 output directory <<>>.

 Your input files under <<>> should be stored in subdirectories that reflect the
 package structure of your Java parsers. If your grammar file parser.g contains:

+---
 @header {
 package org.jimi.themuss;
 }
+---

 then the .g file should be stored in: <<>>. This way the generated .java files
 will correctly reflect the package structure in which they will finally rest
 as classes.
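The XML elements inside the verbatim block above were stripped during extraction; only the values (org.antlr, antlr3-maven-plugin, 3.1.3-1, antlr) survive. A minimal reconstruction of what such a configuration conventionally looks like in a pom.xml is sketched below — the element nesting is assumed from standard Maven plugin configuration, not recovered from the original page:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.antlr</groupId>
      <artifactId>antlr3-maven-plugin</artifactId>
      <version>3.1.3-1</version>
      <executions>
        <execution>
          <goals>
            <!-- the "antlr" goal generates parser sources during process-sources -->
            <goal>antlr</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```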
antlr-3.2/antlr3-maven-plugin/src/site/apt/examples/import.apt

Imported Grammar Files

 In order to have the ANTLR plugin automatically locate and use grammars used as
 imports in your main .g files, you need to place the imported grammar files in
 the imports directory beneath the root directory of your grammar files (which
 is <<>> by default, of course).

 For a default layout, place your import grammars in the directory: <<>>

antlr-3.2/antlr3-maven-plugin/src/site/apt/examples/libraries.apt

Libraries

 The introduction of the import directive in a grammar allows the reuse of common
 grammar files as well as the ability to divide up the functional components of
 large grammars. However, it has caused some confusion because the generated
 vocab files (<<>>) can also be searched for with the <<<>>> directive. This
 conflates two separate functions and, in certain cases, imposes a structure upon
 the layout of your grammar files.

 If you have grammars that both use the import directive and also require the use
 of a vocab file, then you will need to locate the grammar that generates the
 .tokens file alongside the grammar that uses it. This is because you will need
 to use the <<<>>> directive to specify the location of your imported grammars,
 and ANTLR will not find any vocab files in this directory.

 The .tokens files for any grammars are generated within the same output
 directory structure as the .java files. So, wherever the .java files are
 generated, you will also find the .tokens files. ANTLR looks for .tokens files
 in both the <<<>>> and the output directory where it is placing the generated
 .java files. Hence, when you locate the grammars that generate .tokens files in
 the same source directory as the ones that use them, the Maven plugin will find
 the expected .tokens files.
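The library directory discussed above can be pointed somewhere other than the default. The XML of the document's own example was stripped in extraction; a hedged reconstruction follows, where the element nesting is assumed from standard Maven plugin conventions and src/main/antlr_imports is the example path this document uses:

```xml
<plugin>
  <groupId>org.antlr</groupId>
  <artifactId>antlr3-maven-plugin</artifactId>
  <version>3.1.3-1</version>
  <configuration>
    <!-- where ANTLR should look for imported grammars and .tokens files -->
    <libDirectory>src/main/antlr_imports</libDirectory>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>antlr</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```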
 The <<<>>> is specified like any other directory parameter in Maven. Here is
 an example:

+--
 org.antlr antlr3-maven-plugin 3.1.3-1 antlr src/main/antlr_imports
+--

antlr-3.2/antlr3-maven-plugin/src/site/apt/usage.apt.vm

Usage

 The Maven plugin for ANTLR is simple to use, and is at its simplest when you
 use the default layouts for your grammars, like so:

+--
 src/main/
    |
    +--- antlr3/...  .g files organized in the required package structure
    |
    +--- imports/    .g files that are imported by other grammars.
+--

 However, if you are not able to use this structure for whatever reason, you can
 configure the locations of the grammar files, where library/import files are
 located and where the output files should be generated.

* Plugin Descriptor

 The current version of the plugin is shown at the top of this page after the
 <> date.

 The full layout of the descriptor (at least, those parts that are not standard
 Maven things), showing the default values of the configuration options, is as
 follows:

+--
 org.antlr antlr3-maven-plugin 3.1.3-1 antlr 10000 false false false
 src/main/antlr3/imports antlr target/generated-sources/antlr3 false false
 false src/main/antlr3 false true
+--

 Note that you can create multiple executions, and thus build some grammars with
 different options from others (such as setting the debug option, for instance).

** Configuration parameters

*** report

 If set to true, then after the tool has processed an input grammar file it will
 report various statistics about the parser, such as information on cyclic DFAs,
 which rules may use backtracking, and so on.

 default-value="false"

*** printGrammar

 If set to true, then the ANTLR tool will print a version of the input grammar
 which is devoid of any actions that may be present in the input file.

 default-value="false"

*** debug

 If set to true, then the code generated by the ANTLR code generator will be
 set to debug mode.
 This means that when run, the code will 'hang' and wait for a debug connection
 on a TCP port (49100 by default).

 default-value="false"

*** profile

 If set to true, then the generated parser will compute and report on profile
 information at runtime.

 default-value="false"

*** nfa

 If set to true, then the ANTLR tool will generate a description of the NFA for
 each rule in Dot format.

 default-value="false"

*** dfa

 If set to true, then the ANTLR tool will generate a description of the DFA for
 each decision in the grammar in Dot format.

 default-value="false"

*** trace

 If set to true, the generated parser code will log rule entry and exit points
 to stdout as an aid to debugging.

 default-value="false"

*** messageFormat

 If this parameter is set, it indicates that any warning or error messages
 returned by ANTLR should be formatted in the specified way. Currently, ANTLR
 supports the built-in formats of antlr, gnu and vs2005.

 default-value="antlr"

*** verbose

 If this parameter is set to true, then ANTLR will report all sorts of things
 about what it is doing, such as the names of files and the version of ANTLR
 and so on.

 default-value="true"

*** conversionTimeout

 The number of milliseconds ANTLR will wait for analysis of each alternative in
 the grammar to complete before giving up. You may raise this value if ANTLR
 gives up on a complicated alt and tells you that there are lots of ambiguities,
 but you know that it just needed to spend more time on it. Note that this is an
 absolute time and not CPU time.

 default-value="10000"

*** includes

 Provides an explicit list of all the grammars that should be included in the
 generate phase of the plugin. Note that the plugin is smart enough to realize
 that imported grammars should be included but not acted upon directly by the
 ANTLR Tool.

 Unless otherwise specified, the include list scans for and includes all files
 that end in ".g" in any directory beneath src/main/antlr3.
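The default scan described above can be narrowed with explicit include patterns. A hedged sketch follows — the element layout is assumed from standard Maven plugin conventions, and the org/foo/bar package path is purely illustrative:

```xml
<configuration>
  <includes>
    <!-- hypothetical: only build the grammars under the org/foo/bar package -->
    <include>org/foo/bar/*.g</include>
  </includes>
</configuration>
```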
 Note that this version of the plugin looks for the directory antlr3 and not the
 directory antlr, so as to avoid clashes and confusion for projects that use
 both v2 and v3 grammars, such as ANTLR itself.

*** excludes

 Provides an explicit list of any grammars that should be excluded from the
 generate phase of the plugin. Files listed here will not be sent for processing
 by the ANTLR tool.

*** sourceDirectory

 Specifies the ANTLR directory containing grammar files. For ANTLR version 3.x
 we default this to a directory in the tree called antlr3, because the antlr
 directory is occupied by version 2.x grammars.

 <> Take careful note that the default location for ANTLR grammars is now
 <> and NOT <>.

 default-value="<<<${basedir}/src/main/antlr3>>>"

*** outputDirectory

 Location for generated Java files. For ANTLR version 3.x we default this to a
 directory in the tree called antlr3, because the antlr directory is occupied by
 version 2.x grammars.

 default-value="<<<${project.build.directory}/generated-sources/antlr3>>>"

*** libDirectory

 Location for imported token files, e.g. .tokens files and imported grammars.
 Note that ANTLR will not try to process grammars that it finds in this
 directory, but will include this directory in the search for .tokens files and
 import grammars.

 <> If you change the lib directory from the default but the directory is still
 under <<<${basedir}/src/main/antlr3>>>, then you will need to exclude the
 grammars from processing specifically, using the <<<>>> option.
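As a concrete illustration of the note above — hedged, since both the directory layout and the element nesting are assumptions following standard Maven plugin conventions, not taken from this document — a lib directory nested under the source directory would need a matching exclude pattern:

```xml
<configuration>
  <!-- hypothetical layout: library grammars kept under the source tree -->
  <libDirectory>src/main/antlr3/lib</libDirectory>
  <excludes>
    <!-- stop the tool from processing the library grammars directly -->
    <exclude>lib/**</exclude>
  </excludes>
</configuration>
```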
 default-value="<<<${basedir}/src/main/antlr3/imports>>>"

antlr-3.2/antlr3-maven-plugin/src/site/apt/index.apt

 -------------
 ANTLR v3 Maven Plugin
 -------------
 Jim Idle
 -------------
 March 2009
 -------------

ANTLR v3 Maven plugin

 The ANTLR v3 Maven plugin is completely re-written as of version 3.1.3; if you
 are familiar with prior versions, you should note that there are some
 behavioral differences that make it worthwhile reading this documentation.

 The job of the plugin is essentially to tell the standard ANTLR parser
 generator where the input grammar files are and where the output files should
 be generated. As with all Maven plugins, there are defaults, which you are
 advised, but not forced, to follow. This version of the plugin allows full
 control over ANTLR and allows configuration of all options that are useful for
 a build system. The code required to calculate dependencies, check the build
 order, and otherwise work with your grammar files is built into the ANTLR tool
 as of version 3.1.3 of ANTLR and this plugin.

* Plugin Versioning

 The plugin version tracks the version of the ANTLR tool that it controls. Hence
 if you use version 3.1.3 of the plugin, you will build your grammars using
 version 3.1.3 of the ANTLR tool, version 3.2 of the plugin will use version 3.2
 of the ANTLR tool, and so on.

 You may also find that there are patch versions of the plugin, such as 3.1.3-1,
 3.1.3-2 and so on. Use the latest patch release of the plugin.

 The current version of the plugin is shown at the top of this page after the
 <> date.

* Default directories

 As with all Maven plugins, this plugin will automatically default to standard
 locations for your grammar and import files. Organizing your source code to
 reflect this standard layout will greatly reduce the configuration effort
 required. The standard layout looks like this:

+--
 src/main/
    |
    +--- antlr3/...
 .g files organized in the required package structure
    |
    +--- imports/  .g files that are imported by other grammars.
+--

 If your grammar is intended to be part of a package called org.foo.bar then you
 would place it in the directory <<>>. The plugin will then produce .java and
 .tokens files in the output directory <<>>. When the Java files are compiled
 they will be in the correct location for the javac compiler without any special
 configuration. The generated Java files are automatically submitted for
 compilation by the plugin.

 The <<>> directory is treated in a special way. It should contain any grammar
 files that are imported by other grammar files (do not make subdirectories
 here). Such files are never built on their own, but the plugin will
 automatically tell the ANTLR tool to look in this directory for library files.

antlr-3.2/antlr3-maven-plugin/src/main/java/org/antlr/mojo/antlr3/Antlr3ErrorLog.java

/**
 [The "BSD licence"]

 ANTLR        - Copyright (c) 2005-2008 Terence Parr
 Maven Plugin - Copyright (c) 2009      Jim Idle

 All rights reserved.
 Redistribution and use in source and binary forms, with or without
 modification, are permitted provided that the following conditions
 are met:

 1. Redistributions of source code must retain the above copyright
    notice, this list of conditions and the following disclaimer.
 2. Redistributions in binary form must reproduce the above copyright
    notice, this list of conditions and the following disclaimer in the
    documentation and/or other materials provided with the distribution.
 3. The name of the author may not be used to endorse or promote products
    derived from this software without specific prior written permission.

 THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
 IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
 IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
 INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
 NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
 THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */
package org.antlr.mojo.antlr3;

import org.antlr.tool.ANTLRErrorListener;
import org.antlr.tool.Message;
import org.antlr.tool.ToolMessage;
import org.apache.maven.plugin.logging.Log;

/**
 * The Maven plexus container gives us a Log logging provider
 * which we can use to install an error listener for the ANTLR
 * tool to report errors by.
 */
public class Antlr3ErrorLog implements ANTLRErrorListener {

    private Log log;

    /**
     * Instantiate an ANTLR ErrorListener that communicates any messages
     * it receives to the Maven error sink.
     *
     * @param log The Maven Error Log
     */
    public Antlr3ErrorLog(Log log) {
        this.log = log;
    }

    /**
     * Sends an informational message to the Maven log sink.
     *
     * @param message The message to send to Maven
     */
    public void info(String message) {
        log.info(message);
    }

    /**
     * Sends an error message from ANTLR analysis to the Maven Log sink.
     *
     * @param message The message to send to Maven.
     */
    public void error(Message message) {
        log.error(message.toString());
    }

    /**
     * Sends a warning message to the Maven log sink.
     *
     * @param message
     */
    public void warning(Message message) {
        log.warn(message.toString());
    }

    /**
     * Sends an error message from the ANTLR tool to the Maven Log sink.
     *
     * @param toolMessage
     */
    public void error(ToolMessage toolMessage) {
        log.error(toolMessage.toString());
    }
}

antlr-3.2/antlr3-maven-plugin/src/main/java/org/antlr/mojo/antlr3/Antlr3Mojo.java

/**
 [The "BSD licence"]

 ANTLR        - Copyright (c) 2005-2008 Terence Parr
 Maven Plugin - Copyright (c) 2009      Jim Idle

 All rights reserved.

 Redistribution and use in source and binary forms, with or without
 modification, are permitted provided that the following conditions
 are met:

 1. Redistributions of source code must retain the above copyright
    notice, this list of conditions and the following disclaimer.
 2. Redistributions in binary form must reproduce the above copyright
    notice, this list of conditions and the following disclaimer in the
    documentation and/or other materials provided with the distribution.
 3. The name of the author may not be used to endorse or promote products
    derived from this software without specific prior written permission.

 THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
 IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* ======================================================================== * This is the definitive ANTLR3 Mojo set. All other sets are belong to us. */ package org.antlr.mojo.antlr3; import antlr.RecognitionException; import antlr.TokenStreamException; import org.apache.maven.plugin.AbstractMojo; import org.apache.maven.plugin.MojoExecutionException; import org.apache.maven.plugin.MojoFailureException; import org.apache.maven.project.MavenProject; import java.io.File; import java.io.IOException; import java.util.Collections; import java.util.HashSet; import java.util.Set; import org.antlr.Tool; import org.apache.maven.plugin.logging.Log; import org.codehaus.plexus.compiler.util.scan.InclusionScanException; import org.codehaus.plexus.compiler.util.scan.SimpleSourceInclusionScanner; import org.codehaus.plexus.compiler.util.scan.SourceInclusionScanner; import org.codehaus.plexus.compiler.util.scan.mapping.SourceMapping; import org.codehaus.plexus.compiler.util.scan.mapping.SuffixMapping; /** * Goal that picks up all the ANTLR grammars in a project and moves those that * are required for generation of the compilable sources into the location * that we use to compile them, such as target/generated-sources/antlr3 ... * * @goal antlr * * @phase process-sources * @requiresDependencyResolution compile * @requiresProject true * * @author Jim Idle */ public class Antlr3Mojo extends AbstractMojo { // First, let's deal with the options that the ANTLR tool itself // can be configured by. 
    //

    /**
     * If set to true, then after the tool has processed an input grammar file
     * it will report various statistics about the parser, such as information
     * on cyclic DFAs, which rules may use backtracking, and so on.
     *
     * @parameter default-value="false"
     */
    protected boolean report;

    /**
     * If set to true, then the ANTLR tool will print a version of the input
     * grammar which is devoid of any actions that may be present in the input file.
     *
     * @parameter default-value="false"
     */
    protected boolean printGrammar;

    /**
     * If set to true, then the code generated by the ANTLR code generator will
     * be set to debug mode. This means that when run, the code will 'hang' and
     * wait for a debug connection on a TCP port (49100 by default).
     *
     * @parameter default-value="false"
     */
    protected boolean debug;

    /**
     * If set to true, then the generated parser will compute and report on
     * profile information at runtime.
     *
     * @parameter default-value="false"
     */
    protected boolean profile;

    /**
     * If set to true, then the ANTLR tool will generate a description of the NFA
     * for each rule in Dot format.
     *
     * @parameter default-value="false"
     */
    protected boolean nfa;

    /**
     * If set to true, then the ANTLR tool will generate a description of the DFA
     * for each decision in the grammar in Dot format.
     *
     * @parameter default-value="false"
     */
    protected boolean dfa;

    /**
     * If set to true, the generated parser code will log rule entry and exit points
     * to stdout as an aid to debugging.
     *
     * @parameter default-value="false"
     */
    protected boolean trace;

    /**
     * If this parameter is set, it indicates that any warning or error messages
     * returned by ANTLR should be formatted in the specified way. Currently,
     * ANTLR supports the built-in formats of antlr, gnu and vs2005.
     *
     * @parameter default-value="antlr"
     */
    protected String messageFormat;

    /**
     * If this parameter is set to true, then ANTLR will report all sorts of things
     * about what it is doing, such as the names of files and the version of ANTLR
     * and so on.
     *
     * @parameter default-value="true"
     */
    protected boolean verbose;

    /**
     * The number of milliseconds ANTLR will wait for analysis of each
     * alternative in the grammar to complete before giving up. You may raise
     * this value if ANTLR gives up on a complicated alt and tells you that
     * there are lots of ambiguities, but you know that it just needed to spend
     * more time on it. Note that this is an absolute time and not CPU time.
     *
     * @parameter default-value="10000"
     */
    private int conversionTimeout;

    /**
     * The number of alts, beyond which ANTLR will not generate a switch statement
     * for the DFA.
     *
     * @parameter default-value="300"
     */
    private int maxSwitchCaseLabels;

    /**
     * The number of alts, below which ANTLR will not choose to generate a switch
     * statement over an if statement.
     */
    private int minSwitchAlts;

    /* --------------------------------------------------------------------
     * The following are Maven-specific parameters, rather than specifically
     * options that the ANTLR tool can use.
     */
    /**
     * Provides an explicit list of all the grammars that should
     * be included in the generate phase of the plugin. Note that the plugin
     * is smart enough to realize that imported grammars should be included but
     * not acted upon directly by the ANTLR Tool.
     *
     * Unless otherwise specified, the include list scans for and includes all
     * files that end in ".g" in any directory beneath src/main/antlr3. Note that
     * this version of the plugin looks for the directory antlr3 and not the
     * directory antlr, so as to avoid clashes and confusion for projects that
     * use both v2 and v3 grammars, such as ANTLR itself.
     *
     * @parameter
     */
    protected Set includes = new HashSet();

    /**
     * Provides an explicit list of any grammars that should be excluded from
     * the generate phase of the plugin. Files listed here will not be sent for
     * processing by the ANTLR tool.
     *
     * @parameter
     */
    protected Set excludes = new HashSet();

    /**
     * @parameter expression="${project}"
     * @required
     * @readonly
     */
    protected MavenProject project;

    /**
     * Specifies the ANTLR directory containing grammar files. For
     * antlr version 3.x we default this to a directory in the tree
     * called antlr3 because the antlr directory is occupied by version
     * 2.x grammars.
     *
     * @parameter default-value="${basedir}/src/main/antlr3"
     * @required
     */
    private File sourceDirectory;

    /**
     * Location for generated Java files. For antlr version 3.x we default
     * this to a directory in the tree called antlr3 because the antlr
     * directory is occupied by version 2.x grammars.
     *
     * @parameter default-value="${project.build.directory}/generated-sources/antlr3"
     * @required
     */
    private File outputDirectory;

    /**
     * Location for imported token files, e.g. .tokens files and imported grammars.
     * Note that ANTLR will not try to process grammars that it finds to be imported
     * into other grammars (in the same processing session).
     *
     * @parameter default-value="${basedir}/src/main/antlr3/imports"
     */
    private File libDirectory;

    public File getSourceDirectory() {
        return sourceDirectory;
    }

    public File getOutputDirectory() {
        return outputDirectory;
    }

    public File getLibDirectory() {
        return libDirectory;
    }

    void addSourceRoot(File outputDir) {
        project.addCompileSourceRoot(outputDir.getPath());
    }

    /**
     * An instance of the ANTLR build tool
     */
    protected Tool tool;

    /**
     * The main entry point for this Mojo; it is responsible for converting
     * ANTLR 3.x grammars into the target language specified by the grammar.
     *
     * @throws org.apache.maven.plugin.MojoExecutionException When something is discovered, such as a missing source
     * @throws org.apache.maven.plugin.MojoFailureException When something really bad happens, such as not being able to create the ANTLR Tool
     */
    public void execute()
            throws MojoExecutionException, MojoFailureException {

        Log log = getLog();

        // Check to see if the user asked for debug information, then dump all the
        // parameters we have picked up if they did.
        //
        if (log.isDebugEnabled()) {

            // Excludes
            //
            for (String e : (Set<String>) excludes) {
                log.debug("ANTLR: Exclude: " + e);
            }

            // Includes
            //
            for (String e : (Set<String>) includes) {
                log.debug("ANTLR: Include: " + e);
            }

            // Output location
            //
            log.debug("ANTLR: Output: " + outputDirectory);

            // Library directory
            //
            log.debug("ANTLR: Library: " + libDirectory);

            // Flags
            //
            log.debug("ANTLR: report              : " + report);
            log.debug("ANTLR: printGrammar        : " + printGrammar);
            log.debug("ANTLR: debug               : " + debug);
            log.debug("ANTLR: profile             : " + profile);
            log.debug("ANTLR: nfa                 : " + nfa);
            log.debug("ANTLR: dfa                 : " + dfa);
            log.debug("ANTLR: trace               : " + trace);
            log.debug("ANTLR: messageFormat       : " + messageFormat);
            log.debug("ANTLR: conversionTimeout   : " + conversionTimeout);
            log.debug("ANTLR: maxSwitchCaseLabels : " + maxSwitchCaseLabels);
            log.debug("ANTLR: minSwitchAlts       : " + minSwitchAlts);
            log.debug("ANTLR: verbose             : " + verbose);
        }

        // Ensure that the output directory path is all intact so that
        // ANTLR can just write into it.
        //
        File outputDir = getOutputDirectory();

        if (!outputDir.exists()) {
            outputDir.mkdirs();
        }

        // First thing we need is an instance of the ANTLR 3.1 build tool
        //
        try {
            // ANTLR Tool build interface
            //
            tool = new Tool();
        } catch (Exception e) {
            log.error("The attempt to create the ANTLR build tool failed, see exception report for details");

            throw new MojoFailureException("Jim failed you!");
        }

        // Next we need to set the options given to us in the pom into the
        // tool instance we have created.
        //
        tool.setConversionTimeout(conversionTimeout);
        tool.setDebug(debug);
        tool.setGenerate_DFA_dot(dfa);
        tool.setGenerate_NFA_dot(nfa);
        tool.setProfile(profile);
        tool.setReport(report);
        tool.setPrintGrammar(printGrammar);
        tool.setTrace(trace);
        tool.setVerbose(verbose);
        tool.setMessageFormat(messageFormat);
        tool.setMaxSwitchCaseLabels(maxSwitchCaseLabels);
        tool.setMinSwitchAlts(minSwitchAlts);

        // Where do we want ANTLR to produce its output? (Base directory)
        //
        if (log.isDebugEnabled()) {
            log.debug("Output directory base will be " + outputDirectory.getAbsolutePath());
        }

        tool.setOutputDirectory(outputDirectory.getAbsolutePath());

        // Tell ANTLR that we always want the output files to be produced in the output directory
        // using the same relative path as the input file was to the input directory.
        //
        tool.setForceRelativeOutput(true);

        // Where do we want ANTLR to look for .tokens and import grammars?
        //
        tool.setLibDirectory(libDirectory.getAbsolutePath());

        if (!sourceDirectory.exists()) {
            if (log.isInfoEnabled()) {
                log.info("No ANTLR grammars to compile in " + sourceDirectory.getAbsolutePath());
            }
            return;
        } else {
            if (log.isInfoEnabled()) {
                log.info("ANTLR: Processing source directory " + sourceDirectory.getAbsolutePath());
            }
        }

        // Set working directory for ANTLR to be the base source directory
        //
        tool.setInputDirectory(sourceDirectory.getAbsolutePath());

        try {
            // Now pick up all the files and process them with the Tool
            //
            processGrammarFiles(sourceDirectory, outputDirectory);
        } catch (InclusionScanException ie) {
            log.error(ie);
            throw new MojoExecutionException("Fatal error occurred while evaluating the names of the grammar files to analyze");
        } catch (Exception e) {
            getLog().error(e);
            throw new MojoExecutionException(e.getMessage());
        }

        tool.process();

        // If any of the grammar files caused errors but did not throw exceptions
        // then we should have accumulated errors in the counts
        //
        if (tool.getNumErrors() > 0) {
            throw new MojoExecutionException("ANTLR caught " +
                    tool.getNumErrors() + " build errors.");
        }

        // All looks good, so we need to tell Maven about the sources that
        // we just created.
        //
        if (project != null) {
            // Tell Maven that there are some new source files underneath
            // the output directory.
            //
            addSourceRoot(this.getOutputDirectory());
        }
    }

    /**
     *
     * @param sourceDirectory
     * @param outputDirectory
     * @throws antlr.TokenStreamException
     * @throws antlr.RecognitionException
     * @throws java.io.IOException
     * @throws org.codehaus.plexus.compiler.util.scan.InclusionScanException
     */
    private void processGrammarFiles(File sourceDirectory, File outputDirectory)
            throws TokenStreamException, RecognitionException, IOException, InclusionScanException {

        // Which files under the source set should we be looking for as grammar files
        //
        SourceMapping mapping = new SuffixMapping("g", Collections.EMPTY_SET);

        // What are the sets of includes (defaulted or otherwise).
        //
        Set includes = getIncludesPatterns();

        // Now, to the excludes, we need to add the imports directory
        // as this is auto-scanned for imported grammars and so is auto-excluded from the
        // set of grammar files we should be analyzing.
        //
        excludes.add("imports/**");

        SourceInclusionScanner scan = new SimpleSourceInclusionScanner(includes, excludes);

        scan.addSourceMapping(mapping);

        Set grammarFiles = scan.getIncludedSources(sourceDirectory, null);

        if (grammarFiles.isEmpty()) {
            if (getLog().isInfoEnabled()) {
                getLog().info("No grammars to process");
            }
        } else {

            // Tell the ANTLR tool that we want sorted build mode
            //
            tool.setMake(true);

            // Iterate each grammar file we were given and add it into the tool's list of
            // grammars to process.
            //
            for (File grammar : (Set<File>) grammarFiles) {

                if (getLog().isDebugEnabled()) {
                    getLog().debug("Grammar file '" + grammar.getPath() + "' detected.");
                }

                String relPath = findSourceSubdir(sourceDirectory, grammar.getPath()) + grammar.getName();

                if (getLog().isDebugEnabled()) {
                    getLog().debug(" ...
relative path is: " + relPath); } tool.addGrammarFile(relPath); } } } public Set getIncludesPatterns() { if (includes == null || includes.isEmpty()) { return Collections.singleton("**/*.g"); } return includes; } /** * Given the source directory File object and the full PATH to a * grammar, produce the path to the named grammar file in relative * terms to the sourceDirectory. This will then allow ANTLR to * produce output relative to the base of the output directory and * reflect the input organization of the grammar files. * * @param sourceDirectory The source directory File object * @param grammarFileName The full path to the input grammar file * @return The path to the grammar file relative to the source directory */ private String findSourceSubdir(File sourceDirectory, String grammarFileName) { String srcPath = sourceDirectory.getPath() + File.separator; if (!grammarFileName.startsWith(srcPath)) { throw new IllegalArgumentException("expected " + grammarFileName + " to be prefixed with " + sourceDirectory); } File unprefixedGrammarFileName = new File(grammarFileName.substring(srcPath.length())); return unprefixedGrammarFileName.getParent() + File.separator; } } antlr-3.2/antlr3-maven-plugin/pom.xml0000644000175000017500000002732211256466522017573 0ustar twernertwerner 4.0.0 org.antlr antlr3-maven-plugin maven-plugin 3.2 Maven plugin for ANTLR V3 2.0 http://antlr.org UTF-8 This is the brand new, re-written from scratch plugin for ANTLR v3. Previous valiant efforts all suffered from being unable to modify the ANTLR Tool itself to provide support not just for Maven oriented things but any other tool that might wish to invoke ANTLR without resorting to the command line interface. 
 Rather than try to shoe-horn new code into the existing Mojo (in fact I think
 that by incorporating a patch supplied by someone I ended up with two versions
 of the Mojo), I elected to rewrite everything from scratch, including the
 documentation, so that we might end up with a perfect Mojo that can do
 everything that ANTLR v3 supports, such as imported grammar processing, proper
 support for library directories and locating token files from generated
 sources, and so on.

 In the end I decided to also change the ANTLR Tool.java code so that it would
 be the provider of all the things that a build tool needs, rather than
 delegating things to 5 different tools. So, things like dependencies,
 dependency sorting, option tracking, generating sources and so on are all
 folded back in to ANTLR's Tool.java code, where they belong, and they now
 provide a public interface to anyone that might want to interface with them.

 One other goal of this rewrite was to completely document the whole thing to
 death. Hence even this pom has more comments than functional elements, in case
 I get run over by a bus or fall off a cliff while skiing.
Jim Idle - March 2009 Jim Idle http://www.temporal-wave.com Originator, version 3.1.3 Terence Parr http://antlr.org/wiki/display/~admin/Home Project lead - ANTLR David Holroyd http://david.holroyd.me.uk/ Originator - prior version Kenny MacDermid mailto:kenny "at" kmdconsulting.ca Contributor - prior versions hudson http://antlr.org/hudson/job/Maven_Plugin/lastSuccessfulBuild/ rss http://antlr.org/hudson/job/Maven_Plugin/rssAll JIRA http://antlr.org/jira/browse/ANTLR repo The BSD License http://www.antlr.org/LICENSE.txt antlr-repo ANTLR Testing repository scpexe://antlr.org/home/mavensync/antlr-repo antlr-snapshot ANTLR Testing Snapshot Repository scpexe://antlr.org/home/mavensync/antlr-snapshot antlr-repo ANTLR Maven Plugin Web Site scpexe://antlr.org/home/mavensync/antlr-maven-webs/antlr3-maven-plugin antlr-snapshot ANTLR Testing Snapshot Repository http://antlr.org/antlr-snapshot true always false 2009 http://antlr.markmail.org/ http://www.antlr.org/pipermail/antlr-interest/ ANTLR Users http://www.antlr.org/mailman/listinfo/antlr-interest/ http://www.antlr.org/mailman/options/antlr-interest/ antlr-interest@antlr.org ANTLR.org http://www.antlr.org org.apache.maven maven-plugin-api 2.0 compile org.apache.maven maven-project 2.0 org.codehaus.plexus plexus-compiler-api 1.5.3 org.antlr antlr 3.2 junit junit 4.5 test org.apache.maven.shared maven-plugin-testing-harness 1.0 test install org.apache.maven.wagon wagon-ssh-external 1.0-beta-2 maven-compiler-plugin 2.0.2 1.5 jsr14 org.apache.maven.plugins maven-site-plugin 2.0 org.apache.maven.plugins maven-project-info-reports-plugin 2.1.1 false antlr-3.2/BUILD.txt0000644000175000017500000005214511256465172014054 0ustar twernertwerner [The "BSD licence"] Copyright (c) 2005-2008 Terence Parr Maven Plugin - Copyright (c) 2009 Jim Idle All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. 
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. The name of the author may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ============================================================================ This file contains the build instructions for the ANTLR toolset as of version 3.1.3 and beyond. The ANTLR toolset must be built using the Maven build system as this build system updates the version numbers and controls the whole build process. However, if you just want the latest build and do not care to learn anything about Maven, then visit: http://antlr.org/hudson And download the current complete jar from the Tool_Daily Hudson project (just follow the links for last successful build). 
At the time of writing, the link for the last successful snapshot build is: http://antlr.org/hudson/job/ANTLR_Tool_Daily/lastSuccessfulBuild/org.antlr$antlr/ If you are looking for the latest released version of ANTLR, then visit the downloads page on the main antlr.org website. These instructions are mainly for the ANTLR development team, though you are free to build ANTLR yourself of course. Source code Structure ----------------------- The main development branch of ANTLR is stored within the Perforce SCM at: //depot/code/antlr/main/... release branches are stored in Perforce like so: //depot/code/antlr/release-3.1.3/... In this top level directory, you will find a master build file for Maven called pom.xml and you will also note that there are a number of subdirectories: tool - The ANTLR tool itself runtime/Java - The ANTLR Java runtime runtime/X - The runtime for language target X gunit - The grammar test tool antlr3-maven-plugin - The plugin tool for Maven that allows Maven projects to process ANTLR grammars. Each of these sub-directories also contains a file pom.xml that controls the build of each sub-component (or module in Maven parlance). Build Parameters ----------------- Alongside each pom.xml (other than for the antlr3-maven-plugin), you will see that there is a file called antlr.config. This file is called a filter and should contain a set of key/value pairs in the same manner as Java properties files: antlr.something="Some config thang!" When the build of any component happens, any values in the antlr.config for the master build file and any values in the antlr.config file for each component are made available to the build. This is mainly used by the resource processor, which will filter any file it finds under: src/main/resources/** and replace any references such as ${antlr.something} with the actual value at the time of the build. 
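As a concrete sketch of that filtering (the property key antlr.something is taken from the text above; the resource file name build-info.txt is hypothetical):

```text
# antlr.config alongside the module's pom.xml
antlr.something=Some config thang!

# src/main/resources/build-info.txt before the build (hypothetical file)
Built with: ${antlr.something}

# the same file under target/classes after Maven's resource filtering
Built with: Some config thang!
```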
Building -------- Building ANTLR is trivial, assuming that you have loaded Maven version 2.0.9 or better on to your build system and installed it as explained here: http://maven.apache.org/download.html If you are unfamiliar with Maven (and even if you are), the best resource for learning about it is The Definitive Guide: http://www.sonatype.com/books/maven-book/reference/public-book.html The instructions here assume that Maven is installed and working correctly. If this is the first time you have built the ANTLR toolset, you will possibly need to install the master pom in your local repository (however the build may be able to locate this in the ANTLR snapshot or release repository). If you try to build sub-modules on their own (as in run the mvn command in the sub directory for that tool, such as runtime/Java), and you receive a message that maven cannot find the master pom, then execute this in the main (or release) directory: mvn -N install This command will install the master build pom in your local maven repository (it's ~/.m2 on UNIX) and individual builds of sub-modules will now work correctly. To build then, simply cd into the master build directory (e.g. $P4ROOT//code/antlr/main) and type: mvn -Dmaven.test.skip=true Assuming that everything is correctly installed and synchronized, then ANTLR will build and skip any unit tests in the modules (the ANTLR tool tests can take a long time). This command will build each of the tools in the correct order and will create the jar artifacts of all the components in your local development Maven repository (which takes precedence over remote repositories by default). At the end of the build you should see: [INFO] ------------------------------------------------------------------------ [INFO] Reactor Summary: [INFO] ------------------------------------------------------------------------ [INFO] ANTLR Master build control POM ........................ 
SUCCESS [1.373s] [INFO] Antlr 3 Runtime ....................................... SUCCESS [0.879s] [INFO] ANTLR Grammar Tool .................................... SUCCESS [5.431s] [INFO] Maven plugin for ANTLR V3 ............................. SUCCESS [1.277s] [INFO] ANTLR gUnit ........................................... SUCCESS [1.566s] [INFO] Maven plugin for gUnit ANTLR V3 ....................... SUCCESS [0.079s] [INFO] ------------------------------------------------------------------------ [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESSFUL [INFO] ------------------------------------------------------------------------ [INFO] Total time: 11 seconds However, unless you are using Maven exclusively in your projects, you will most likely want to build the ANTLR Uber Jar, which is an executable jar containing all the components that ANTLR needs to build and run parsers (note that at runtime, you need only the runtime components you use, such as the Java runtime and say stringtemplate). Because the Uber jar is not something we want to deploy to Maven repositories it is built with a special invocation of Maven: mvn -Dmaven.test.skip=true package assembly:assembly Note that Maven will appear to build everything twice, which is a quirk of how it calculates the dependencies and makes sure it has everything packaged up so it can build the uber-jar assembly. Somewhere in the build output (towards the end), you will find a line like this: [INFO] Building jar: /home/jimi/antlrsrc/code/antlr/main/target/antlr-master-3.1.3-SNAPSHOT-completejar.jar This is the executable jar that you need and you can either copy it somewhere or, like me, you can create this script (assuming UNIX) somewhere in your PATH: #!/bin/bash java -jar ~/antlrsrc/code/antlr/main/target/antlr-master-3.1.3-SNAPSHOT-completejar.jar $* Version Numbering ------------------- The first and Golden rule is that any pom files stored under the main branch of the toolset should never be modified to contain a release version number. They should always contain a.b.c-SNAPSHOT (e.g. 3.1.3-SNAPSHOT). Only release branches should have their pom version numbers set to a release version. You can release as many SNAPSHOTS as you like, but only one release version. However, release versions may be updated with a patch level: 3.1.3-1, 3.1.3-2 and so on. Fortunately, Maven helps us with the version numbering in a number of ways. Firstly, the pom.xml files for the various modules do not specify a version of the artifacts themselves. They pick up their version number from the master build pom. However, there is a catch, because they need to know what version of the parent pom they inherit from and so they DO mention the version number. However, this does prevent accidentally releasing different versions of sub-modules than the master pom describes. Fortunately once again, Maven has a neat way of helping us change the version. All you need do is check out all the pom.xml files from perforce, then modify the a.b.c-SNAPSHOT in the master pom. When the version number is correct in the master pom, you make sure your working directory is the location of the master pom and type: mvn versions:update-child-modules This command will then update the child pom.xml files to reflect the version number defined in the master pom.xml. There is unfortunately one last catch here though and that is that the antlr3-maven-plugin and the gunit-maven-plugin are not able to use the parent pom. 
The reason for this is subtle but makes sense as doing so would create a circular dependency between the ANTLR tool (which uses the plugin to build its own grammar files), and the plugins (which use the tool to build grammar files and gunit to test). This catch-22 situation means that the pom.xml file in the antlr3-maven-plugin directory and the one in the gunit-maven-plugin directory MUST be updated manually (or we must write a script to do this). Finally, we need to remember that because the tool is dependent on the antlr3-maven-plugin and the plugin is itself dependent on the tool, we must manually update the versions of each that they reference. So, when we bump the version of the toolset to say 3.1.4-SNAPSHOT, we need to change the antlr3-maven-plugin pom.xml and the gunit-maven-plugin pom.xml to reference that version of the antlr tool. The tool itself is always built with the prior released version of the plugin, so when we release we must change the main branch of the tool to use the newly released version of the plugin. This is covered in the release checklist. Deploying ---------- Deploying the tools at the current version is relatively easy, but to deploy to the ANTLR repositories (snapshot or release) you must have been granted access to the antlr.org server and supplied an ssh key. Few people will have this access of course. Assuming that you have ssh access to antlr.org, then you will need to do the following before deployment will authorize and work correctly (UNIX assumed here): $ eval `ssh-agent` Agent PID nnnnn $ ssh-add Enter passphrase for /home/you/.ssh/id_rsa: Identity added.... Next, because we do not publish access information for antlr.org, you will need to configure the repository server names locally. 
You do this by creating (or adding to) the file: ~/.m2/settings.xml Which should look like this: antlr-snapshot mavensync passphrase for your private key /home/youruserlogin/.ssh/id_rsa antlr-repo mavensync passphrase for your private key /home/youruserlogin/.ssh/id_rsa When this configuration is in place, you will be able to deploy the components, either individually or from the master directory: mvn -Dmaven.test.skip=true deploy You will then see lots of information about checking existing version information and so on, and the components will be deployed. Note that so long as the artifacts are versioned with a.b.c-SNAPSHOT then deployment will always be to the development snapshot directory. When the artifacts are versioned with a release version then deployment will be to the antlr.org release repository, which will then be mirrored around the world. It is important not to deploy a release until you have built and tested it to your satisfaction. Release Checklist ------------------ Here is the procedure to use to make a release of ANTLR. Note that we should really use the mvn release:release command, but the perforce plugin for Maven is not commercial quality and I want to rewrite it. For this checklist, let's assume that the current development version of ANTLR is 3.1.3-SNAPSHOT. This means that it will probably (but not necessarily) become release version 3.1.3 and that the development version will bump to 3.1.4-SNAPSHOT. 0) Run a build of the main branch and check that it builds and passes as many tests as you want it to. 1) First make a branch from main into the target release directory. Then submit this to perforce. 
You could change version numbers before submitting, but doing that in separate stages will keep things sane; --- Use main development branch from here --- 2) Before we deploy the release, we want to update the versions of the development branch, so we don't deploy what is now the new release as an older snapshot (this is not super important, but procedure is good right?). Check out all the pom.xml files (and if you are using any antlr.config parameters that must change, then do that too). 3) Edit the master pom.xml in the main directory and change the version from 3.1.3-SNAPSHOT to 3.1.4-SNAPSHOT. 4) Edit the pom.xml file for antlr3-maven-plugin under the main directory and change the version from 3.1.3-SNAPSHOT to 3.1.4-SNAPSHOT. Do the same for the pom.xml in the gunit-maven-plugin directory. 5) Now (from the main directory), run the command: mvn versions:update-child-modules You should see: [INFO] [versions:update-child-modules] [INFO] Module: gunit [INFO] Parent is org.antlr:antlr-master:3.1.4-SNAPSHOT [INFO] Module: runtime/Java [INFO] Parent is org.antlr:antlr-master:3.1.4-SNAPSHOT [INFO] Module: tool [INFO] Parent is org.antlr:antlr-master:3.1.4-SNAPSHOT 6) Run a build of the main branch: mvn -Dmaven.test.skip=true All should be good. 7) Submit the pom changes of the main branch to perforce. 8) Deploy the new snapshot as a placeholder for the next release. It will go to the snapshot repository of course: mvn -N deploy mvn -Dmaven.test.skip=true deploy 9) You are now finished with the main development branch and should change working directories to the release branch you made earlier. --- Use release branch from here --- 10) Check out all the pom.xml files in the release branch (and if you are using any antlr.config parameters that must change, then do that too). 11) Edit the master pom.xml in the release-3.1.3 directory and change the version from 3.1.3-SNAPSHOT to 3.1.3. 
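Step 11 amounts to a one-line edit in the master pom.xml, along these lines (an illustrative fragment only; the group and artifact ids are taken from the versions:update-child-modules output quoted in this document, the surrounding pom layout is assumed):

```xml
<!-- release-3.1.3/pom.xml (master pom), illustrative fragment -->
<groupId>org.antlr</groupId>
<artifactId>antlr-master</artifactId>
<version>3.1.3</version>  <!-- was 3.1.3-SNAPSHOT -->
```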
12) Edit the pom.xml file for antlr3-maven-plugin under the release-3.1.3 directory and change the version from 3.1.3-SNAPSHOT to 3.1.3. Also change the version of the tool that this pom.xml references from 3.1.3-SNAPSHOT to 3.1.3 as we are now releasing the plugin of course and it needs to reference the version we are about to release. You will find this reference in the dependencies section of the antlr3-maven-plugin pom.xml. Also change the version references in the pom for gunit-maven-plugin. 13) Now (from the release-3.1.3 directory), run the command: mvn versions:update-child-modules You should see: [INFO] [versions:update-child-modules] [INFO] Module: gunit [INFO] Parent was org.antlr:antlr-master:3.1.3-SNAPSHOT, now org.antlr:antlr-master:3.1.3 [INFO] Module: runtime/Java [INFO] Parent was org.antlr:antlr-master:3.1.3-SNAPSHOT, now org.antlr:antlr-master:3.1.3 [INFO] Module: tool [INFO] Parent was org.antlr:antlr-master:3.1.3-SNAPSHOT, now org.antlr:antlr-master:3.1.3 14) Run a build of the release-3.1.3 branch: mvn # Note I am letting unit tests run here! All should be good, or as good as it gets ;-) 15) Submit the pom changes of the release-3.1.3 branch to perforce. 16) Deploy the new release (this is it guys, make sure you are happy): mvn -N deploy mvn -Dmaven.test.skip=true deploy Note that we must skip the tests as Maven will not let you deploy releases that fail any junit tests. 17) The final step is that we must update the main branch pom.xml for the tool to reference the newly released version of the antlr3-maven-plugin. This is because each release of ANTLR is built with the prior release of ANTLR, and we have just released a new version. Edit the pom.xml for the tool (main/tool/pom.xml) under the main (that's the MAIN branch, not the release branch) and find the dependency reference to the antlr plugin. If you just released say 3.1.3, then the tool should now reference version 3.1.3 of the plugin. 
Having done this, you should probably rebuild the main branch and let it run the junit tests. Later, I will automate this dependency update as mvn can do this for us. 18) Having deployed the release to maven, you will want to create the uber jar for the new release, to make it downloadable from the antlr.org website. This is a repeat of the earlier described step to build the uber jar: mvn -Dmaven.test.skip=true package assembly:assembly Maven will produce the uber jar in the target directory: antlr-master-3.1.3-completejar.jar And this is the complete jar that can be downloaded from the web site. You may wish to produce an md5 checksum to go with the jar: md5sum target/antlr-master-3.1.3-completejar.jar xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx target/antlr-master-3.1.3-completejar.jar The command you just ran will also produce a second jar: antlr-master-3.1.3-src.jar This is the source code for everything you just deployed and can be unjarred and built from scratch using the very procedures described here, which means you will now be reading this BUILD.txt file for ever. 19) Reward anyone around you with good beer. Miscellany ----------- It was a little tricky to get all the interdependencies correct because ANTLR builds itself using itself and the maven plugin references the ANTLR Tool as well. Hence the maven tool is not a child project of the master pom.xml file, even though it is built by it. An observant person will note that when the assembly:assembly phase is run, it invokes the build of the ANTLR tool using the version of the Maven plugin that it has just built, and this results in the plugin using the version of ANTLR tool that it has just built. This is safe because everything will already be up to date and so we package up the version of the tool that we expect, but the Maven plugin we deploy will use the correct version of ANTLR, even though there is technically a circular dependency. 
The master pom.xml does give us a way to cause the build of the ANTLR tool to use itself to build itself. This is because in dependencyManagement in the master pom.xml, we can reference the current version of the Tool and the Maven plugin, even though the pom.xml for the tool itself refers to the previous version of the plugin. What happens is that if we first cd into the tool and maven directories and build ANTLR, it will build itself with the prior version and this will deploy locally (.m2). We can then clean build from the master pom and when ANTLR asks for the prior version of the tool, the master pom.xml will override it and build with the interim versions we just built manually. However, strictly speaking, we need a third build where we rebuild the tool again with the version of the tool that was built with itself and not deploy the version that was built by the version of itself that was built by a prior version of itself. I decided that this was not particularly useful and complicates things too much. Building with a prior version of the tool is fine and if there was ever a need to, we could release twice in quick succession. I have occasionally seen the Maven reactor screw up (or perhaps it is the ANTLR tool) when building. If this happens you will see an "ANTLR Panic - cannot find en.stg" message. If this happens to you, then just rerun the build and it will eventually work. 
Jim Idle - March 2009 antlr-3.2/antlrsources.xml0000644000175000017500000003006511256465172015717 0ustar twernertwerner src jar true org.antlr:gunit src src pom.xml CHANGES.txt LICENSE.txt README.txt antlr.config org.antlr:antlr-runtime runtime/Java src src pom.xml doxyfile antlr.config org.antlr:antlr tool src src pom.xml CHANGES.txt LICENSE.txt README.txt antlr.config org.antlr:antlr3-maven-plugin src src pom.xml org.antlr:maven-gunit-plugin gunit-maven-plugin src src pom.xml pom.xml antlrjar.xml antlrsources.xml BUILD.txt antlr-3.2/runtime/0000755000175000017500000000000011410174107014113 5ustar twernertwernerantlr-3.2/runtime/Java/0000755000175000017500000000000011410174107014774 5ustar twernertwernerantlr-3.2/runtime/Java/antlr.config0000644000175000017500000000000011256465214017303 0ustar twernertwernerantlr-3.2/runtime/Java/src/0000755000175000017500000000000011410174107015563 5ustar twernertwernerantlr-3.2/runtime/Java/src/main/0000755000175000017500000000000011410174107016507 5ustar twernertwernerantlr-3.2/runtime/Java/src/main/java/0000755000175000017500000000000011410174107017430 5ustar twernertwernerantlr-3.2/runtime/Java/src/main/java/org/0000755000175000017500000000000011410174107020217 5ustar twernertwernerantlr-3.2/runtime/Java/src/main/java/org/antlr/0000755000175000017500000000000011410174107021337 5ustar twernertwernerantlr-3.2/runtime/Java/src/main/java/org/antlr/runtime/0000755000175000017500000000000011410174107023022 5ustar twernertwernerantlr-3.2/runtime/Java/src/main/java/org/antlr/runtime/ClassicToken.java0000644000175000017500000000747011256465214026271 0ustar twernertwerner/* [The "BSD licence"] Copyright (c) 2005-2008 Terence Parr All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. 
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. The name of the author may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ package org.antlr.runtime; /** A Token object like we'd use in ANTLR 2.x; has an actual string created * and associated with this object. These objects are needed for imaginary * tree nodes that have payload objects. We need to create a Token object * that has a string; the tree node will point at this token. CommonToken * has indexes into a char stream and hence cannot be used to introduce * new strings. 
*/ public class ClassicToken implements Token { protected String text; protected int type; protected int line; protected int charPositionInLine; protected int channel=DEFAULT_CHANNEL; /** What token number is this from 0..n-1 tokens */ protected int index; public ClassicToken(int type) { this.type = type; } public ClassicToken(Token oldToken) { text = oldToken.getText(); type = oldToken.getType(); line = oldToken.getLine(); charPositionInLine = oldToken.getCharPositionInLine(); channel = oldToken.getChannel(); } public ClassicToken(int type, String text) { this.type = type; this.text = text; } public ClassicToken(int type, String text, int channel) { this.type = type; this.text = text; this.channel = channel; } public int getType() { return type; } public void setLine(int line) { this.line = line; } public String getText() { return text; } public void setText(String text) { this.text = text; } public int getLine() { return line; } public int getCharPositionInLine() { return charPositionInLine; } public void setCharPositionInLine(int charPositionInLine) { this.charPositionInLine = charPositionInLine; } public int getChannel() { return channel; } public void setChannel(int channel) { this.channel = channel; } public void setType(int type) { this.type = type; } public int getTokenIndex() { return index; } public void setTokenIndex(int index) { this.index = index; } public CharStream getInputStream() { return null; } public void setInputStream(CharStream input) { } public String toString() { String channelStr = ""; if ( channel>0 ) { channelStr=",channel="+channel; } String txt = getText(); if ( txt!=null ) { txt = txt.replaceAll("\n","\\\\n"); txt = txt.replaceAll("\r","\\\\r"); txt = txt.replaceAll("\t","\\\\t"); } else { txt = ""; } return "[@"+getTokenIndex()+",'"+txt+"',<"+type+">"+channelStr+","+line+":"+getCharPositionInLine()+"]"; } } antlr-3.2/runtime/Java/src/main/java/org/antlr/runtime/DFA.java0000644000175000017500000001660511256465214024301 0ustar 
twernertwerner/* [The "BSD licence"] Copyright (c) 2005-2008 Terence Parr All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. The name of the author may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ package org.antlr.runtime; /** A DFA implemented as a set of transition tables. * * Any state that has a semantic predicate edge is special; those states * are generated with if-then-else structures in a specialStateTransition() * which is generated by cyclicDFA template. * * There are at most 32767 states (16-bit signed short). * Could get away with byte sometimes but would have to generate different * types and the simulation code too. For a point of reference, the Java * lexer's Tokens rule DFA has 326 states roughly. 
*/ public class DFA { protected short[] eot; protected short[] eof; protected char[] min; protected char[] max; protected short[] accept; protected short[] special; protected short[][] transition; protected int decisionNumber; /** Which recognizer encloses this DFA? Needed to check backtracking */ protected BaseRecognizer recognizer; public static final boolean debug = false; /** From the input stream, predict what alternative will succeed * using this DFA (representing the covering regular approximation * to the underlying CFL). Return an alternative number 1..n. Throw * an exception upon error. */ public int predict(IntStream input) throws RecognitionException { if ( debug ) { System.err.println("Enter DFA.predict for decision "+decisionNumber); } int mark = input.mark(); // remember where decision started in input int s = 0; // we always start at s0 try { while ( true ) { if ( debug ) System.err.println("DFA "+decisionNumber+" state "+s+" LA(1)="+(char)input.LA(1)+"("+input.LA(1)+ "), index="+input.index()); int specialState = special[s]; if ( specialState>=0 ) { if ( debug ) { System.err.println("DFA "+decisionNumber+ " state "+s+" is special state "+specialState); } s = specialStateTransition(specialState,input); if ( debug ) { System.err.println("DFA "+decisionNumber+ " returns from special state "+specialState+" to "+s); } if ( s==-1 ) { noViableAlt(s,input); return 0; } input.consume(); continue; } if ( accept[s] >= 1 ) { if ( debug ) System.err.println("accept; predict "+accept[s]+" from state "+s); return accept[s]; } // look for a normal char transition char c = (char)input.LA(1); // -1 == \uFFFF, all tokens fit in 65000 space if (c>=min[s] && c<=max[s]) { int snext = transition[s][c-min[s]]; // move to next state if ( snext < 0 ) { // was in range but not a normal transition // must check EOT, which is like the else clause. // eot[s]>=0 indicates that an EOT edge goes to another // state. if ( eot[s]>=0 ) { // EOT Transition to accept state? 
if ( debug ) System.err.println("EOT transition"); s = eot[s]; input.consume(); // TODO: I had this as return accept[eot[s]] // which assumed here that the EOT edge always // went to an accept...faster to do this, but // what about predicated edges coming from EOT // target? continue; } noViableAlt(s,input); return 0; } s = snext; input.consume(); continue; } if ( eot[s]>=0 ) { // EOT Transition? if ( debug ) System.err.println("EOT transition"); s = eot[s]; input.consume(); continue; } if ( c==(char)Token.EOF && eof[s]>=0 ) { // EOF Transition to accept state? if ( debug ) System.err.println("accept via EOF; predict "+accept[eof[s]]+" from "+eof[s]); return accept[eof[s]]; } // not in range and not EOF/EOT, must be invalid symbol if ( debug ) { System.err.println("min["+s+"]="+min[s]); System.err.println("max["+s+"]="+max[s]); System.err.println("eot["+s+"]="+eot[s]); System.err.println("eof["+s+"]="+eof[s]); for (int p=0; p0) { recognizer.state.failed=true; return; } NoViableAltException nvae = new NoViableAltException(getDescription(), decisionNumber, s, input); error(nvae); throw nvae; } /** A hook for debugging interface */ protected void error(NoViableAltException nvae) { ; } public int specialStateTransition(int s, IntStream input) throws NoViableAltException { return -1; } public String getDescription() { return "n/a"; } /** Given a String that has a run-length-encoding of some unsigned shorts * like "\1\2\3\9", convert to short[] {2,9,9,9}. We do this to avoid * static short[] which generates so much init code that the class won't * compile. :( */ public static short[] unpackEncodedString(String encodedString) { // walk first to find how big it is. 
int size = 0; for (int i=0; i= n ) { //System.out.println("char LA("+i+")=EOF; p="+p); return CharStream.EOF; } //System.out.println("char LA("+i+")="+(char)data[p+i-1]+"; p="+p); //System.out.println("LA("+i+"); p="+p+" n="+n+" data.length="+data.length); return data[p+i-1]; } public int LT(int i) { return LA(i); } /** Return the current input symbol index 0..n where n indicates the * last symbol has been read. The index is the index of char to * be returned from LA(1). */ public int index() { return p; } public int size() { return n; } public int mark() { if ( markers==null ) { markers = new ArrayList(); markers.add(null); // depth 0 means no backtracking, leave blank } markDepth++; CharStreamState state = null; if ( markDepth>=markers.size() ) { state = new CharStreamState(); markers.add(state); } else { state = (CharStreamState)markers.get(markDepth); } state.p = p; state.line = line; state.charPositionInLine = charPositionInLine; lastMarker = markDepth; return markDepth; } public void rewind(int m) { CharStreamState state = (CharStreamState)markers.get(m); // restore stream state seek(state.p); line = state.line; charPositionInLine = state.charPositionInLine; release(m); } public void rewind() { rewind(lastMarker); } public void release(int marker) { // unwind any other markers made after m and release m markDepth = marker; // release this marker markDepth--; } /** consume() ahead until p==index; can't just set p=index as we must * update line and charPositionInLine. */ public void seek(int index) { if ( index<=p ) { p = index; // just jump; don't update stream state (line, ...) return; } // seek forward, consume until p hits index while ( p { /** dynamically-sized buffer of elements */ protected List data = new ArrayList(); /** index of next element to fill */ protected int p = 0; public void reset() { p = 0; data.clear(); } /** Get and remove first element in queue */ public T remove() { T o = get(0); p++; // have we hit end of buffer? 
if ( p == data.size() ) { // if so, it's an opportunity to start filling at index 0 again clear(); // size goes to 0, but retains memory } return o; } public void add(T o) { data.add(o); } public int size() { return data.size() - p; } public T head() { return get(0); } /** Return element i elements ahead of current element. i==0 gets * current element. This is not an absolute index into the data list * since p defines the start of the real list. */ public T get(int i) { if ( p+i >= data.size() ) { throw new NoSuchElementException("queue index "+(p+i)+" > size "+data.size()); } return data.get(p+i); } public void clear() { p = 0; data.clear(); } /** Return string of current buffer contents; non-destructive */ public String toString() { StringBuffer buf = new StringBuffer(); int n = size(); for (int i=0; i extends FastQueue { public static final int UNINITIALIZED_EOF_ELEMENT_INDEX = Integer.MAX_VALUE; /** Set to buffer index of eof when nextElement returns eof */ protected int eofElementIndex = UNINITIALIZED_EOF_ELEMENT_INDEX; /** Returned by nextElement upon end of stream; we add to buffer also */ public T eof = null; /** Track the last mark() call result value for use in rewind(). */ protected int lastMarker; /** tracks how deep mark() calls are nested */ protected int markDepth = 0; public LookaheadStream(T eof) { this.eof = eof; } public void reset() { eofElementIndex = UNINITIALIZED_EOF_ELEMENT_INDEX; super.reset(); } /** Implement nextElement to supply a stream of elements to this * lookahead buffer. Return eof upon end of the stream we're pulling from. */ public abstract T nextElement(); /** Get and remove first element in queue; override FastQueue.remove() */ public T remove() { T o = get(0); p++; // have we hit end of buffer and not backtracking? 
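The FastQueue idea shown above, dequeueing by bumping an index p instead of shifting elements, can be sketched as a standalone class (the class name here is illustrative, not the runtime's):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NoSuchElementException;

// Minimal standalone version of the FastQueue idea: remove() is O(1)
// because it only advances p; memory is reclaimed when fully drained.
public class FastQueueDemo<T> {
    private final List<T> data = new ArrayList<T>();
    private int p = 0; // index of next element to remove

    public void add(T o) { data.add(o); }

    /** Element i ahead of the current element; i==0 is the head. */
    public T get(int i) {
        if (p + i >= data.size()) {
            throw new NoSuchElementException("queue index " + (p + i) + " > size " + data.size());
        }
        return data.get(p + i);
    }

    public T remove() {
        T o = get(0);
        p++;
        if (p == data.size()) { p = 0; data.clear(); } // drained: cheap reset
        return o;
    }

    public int size() { return data.size() - p; }

    public static void main(String[] args) {
        FastQueueDemo<String> q = new FastQueueDemo<String>();
        q.add("a"); q.add("b"); q.add("c");
        System.out.println(q.remove() + q.remove()); // ab
        System.out.println(q.size());                // 1
    }
}
```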
if ( p == data.size() && markDepth==0 ) { // if so, it's an opportunity to start filling at index 0 again clear(); // size goes to 0, but retains memory } return o; } /** Make sure we have at least one element to remove, even if EOF */ public void consume() { sync(1); remove(); } /** Make sure we have 'need' elements from current position p. Last valid * p index is data.size()-1. p+need-1 is the data index 'need' elements * ahead. If we need 1 element, (p+1-1)==p must be < data.size(). */ public void sync(int need) { int n = (p+need-1) - data.size() + 1; // how many more elements we need? if ( n > 0 ) fill(n); // out of elements? } /** add n elements to buffer */ public void fill(int n) { for (int i=1; i<=n; i++) { T o = nextElement(); if ( o==eof ) { data.add(eof); eofElementIndex = data.size()-1; } else data.add(o); } } //public boolean hasNext() { return eofElementIndex!=UNINITIALIZED_EOF_ELEMENT_INDEX; } /** Size of entire stream is unknown; we only know buffer size from FastQueue */ public int size() { throw new UnsupportedOperationException("streams are of unknown size"); } public Object LT(int k) { if ( k==0 ) { return null; } if ( k<0 ) { return LB(-k); } //System.out.print("LT(p="+p+","+k+")="); if ( (p+k-1) >= eofElementIndex ) { // move to super.LT return eof; } sync(k); return get(k-1); } /** Look backwards k nodes */ protected Object LB(int k) { if ( k==0 ) { return null; } if ( (p-k)<0 ) { return null; } return get(-k); } public Object getCurrentSymbol() { return LT(1); } public int index() { return p; } public int mark() { markDepth++; lastMarker = index(); return lastMarker; } public void release(int marker) { // no resources to release } public void rewind(int marker) { markDepth--; seek(marker); // assume marker is top // release(marker); // waste of call; it does nothing in this class } public void rewind() { seek(lastMarker); // rewind but do not release marker } /** Seek to a 0-indexed position within data buffer. 
Can't handle * case where you seek beyond end of existing buffer. Normally used * to seek backwards in the buffer. Does not force loading of nodes. */ public void seek(int index) { p = index; } }antlr-3.2/runtime/Java/src/main/java/org/antlr/runtime/misc/IntArray.java0000644000175000017500000000550011256465214026363 0ustar twernertwerner/* [The "BSD licence"] Copyright (c) 2005-2008 Terence Parr All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. The name of the author may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ package org.antlr.runtime.misc; /** A dynamic array that uses int not Integer objects. In principle this * is more efficient in time, but certainly in space. 
 *
 *  This is simple enough that you can access the data array directly,
 *  but make sure that you append elements only with add() so that you
 *  get dynamic sizing. Make sure to call ensureCapacity() when you are
 *  manually adding new elements.
 *
 *  Doesn't impl List because it doesn't return objects and I mean this
 *  really as just an array not a List per se. Manipulate the elements
 *  at will. This has stack methods too.
 *
 *  When runtime can be 1.5, I'll make this generic.
 */
public class IntArray {
	public static final int INITIAL_SIZE = 10;
	public int[] data;
	protected int p = -1;

	public void add(int v) {
		ensureCapacity(p+1);
		data[++p] = v;
	}

	public void push(int v) {
		add(v);
	}

	public int pop() {
		int v = data[p];
		p--;
		return v;
	}

	/** This only tracks elements added via push/add. */
	public int size() {
		return p;
	}

	public void clear() {
		p = -1;
	}

	public void ensureCapacity(int index) {
		if ( data==null ) {
			data = new int[INITIAL_SIZE];
		}
		else if ( (index+1)>=data.length ) {
			int newSize = data.length*2;
			if ( index>newSize ) {
				newSize = index+1;
			}
			int[] newData = new int[newSize];
			System.arraycopy(data, 0, newData, 0, data.length);
			data = newData;
		}
	}
}

antlr-3.2/runtime/Java/src/main/java/org/antlr/runtime/misc/Stats.java

package org.antlr.runtime.misc;

import java.io.*;

/** Stats routines needed by profiler etc...

    // note that these routines return 0.0 if no values exist in the X[]
    // which is not "correct", but it is useful so I don't generate NaN
    // in my output
 */
public class Stats {
	public static final String ANTLRWORKS_DIR = "antlrworks";

	/** Compute the sample (unbiased estimator) standard deviation following:
	 *
	 *  Computing Standard Deviations: Accuracy
	 *  Tony F. Chan and John Gregg Lewis
	 *  Stanford University
	 *  Communications of the ACM, September 1979, Volume 22, Number 9
	 *
	 *  The "two-pass" method from the paper; supposed to have better
	 *  numerical properties than the textbook summation/sqrt. To me
	 *  this looks like the textbook method, but I ain't no numerical
	 *  methods guy.
*/ public static double stddev(int[] X) { int m = X.length; if ( m<=1 ) { return 0; } double xbar = avg(X); double s2 = 0.0; for (int i=0; i=0.0 ) { return xbar / m; } return 0.0; } public static int min(int[] X) { int min = Integer.MAX_VALUE; int m = X.length; if ( m==0 ) { return 0; } for (int i=0; i max ) { max = X[i]; } } return max; } public static int sum(int[] X) { int s = 0; int m = X.length; if ( m==0 ) { return 0; } for (int i=0; i"; } } class InsertBeforeOp extends RewriteOperation { public InsertBeforeOp(int index, Object text) { super(index,text); } public int execute(StringBuffer buf) { buf.append(text); buf.append(((Token)tokens.get(index)).getText()); return index+1; } } /** I'm going to try replacing range from x..y with (y-x)+1 ReplaceOp * instructions. */ class ReplaceOp extends RewriteOperation { protected int lastIndex; public ReplaceOp(int from, int to, Object text) { super(from,text); lastIndex = to; } public int execute(StringBuffer buf) { if ( text!=null ) { buf.append(text); } return lastIndex+1; } public String toString() { return ""; } } class DeleteOp extends ReplaceOp { public DeleteOp(int from, int to) { super(from, to, null); } public String toString() { return ""; } } /** You may have multiple, named streams of rewrite operations. * I'm calling these things "programs." 
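The idea behind TokenRewriteStream's named "programs", buffer tokens untouched, queue rewrite instructions per program, and apply them only when rendering, can be modeled in miniature. This is a toy model, not the ANTLR API: token lists, the single combine rule shown, and all names are illustrative.

```java
import java.util.*;

// Toy model of deferred, named rewrite programs: instructions are queued
// per program name and executed only in toString(). Later inserts at the
// same index are combined so their text precedes earlier inserts, which
// mirrors TokenRewriteStream's insert-combining rule.
public class RewriteDemo {
    private final List<String> tokens;
    private final Map<String, Map<Integer, String>> programs = new HashMap<>();

    public RewriteDemo(List<String> tokens) { this.tokens = tokens; }

    public void insertBefore(String program, int index, String text) {
        programs.computeIfAbsent(program, k -> new TreeMap<>())
                .merge(index, text, (older, newer) -> newer + older);
    }

    public String toString(String program) {
        Map<Integer, String> ops = programs.getOrDefault(program, Collections.emptyMap());
        StringBuilder buf = new StringBuilder();
        for (int i = 0; i < tokens.size(); i++) {
            if (ops.containsKey(i)) buf.append(ops.get(i)); // execute insert, then emit token
            buf.append(tokens.get(i));
        }
        return buf.toString();
    }

    public static void main(String[] args) {
        RewriteDemo r = new RewriteDemo(Arrays.asList("x", "=", "1", ";"));
        r.insertBefore("default", 0, "int ");
        System.out.println(r.toString("default")); // int x=1;
    }
}
```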
* Maps String (name) -> rewrite (List) */ protected Map programs = null; /** Map String (program name) -> Integer index */ protected Map lastRewriteTokenIndexes = null; public TokenRewriteStream() { init(); } protected void init() { programs = new HashMap(); programs.put(DEFAULT_PROGRAM_NAME, new ArrayList(PROGRAM_INIT_SIZE)); lastRewriteTokenIndexes = new HashMap(); } public TokenRewriteStream(TokenSource tokenSource) { super(tokenSource); init(); } public TokenRewriteStream(TokenSource tokenSource, int channel) { super(tokenSource, channel); init(); } public void rollback(int instructionIndex) { rollback(DEFAULT_PROGRAM_NAME, instructionIndex); } /** Rollback the instruction stream for a program so that * the indicated instruction (via instructionIndex) is no * longer in the stream. UNTESTED! */ public void rollback(String programName, int instructionIndex) { List is = (List)programs.get(programName); if ( is!=null ) { programs.put(programName, is.subList(MIN_TOKEN_INDEX,instructionIndex)); } } public void deleteProgram() { deleteProgram(DEFAULT_PROGRAM_NAME); } /** Reset the program so that no instructions exist */ public void deleteProgram(String programName) { rollback(programName, MIN_TOKEN_INDEX); } public void insertAfter(Token t, Object text) { insertAfter(DEFAULT_PROGRAM_NAME, t, text); } public void insertAfter(int index, Object text) { insertAfter(DEFAULT_PROGRAM_NAME, index, text); } public void insertAfter(String programName, Token t, Object text) { insertAfter(programName,t.getTokenIndex(), text); } public void insertAfter(String programName, int index, Object text) { // to insert after, just insert before next index (even if past end) insertBefore(programName,index+1, text); //addToSortedRewriteList(programName, new InsertAfterOp(index,text)); } public void insertBefore(Token t, Object text) { insertBefore(DEFAULT_PROGRAM_NAME, t, text); } public void insertBefore(int index, Object text) { insertBefore(DEFAULT_PROGRAM_NAME, index, text); } public 
void insertBefore(String programName, Token t, Object text) { insertBefore(programName, t.getTokenIndex(), text); } public void insertBefore(String programName, int index, Object text) { //addToSortedRewriteList(programName, new InsertBeforeOp(index,text)); RewriteOperation op = new InsertBeforeOp(index,text); List rewrites = getProgram(programName); op.instructionIndex = rewrites.size(); rewrites.add(op); } public void replace(int index, Object text) { replace(DEFAULT_PROGRAM_NAME, index, index, text); } public void replace(int from, int to, Object text) { replace(DEFAULT_PROGRAM_NAME, from, to, text); } public void replace(Token indexT, Object text) { replace(DEFAULT_PROGRAM_NAME, indexT, indexT, text); } public void replace(Token from, Token to, Object text) { replace(DEFAULT_PROGRAM_NAME, from, to, text); } public void replace(String programName, int from, int to, Object text) { if ( from > to || from<0 || to<0 || to >= tokens.size() ) { throw new IllegalArgumentException("replace: range invalid: "+from+".."+to+"(size="+tokens.size()+")"); } RewriteOperation op = new ReplaceOp(from, to, text); List rewrites = getProgram(programName); op.instructionIndex = rewrites.size(); rewrites.add(op); } public void replace(String programName, Token from, Token to, Object text) { replace(programName, from.getTokenIndex(), to.getTokenIndex(), text); } public void delete(int index) { delete(DEFAULT_PROGRAM_NAME, index, index); } public void delete(int from, int to) { delete(DEFAULT_PROGRAM_NAME, from, to); } public void delete(Token indexT) { delete(DEFAULT_PROGRAM_NAME, indexT, indexT); } public void delete(Token from, Token to) { delete(DEFAULT_PROGRAM_NAME, from, to); } public void delete(String programName, int from, int to) { replace(programName,from,to,null); } public void delete(String programName, Token from, Token to) { replace(programName,from,to,null); } public int getLastRewriteTokenIndex() { return getLastRewriteTokenIndex(DEFAULT_PROGRAM_NAME); } protected int 
getLastRewriteTokenIndex(String programName) { Integer I = (Integer)lastRewriteTokenIndexes.get(programName); if ( I==null ) { return -1; } return I.intValue(); } protected void setLastRewriteTokenIndex(String programName, int i) { lastRewriteTokenIndexes.put(programName, new Integer(i)); } protected List getProgram(String name) { List is = (List)programs.get(name); if ( is==null ) { is = initializeProgram(name); } return is; } private List initializeProgram(String name) { List is = new ArrayList(PROGRAM_INIT_SIZE); programs.put(name, is); return is; } public String toOriginalString() { return toOriginalString(MIN_TOKEN_INDEX, size()-1); } public String toOriginalString(int start, int end) { StringBuffer buf = new StringBuffer(); for (int i=start; i>=MIN_TOKEN_INDEX && i<=end && itokens.size()-1 ) end = tokens.size()-1; if ( start<0 ) start = 0; if ( rewrites==null || rewrites.size()==0 ) { return toOriginalString(start,end); // no instructions to execute } StringBuffer buf = new StringBuffer(); // First, optimize instruction stream Map indexToOp = reduceToSingleOperationPerIndex(rewrites); // Walk buffer, executing instructions and emitting tokens int i = start; while ( i <= end && i < tokens.size() ) { RewriteOperation op = (RewriteOperation)indexToOp.get(new Integer(i)); indexToOp.remove(new Integer(i)); // remove so any left have index size-1 Token t = (Token) tokens.get(i); if ( op==null ) { // no operation at that index, just dump token buf.append(t.getText()); i++; // move to next token } else { i = op.execute(buf); // execute operation and skip } } // include stuff after end if it's last index in buffer // So, if they did an insertAfter(lastValidIndex, "foo"), include // foo if end==lastValidIndex. if ( end==tokens.size()-1 ) { // Scan any remaining operations after last token // should be included (they will be inserts). 
Iterator it = indexToOp.values().iterator();
			while (it.hasNext()) {
				RewriteOperation op = (RewriteOperation)it.next();
				if ( op.index >= tokens.size()-1 ) buf.append(op.text);
			}
		}
		return buf.toString();
	}

	/** We need to combine operations and report invalid operations (like
	 *  overlapping replaces that are not completely nested). Inserts to
	 *  the same index need to be combined etc... Here are the cases:
	 *
	 *  I.i.u I.j.v								leave alone, nonoverlapping
	 *  I.i.u I.i.v								combine: Iivu
	 *
	 *  R.i-j.u R.x-y.v	| i-j in x-y			delete first R
	 *  R.i-j.u R.i-j.v							delete first R
	 *  R.i-j.u R.x-y.v	| x-y in i-j			ERROR
	 *  R.i-j.u R.x-y.v	| boundaries overlap	ERROR
	 *
	 *  I.i.u R.x-y.v	| i in x-y				delete I
	 *  I.i.u R.x-y.v	| i not in x-y			leave alone, nonoverlapping
	 *  R.x-y.v I.i.u	| i in x-y				ERROR
	 *  R.x-y.v I.x.u							R.x-y.uv (combine, delete I)
	 *  R.x-y.v I.i.u	| i not in x-y			leave alone, nonoverlapping
	 *
	 *  I.i.u  = insert u before op @ index i
	 *  R.x-y.u = replace x-y indexed tokens with u
	 *
	 *  First we need to examine replaces. For any replace op:
	 *
	 *		1. wipe out any insertions before op within that range.
	 *		2. Drop any replace op before that is contained completely within
	 *		   that range.
	 *		3. Throw exception upon boundary overlap with any previous replace.
	 *
	 *  Then we can deal with inserts:
	 *
	 *		1. for any inserts to same index, combine even if not adjacent.
	 *		2. for any prior replace with same left boundary, combine this
	 *		   insert with replace and delete this replace.
	 *		3. throw exception if index in same range as previous replace
	 *
	 *  Don't actually delete; make op null in list. Easier to walk list.
	 *  Later we can throw as we add to index -> op map.
	 *
	 *  Note that I.2 R.2-2 will wipe out I.2 even though, technically, the
	 *  inserted stuff would be before the replace range. But, if you
	 *  add tokens in front of a method body '{' and then delete the method
	 *  body, I think the stuff before the '{' you added should disappear too.
	 *
	 *  Return a map from token index to operation.
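The replace-vs-replace rules above boil down to a "disjoint or identical, else error" check on index ranges (after contained ops have been dropped). A minimal sketch of that predicate, with illustrative names, not ANTLR code:

```java
// Two replace ranges [aFrom..aTo] and [bFrom..bTo] may coexist only if
// they are disjoint or exactly identical; any partial overlap is an error.
public class OverlapCheckDemo {
    static boolean disjoint(int aFrom, int aTo, int bFrom, int bTo) {
        return aTo < bFrom || aFrom > bTo;
    }
    static boolean same(int aFrom, int aTo, int bFrom, int bTo) {
        return aFrom == bFrom && aTo == bTo;
    }
    static boolean legal(int aFrom, int aTo, int bFrom, int bTo) {
        // fully-contained earlier ops are dropped before this check runs
        return disjoint(aFrom, aTo, bFrom, bTo) || same(aFrom, aTo, bFrom, bTo);
    }
    public static void main(String[] args) {
        System.out.println(legal(0, 2, 3, 5)); // true: disjoint
        System.out.println(legal(0, 3, 2, 5)); // false: boundaries overlap
    }
}
```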
*/ protected Map reduceToSingleOperationPerIndex(List rewrites) { //System.out.println("rewrites="+rewrites); // WALK REPLACES for (int i = 0; i < rewrites.size(); i++) { RewriteOperation op = (RewriteOperation)rewrites.get(i); if ( op==null ) continue; if ( !(op instanceof ReplaceOp) ) continue; ReplaceOp rop = (ReplaceOp)rewrites.get(i); // Wipe prior inserts within range List inserts = getKindOfOps(rewrites, InsertBeforeOp.class, i); for (int j = 0; j < inserts.size(); j++) { InsertBeforeOp iop = (InsertBeforeOp) inserts.get(j); if ( iop.index >= rop.index && iop.index <= rop.lastIndex ) { // delete insert as it's a no-op. rewrites.set(iop.instructionIndex, null); } } // Drop any prior replaces contained within List prevReplaces = getKindOfOps(rewrites, ReplaceOp.class, i); for (int j = 0; j < prevReplaces.size(); j++) { ReplaceOp prevRop = (ReplaceOp) prevReplaces.get(j); if ( prevRop.index>=rop.index && prevRop.lastIndex <= rop.lastIndex ) { // delete replace as it's a no-op. rewrites.set(prevRop.instructionIndex, null); continue; } // throw exception unless disjoint or identical boolean disjoint = prevRop.lastIndex rop.lastIndex; boolean same = prevRop.index==rop.index && prevRop.lastIndex==rop.lastIndex; if ( !disjoint && !same ) { throw new IllegalArgumentException("replace op boundaries of "+rop+ " overlap with previous "+prevRop); } } } // WALK INSERTS for (int i = 0; i < rewrites.size(); i++) { RewriteOperation op = (RewriteOperation)rewrites.get(i); if ( op==null ) continue; if ( !(op instanceof InsertBeforeOp) ) continue; InsertBeforeOp iop = (InsertBeforeOp)rewrites.get(i); // combine current insert with prior if any at same index List prevInserts = getKindOfOps(rewrites, InsertBeforeOp.class, i); for (int j = 0; j < prevInserts.size(); j++) { InsertBeforeOp prevIop = (InsertBeforeOp) prevInserts.get(j); if ( prevIop.index == iop.index ) { // combine objects // convert to strings...we're in process of toString'ing // whole token buffer so no lazy eval 
issue with any templates iop.text = catOpText(iop.text,prevIop.text); // delete redundant prior insert rewrites.set(prevIop.instructionIndex, null); } } // look for replaces where iop.index is in range; error List prevReplaces = getKindOfOps(rewrites, ReplaceOp.class, i); for (int j = 0; j < prevReplaces.size(); j++) { ReplaceOp rop = (ReplaceOp) prevReplaces.get(j); if ( iop.index == rop.index ) { rop.text = catOpText(iop.text,rop.text); rewrites.set(i, null); // delete current insert continue; } if ( iop.index >= rop.index && iop.index <= rop.lastIndex ) { throw new IllegalArgumentException("insert op "+iop+ " within boundaries of previous "+rop); } } } // System.out.println("rewrites after="+rewrites); Map m = new HashMap(); for (int i = 0; i < rewrites.size(); i++) { RewriteOperation op = (RewriteOperation)rewrites.get(i); if ( op==null ) continue; // ignore deleted ops if ( m.get(new Integer(op.index))!=null ) { throw new Error("should only be one op per index"); } m.put(new Integer(op.index), op); } //System.out.println("index to op: "+m); return m; } protected String catOpText(Object a, Object b) { String x = ""; String y = ""; if ( a!=null ) x = a.toString(); if ( b!=null ) y = b.toString(); return x+y; } protected List getKindOfOps(List rewrites, Class kind) { return getKindOfOps(rewrites, kind, rewrites.size()); } /** Get all operations before an index of a particular kind */ protected List getKindOfOps(List rewrites, Class kind, int before) { List ops = new ArrayList(); for (int i=0; i=MIN_TOKEN_INDEX && i<=end && imaxRuleInvocationDepth ) { maxRuleInvocationDepth = ruleLevel; } } /** Track memoization; this is not part of standard debug interface * but is triggered by profiling. Code gen inserts an override * for this method in the recognizer, which triggers this method. 
*/ public void examineRuleMemoization(IntStream input, int ruleIndex, String ruleName) { //System.out.println("examine memo "+ruleName); int stopIndex = parser.getRuleMemoization(ruleIndex, input.index()); if ( stopIndex==BaseRecognizer.MEMO_RULE_UNKNOWN ) { //System.out.println("rule "+ruleIndex+" missed @ "+input.index()); numMemoizationCacheMisses++; numGuessingRuleInvocations++; // we'll have to enter } else { // regardless of rule success/failure, if in cache, we have a cache hit //System.out.println("rule "+ruleIndex+" hit @ "+input.index()); numMemoizationCacheHits++; } } public void memoize(IntStream input, int ruleIndex, int ruleStartIndex, String ruleName) { // count how many entries go into table //System.out.println("memoize "+ruleName); numMemoizationCacheEntries++; } public void exitRule(String grammarFileName, String ruleName) { ruleLevel--; } public void enterDecision(int decisionNumber) { decisionLevel++; int startingLookaheadIndex = parser.getTokenStream().index(); //System.out.println("enterDecision "+decisionNumber+" @ index "+startingLookaheadIndex); lookaheadStack.add(new Integer(startingLookaheadIndex)); } public void exitDecision(int decisionNumber) { //System.out.println("exitDecision "+decisionNumber); // track how many of acyclic, cyclic here as we don't know what kind // yet in enterDecision event. 
if ( parser.isCyclicDecision ) { numCyclicDecisions++; } else { numFixedDecisions++; } lookaheadStack.remove(lookaheadStack.size()-1); // pop lookahead depth counter decisionLevel--; if ( parser.isCyclicDecision ) { if ( numCyclicDecisions>=decisionMaxCyclicLookaheads.length ) { int[] bigger = new int[decisionMaxCyclicLookaheads.length*2]; System.arraycopy(decisionMaxCyclicLookaheads,0,bigger,0,decisionMaxCyclicLookaheads.length); decisionMaxCyclicLookaheads = bigger; } decisionMaxCyclicLookaheads[numCyclicDecisions-1] = maxLookaheadInCurrentDecision; } else { if ( numFixedDecisions>=decisionMaxFixedLookaheads.length ) { int[] bigger = new int[decisionMaxFixedLookaheads.length*2]; System.arraycopy(decisionMaxFixedLookaheads,0,bigger,0,decisionMaxFixedLookaheads.length); decisionMaxFixedLookaheads = bigger; } decisionMaxFixedLookaheads[numFixedDecisions-1] = maxLookaheadInCurrentDecision; } parser.isCyclicDecision = false; // can't nest so just reset to false maxLookaheadInCurrentDecision = 0; } public void consumeToken(Token token) { //System.out.println("consume token "+token); lastTokenConsumed = (CommonToken)token; } /** The parser is in a decision if the decision depth > 0. This * works for backtracking also, which can have nested decisions. */ public boolean inDecision() { return decisionLevel>0; } public void consumeHiddenToken(Token token) { //System.out.println("consume hidden token "+token); lastTokenConsumed = (CommonToken)token; } /** Track refs to lookahead if in a fixed/nonfixed decision. 
*/ public void LT(int i, Token t) { if ( inDecision() ) { // get starting index off stack int stackTop = lookaheadStack.size()-1; Integer startingIndex = (Integer)lookaheadStack.get(stackTop); // compute lookahead depth int thisRefIndex = parser.getTokenStream().index(); int numHidden = getNumberOfHiddenTokens(startingIndex.intValue(), thisRefIndex); int depth = i + thisRefIndex - startingIndex.intValue() - numHidden; /* System.out.println("LT("+i+") @ index "+thisRefIndex+" is depth "+depth+ " max is "+maxLookaheadInCurrentDecision); */ if ( depth>maxLookaheadInCurrentDecision ) { maxLookaheadInCurrentDecision = depth; } } } /** Track backtracking decisions. You'll see a fixed or cyclic decision * and then a backtrack. * * enter rule * ... * enter decision * LA and possibly consumes (for cyclic DFAs) * begin backtrack level * mark m * rewind m * end backtrack level, success * exit decision * ... * exit rule */ public void beginBacktrack(int level) { //System.out.println("enter backtrack "+level); numBacktrackDecisions++; } /** Successful or not, track how much lookahead synpreds use */ public void endBacktrack(int level, boolean successful) { //System.out.println("exit backtrack "+level+": "+successful); decisionMaxSynPredLookaheads.add( new Integer(maxLookaheadInCurrentDecision) ); } /* public void mark(int marker) { int i = parser.getTokenStream().index(); System.out.println("mark @ index "+i); synPredLookaheadStack.add(new Integer(i)); } public void rewind(int marker) { // pop starting index off stack int stackTop = synPredLookaheadStack.size()-1; Integer startingIndex = (Integer)synPredLookaheadStack.get(stackTop); synPredLookaheadStack.remove(synPredLookaheadStack.size()-1); // compute lookahead depth int stopIndex = parser.getTokenStream().index(); System.out.println("rewind @ index "+stopIndex); int depth = stopIndex - startingIndex.intValue(); System.out.println("depth of lookahead for synpred: "+depth); decisionMaxSynPredLookaheads.add( new Integer(depth) 
); } */ public void recognitionException(RecognitionException e) { numberReportedErrors++; } public void semanticPredicate(boolean result, String predicate) { if ( inDecision() ) { numSemanticPredicates++; } } public void terminate() { String stats = toNotifyString(); try { Stats.writeReport(RUNTIME_STATS_FILENAME,stats); } catch (IOException ioe) { System.err.println(ioe); ioe.printStackTrace(System.err); } System.out.println(toString(stats)); } public void setParser(DebugParser parser) { this.parser = parser; } // R E P O R T I N G public String toNotifyString() { TokenStream input = parser.getTokenStream(); for (int i=0; i
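The two-pass standard deviation that Stats.stddev computes for the profiler can be sketched standalone. Assumptions: the n-1 (unbiased) divisor the javadoc describes, and the Stats convention of returning 0.0 for empty or single-element arrays; the class name is illustrative.

```java
// Two-pass sample standard deviation, as in Stats.stddev:
// pass 1 computes the mean, pass 2 sums squared deviations,
// then divide by n-1 (unbiased estimator) and take the square root.
public class StddevDemo {
    public static double avg(int[] x) {
        if (x.length == 0) return 0.0; // Stats convention: 0.0, not NaN
        double sum = 0.0;
        for (int v : x) sum += v;
        return sum / x.length;
    }

    public static double stddev(int[] x) {
        int n = x.length;
        if (n <= 1) return 0; // no deviation for 0 or 1 samples
        double mean = avg(x);
        double s2 = 0.0;
        for (int v : x) s2 += (v - mean) * (v - mean);
        return Math.sqrt(s2 / (n - 1));
    }

    public static void main(String[] args) {
        System.out.println(stddev(new int[]{2, 4, 4, 4, 5, 5, 7, 9}));
    }
}
```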