bisonc++-4.13.01/CLASSES0000644000175000017500000000132412633316117013272 0ustar frankfrank
atdollar block atdollar element firstset element symbol firstset terminal symbol production block terminal nonterminal production symtab nonterminal rules nonterminal grammar rules lookaheadset grammar item lookaheadset rrdata lookaheadset rmreduction rmshift stateitem item rrdata rmshift rmreduction rrconflict stateitem enumsolution statetype next enumsolution statetype stateitem srconflict next state srconflict rrconflict writer state options scanner block options generator writer options parser scanner rules symtab

bisonc++-4.13.01/CLASSES.bobcat0000644000175000017500000000013212633316117014517 0ustar frankfrank
align arg datetime exception indent mstream pattern ranger stat string table tablesupport

bisonc++-4.13.01/INSTALL0000644000175000017500000001222712633316117013307 0ustar frankfrank
To install bisonc++ by hand instead of using the binary distribution perform the following steps:

0. Building bisonc++ depends, in addition to the normally available standard system software, on specific software and versions, documented in the file `required'; in particular the Bobcat library is needed. (If you compile the bobcat library yourself, note that bisonc++ does not use the SSL, Milter and Xpointer classes; they may --as far as bisonc++ is concerned-- be left out of the library.)

1. To install bisonc++ icmake should be used, for which a top-level script (build) and support scripts in the ./icmake/ directory are available. Icmake is available on many architectures.

2. Inspect the values of the variables in the file INSTALL.im, in particular the #defines below COMPONENTS TO INSTALL. Modify these #defines when necessary.

3. Inspect the path to icmake at the top of the `build' script. By default it is /usr/bin/icmake, but some installations use /usr/local/bin/icmake. Adapt it when necessary.

4. Run ./build program [strip] to compile bisonc++. The argument `strip' is optional and strips symbolic information from the final executable.

5. If you installed Yodl then you can create the documentation: ./build man builds the man-pages, and ./build manual builds the manual.

6. Run (probably as root) ./build install 'LOG:path' 'what' 'base' to install components of Bisonc++.

   Here, 'LOG:path' is an optional item specifying the absolute or relative path of a log file receiving a log of all installed files (see also the next item). Using LOG:~/.bisoncpp usually works well. Do not put any blanks between LOG: and the path specification, or protect the LOG: specification by quotes.

   'what' specifies what you want to install. Specify x to install all components, or specify a combination of: a (additional documentation), b (binary program), d (standard documentation), m (man-pages), s (skeleton files), u (user guide). E.g., use ./build install bs 'base' if you only want to be able to run bisonc++, and want it to be installed below 'base'. If non-existing elements are requested (e.g., ./build install x was requested, but the man-pages weren't constructed), those elements are silently ignored by the installation process.

   'base' is optional and specifies the base directory below which the requested files are installed. This base directory is prepended to the paths #defined in the INSTALL.im file. If 'base' is not specified, INSTALL.im's #defined paths are used as-is.

7. Uninstalling previously installed components of Bisonc++ is easy if a log path (LOG:...) was specified with the `./build install ...' command. In that case, run the command ./build uninstall logpath where 'logpath' specifies the location of the logfile that was written by ./build install. Modified files and non-empty directories are not removed, but the logfile itself is removed following the uninstallation.

8. Following the installation nothing in the directory tree containing this file (i.e., INSTALL) is required for the proper functioning of bisonc++, so consider removing it. If you only want to remove left-over files from the build process, just run ./build distclean.

-----------------------------------------------------------------------------

NOTE: the parser-class header file generated by bisonc++ before version 4.02.00 should have the prototypes of some of its private members modified. Simply replacing the `support functions for parse()' section shown at the end of the header file by the following lines should make your header file up-to-date again. Bisonc++ will not rewrite the parser class headers by itself, so as not to undo any modifications you may have implemented in parser-class header files:

        // support functions for parse():
        void executeAction(int ruleNr);
        void errorRecovery();
        int lookup(bool recovery);
        void nextToken();
        void print__();
        void exceptionHandler__(std::exception const &exc);

The function print__() is defined by bisonc++; the default implementation of exceptionHandler__() can be added to the parser's internal header file:

    inline void Parser::exceptionHandler__(std::exception const &exc)
    {
        throw;      // optionally re-implement to handle exceptions thrown
                    // by actions
    }

bisonc++-4.13.01/INSTALL.im0000644000175000017500000000440712634610065013714 0ustar frankfrank
// The name of the program and the support directories as installed by
// the 'build install' command. Normally there is no reason for changing
// this #define
#define PROGRAM "bisonc++"

// The following /bin/cp option is used to keep, rather than follow
// symbolic references. If your installation doesn't support these flags,
// then change them into available ones.
//  -P, --no-dereference
//      never follow symbolic links in SOURCE
//  --preserve[=ATTR_LIST]
//      preserve the specified attributes (default:
//      mode,ownership,timestamps), if possible additional
//      attributes: context, links, all
//  -d  same as --no-dereference --preserve=links
#define CPOPTS "-d"

// The CXX, CXXFLAGS, and LDFLAGS #defines are overruled by identically
// named environment variables:

// the compiler to use.
#define CXX "g++"

// the compiler options to use.
#define CXXFLAGS "--std=c++14 -Wall -O2 -fdiagnostics-color=never"

#define LDFLAGS ""      // flags passed to the linker

// COMPONENTS TO INSTALL
// =====================
// For an operational non-Debian installation, you probably must be
// `root'.
// If necessary, adapt the #defines below to your situation.
// The provided locations are used by Debian Linux.
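// (Illustration only: this comment block is not part of the original
//  INSTALL.im, and the path /tmp/bc++ below is just a made-up example.)
// The optional 'base' argument of "./build install" is prepended to the
// paths #defined in this file. E.g., with the default BINARY #define the
// hypothetical command
//      ./build install b /tmp/bc++
// copies the program to /tmp/bc++/usr/bin/bisonc++, whereas omitting the
// 'base' argument installs it as /usr/bin/bisonc++.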
// With 'build install' you can dynamically specify a location to prepend
// to the locations configured here, and select which components you want
// to install

// ONLY USE ABSOLUTE DIRECTORY NAMES:

// the directory where additional documentation is stored
#define ADD "/usr/share/doc/"${PROGRAM}"-doc"

// the full pathname of the final program
#define BINARY "/usr/bin/"${PROGRAM}

// the directory where the standard documentation is stored
#define DOC "/usr/share/doc/"${PROGRAM}

// the directory where the manual page is stored
#define MAN "/usr/share/man/man1"

// the directory where the user-guide is stored
#define UGUIDE "/usr/share/doc/"${PROGRAM}"-doc/manual"

// the directory where the skeleton files are installed
// Recompile options/data.cc if the skeleton location changes
#define SKEL "/usr/share/bisonc++"

bisonc++-4.13.01/LICENSE0000644000175000017500000010451312635000474013261 0ustar frankfrank
GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. 
The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. 
The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. 
When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. 
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. 
If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. 
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. 
Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. 
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. 
The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. 
It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: Copyright (C) This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box". You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see . The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read . bisonc++-4.13.01/README-0.980000644000175000017500000000237712633316117013537 0ustar frankfrankBisonc++ 0.98 is a complete rewrite of the well-known bison(++) parser generator. See the manual page for details. Several demo programs are found below the bisonc++/documentation/examples directory. In that directory there's a README file describing the purposes of the various examples. The (original) bison documentation is provided in html format in the directory bisonc++/documentation/html. The Debian package (bisonc++*.deb) installs that documentation below /usr/share/doc/bisonc++. Note that bisonc++'s specifications differs from bison's specifications, in particular with respect to the declaration section. The production rule specification section is, however, practically identical to the one defined for bison. There are some differences, though. Consult bisonc++'s manpage (installed by the Debian package, otherwise available in the directory bisonc++/documentation/man) for the differences. Since it is a complete rewrite it is quite likely that bugs will be encountered. Bison itself offers an extensive input grammar, and I may easily have overlooked various subtleties. Do not hesitate to contact me when you encounter a bug. A (small) grammar illustrating your point is then always extremely useful. Frank. 
May 2005-April 2006 (f.b.brokken@rug.nl)

bisonc++-4.13.01/README.class-setup0000644000175000017500000000504612633316117015401 0ustar frankfrank
This hierarchy defines the header and the implementation dependencies among the classes. The topmost class is not dependent on any class. Lines should be read down-to-up to find the lowest class in the hierarchy on which a given class depends (e.g., LookaheadSet depends on Grammar, and possibly uses the classes used by Grammar; RmReduction does not depend on any other class).

AtDollar Element | | Block FirstSet | | +-------+ Symbol | | | | | Terminal | | | | +---------------+ | | | Production | | | NonTerminal | | | +---------------+ | | | | Symtab Rules | | | | +--------+------+ | | | | | Grammar | | | | | LookaheadSet | | | | | +-----------+ | | | | | | Item RRData RmShift RmReduction | | | | | | | | +-----------+-----------+-----------+ | | | | | EnumSolution StateType StateItem | | | | | | | | | +--+ | | | | | | | | +---------------+---------------+ | | | | | | | Next | | | | | | Options | SrConflict RRConflict | | | | | | | | +------------------+ | | | | | +----------- | --------+ State | | | | | +---+ | | Writer | | | | Scanner | +----------------+ | | | +----------------+ | | | Parser Generator | | +--------------------------+ | bisonc++

bisonc++-4.13.01/README.flex0000644000175000017500000000110512633316117014064 0ustar frankfrank
About the lexical scanner
=========================

Bisonc++ uses a lexical scanner generated by flex. The file bisonc++/scanner/yylex.cc depends on a file FlexLexer.h having different contents over different `flex' versions. Bisonc++ therefore also depends on flex being installed, callable as the program `flex'.

Starting with Bisonc++ version 2.4.2 the build process creates scanner/yylex.cc from the file scanner/lexer, and scanner/yylex.cc is no longer included in Bisonc++'s distribution.

Frank.

bisonc++-4.13.01/README.insertions0000644000175000017500000000104412633316117015325 0ustar frankfrank
The classes Item, Next, NonTerminal, StateItem and Terminal use a static inserter() function to define the way the object is inserted into a stream. These functions expect addresses of member functions defining a particular insertion as their arguments. In the class header files the members that define an insertion type are listed immediately following the inserter() prototype. Currently only the Terminal class supports a manipulator functionality for these member functions, allowing an insertion mode-switch within a single insertion statement.

bisonc++-4.13.01/README.lookaheads0000644000175000017500000000053512633316117015246 0ustar frankfrank
The algorithm used to compute the look-ahead sets of all items of all states of a grammar is described in chapter 7 (The Bisonc++ Parser Algorithm) of Bisonc++'s user manual, as well as in the state/determinelasets.cc source file. Please refer to these documentation sources for further information about the LA set computation algorithm.

bisonc++-4.13.01/README.output0000644000175000017500000000063412633316117014474 0ustar frankfrank
Output written to the xxx.output file by the --verbose or --construction flags is mostly handled from the State::allStates() function. This function calls State's operator<<, which is initialized with a pointer pointing to either insertStd, insertExt or skipInsertion. Inspect state/insertstd.cc for the standard insertion method and state/insertext.cc for the extensive insertion method. 
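To make the above a bit more concrete, the following stand-alone sketch shows the dispatch pattern that the previous paragraph describes. It is an illustration only: the class name State and the member names insertStd, insertExt and skipInsertion are taken from the text above, but the data members, the signatures and the setMode() selector are invented for this example and do not match bisonc++'s actual sources.

    #include <iostream>

    class State
    {
        // pointer to the insertion member that operator<< forwards to;
        // selected once, e.g. when the command-line flags are processed
        static std::ostream &(State::*s_insert)(std::ostream &out) const;

        int d_nr;                               // some data to insert

        public:
            explicit State(int nr)
            :
                d_nr(nr)
            {}

            static void setMode(bool verbose, bool construction)
            {
                s_insert = construction ? &State::insertExt :
                           verbose      ? &State::insertStd :
                                          &State::skipInsertion;
            }

            friend std::ostream &operator<<(std::ostream &out,
                                            State const &state)
            {
                return (state.*s_insert)(out);  // forward to the selected
                                                // member
            }

        private:
            std::ostream &insertStd(std::ostream &out) const
            {
                return out << "state " << d_nr << '\n';
            }
            std::ostream &insertExt(std::ostream &out) const
            {
                return out << "state " << d_nr << " (extended info ...)\n";
            }
            std::ostream &skipInsertion(std::ostream &out) const
            {
                return out;                     // neither flag was given
            }
    };

    std::ostream &(State::*State::s_insert)(std::ostream &out) const =
                                                    &State::skipInsertion;

    int main()
    {
        State::setMode(true, false);            // e.g., --verbose was given
        std::cout << State{ 5 };                // now uses insertStd
    }

Compiled by itself (e.g., g++ --std=c++14), this prints `state 5'; changing the setMode() arguments changes what the very same insertion statement writes, which is the effect the --verbose and --construction flags have on the xxx.output file.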
bisonc++-4.13.01/README.parser0000644000175000017500000000413212633316117014425 0ustar frankfrankBisonc++ 2.0.0 uses a grammar specification file (parser/grammar) defining the input language that it will recognize. The grammar specification file was initially written to be used with Bisonc++ 1.6.1, but once Bisonc++ 2.0.0 was available it was split into various subfiles, each having a well defined function. As a result, the individual grammar specification files are fairly small, which facilitates understanding of the grammar. The main grammar specification file is the file parser/grammar, the subdirectory parser/spec contains the grammar specification subfiles: parser/spec/messages parser/spec/precedence parser/spec/optrules parser/spec/directives parser/spec/symbols parser/spec/rules parser/spec/productionlist parser/spec/auxiliary parser/grammar: A short file, defining tokens, the semantic value union, the grammar's start rule and one support rule used by the start rule. parser/spec/messages: Defines all rules that expect a token and assign an appropriate text to the syntactic error message variable `d_msg'. All these rules end in _t. Another set of rules (ending in _m) merely set the `d_msg' data member. parser/spec/precedence: Defines support rules when recognizing the precedence (LEFT, RIGHT, NONASSOC) and TOKEN terminals directives. These rules end in _p. parser/spec/optrules Defines rules for all tokens that may optionally be given. These rules all start with the phrase `opt'. parser/spec/directives Defines the syntax for each of Bisonc++'s directives. parser/spec/symbols All precedence directives, the %token directive and the %type directive all expect one or more symbols. The `symbols' rule is defined in this file. parser/spec/rules Defines the grammar recognizing Bisonc++'s rules parser/spec/productionlist Defines the grammar recognizing the production rules recognized by Bisonc++. parser/spec/auxiliary Defines rules that are used in at least two specification files. These rules all end in _a. bisonc++-4.13.01/README.polymorphic0000644000175000017500000000031112633316117015471 0ustar frankfrankA short demo program is found in documentation/manual/grammar/poly/; A demo illustrating the way polymorphic semantic type assignment works is provided in documentation/manual/grammar/essence/demo.cc. bisonc++-4.13.01/README.states0000644000175000017500000002215612633316117014442 0ustar frankfrankThis file contains a short description about the way states and look-aheads (LA's) are constructed. The example does not cover new information. In fact, it follows the construction of the states of the grammar given in example 4.42 in Aho et al. A more extensive description is found in documentation/manual/algorithm.yo The grammar has the following production rules (numbered 1-3): 1 S' -> S 2 S -> CC 3 C -> cC 4 C -> d Production rule 1 is added to the grammar, representing the so-called augmented grammar. When this rule is recognized at end-of-input (represented by $) then the input is grammatically correct, and the parser will flag `success'. $ is called a `look ahead' (LA) token. LA's may be in sets, which are used to determine whether a rule (when potentially recognized) will be reduced or not. So, when recognizing S, it is only reduced to S' (by rule S' -> S) if $ is indeed the next input token (i.e., if the end-of-input has been reached. With the starting rule an `item' is associated. An items consists of a rule, having a `dot' and a LA-set. The dot (.) 
represents the point to where input has been recognized. So, initially we have, starting from rule 1 and using [] to contain the LA token(s): S' -> . S [$] This is the starting point, state 0, and this point of departure defines the `kernel' of a state. To the kernel additional (production) rules and new LA-sets may be added as follows: New rules: If a . is followed by a non-terminal all production rules of that non-terminal are added to the item as new items having dots in postion 0. This process is followed for the newly added production rules as well until no more rules are added. LA-sets: Here is where it gets a bit complicated. To find LA-sets the FIRST function must be defined first. The FIRST function defines the set of terminal tokens that can be encountered at the start of a (series of) grammatical tokens. This FIRST-set is determined as follows: - if X is a terminal token x, FIRST(X) is [x]. - if X is empty FIRST(X) is [e] (the empty set) - if X is a production rule X1 X2 ... Xn then: FIRST(X) is FIRST(X1), except [e]. If [e] was removed, FIRST(X2) is added to the set, again removing [e], if it was now added to the set. This process is repeated until an Xi was without [e], or until FIRST(Xn) was added to the set. In this final case, if FIRST(Xn) contains [e] it is kept For our little grammar, we have the following FIRST sets: S' -> S FIRST(S') = FIRST(S) S -> CC FIRST(S) = FIRST(C) C -> cC FIRST(cC) = [c] C -> d FIRST(d) = [d] so, FIRST(S') = FIRST(S) = FIRST(C) = FIRST(S) = [cd] Back to the LA-sets: In a state having an item: A -> x . B y, [a] with A, B non-terminals, x, y possibly empty series of terminal symbols and [a] the LA-set belonging to this item (so, x . B y is a production rule of A with a dot), all B's production rules are added to the state having FIRST(ya) as its LA-set. Having constructed a state, the dots are moved one symbol to the right. Every unique grammar symbol thus shifted represents a `shift' in the parsing process to another state. A shift may result in a shift to an earlier state. This earlier state may thus see its LA-set expanded, which expansion must then propagate to all states derived from that state. When no shift is possible, a reduction takes place. If a state has only a single reduction, then this becomes the default reduction (on any symbol not resulting in a shift). If a state has multiple reductions, then this is ok if their LA-sets are different. Otherwise the grammar contains a reduce-reduce conflict. Analogously, if a reduction on a certain LA-set is indicated, but there's also a shift required for (one or more) terminal tokens in the LA-set, then the grammar contains a shift-reduce conflict. At each rule in each state actions/gotos are specified: `goto' a state if the symbol following a dot is a nonterminal, `shift' to a state if the symbol following a dot is a terminal, or reduce according to a production rule if the dot is at the end of a production rule. This indicates that a rule has been recognized, and if an action block has been defined for such a production rule, it is executed at that point (i.e., at the point where the reduction takes place). Now it's time to start constructing states, called S0, S1, S2, etc. Production rules are numbered by their original number. S0: 1 S' -> . S [$] On S goto S1 for S' -> . S [$] compute, matching A -> x . B y, [a], FIRST(y a) as FIRST( $), which is: [$], LA for the next S rule: add S-rule: 2 S -> . C C [$] On C goto S2 for S -> . C C [$] compute, matching A -> x . 
B y, [a], FIRST(y a) as FIRST(C $), which is: [cd], LA for the next C rules: add C-rules: 3 C -> . c C [cd] On c shift S3 4 C -> . d [cd] On d shift S4 having (of course) the same LA set. `Of course', since the LA set does not depend on this production rule, but on the rule causing this addition. So, once a nonterminal is added, the LA-set of its production rules is determined once. Continuing this way, each of the previous state's items is processed in turn: S1: 1 S' -> S . [$] Reduce to S' by rule 1. This is the ACCEPT state S2: 2 S -> C . C [$] On C goto S5 for S -> C . C [$] compute, matching A -> x . B y, [a] FIRST(y a) as FIRST( $), which is: [$], LA for the next C rules: 3 C -> . c C [$] On c shift S3 (Adding [$] to S3's LA-set) 4 C -> . d [$] On d shift S4 (Adding [$] to S4's LA-set) S3: 3 C -> c . C [cd]+[$] from S2 On C goto S6 (Adding [cd] to S5's LA-set) for C -> c . C [cd]+[$] compute, matching A -> x . B y, [a] FIRST(y a ) as FIRST( [cd]), which is: [cd]+[$] 3 C -> . c C [cd]+[$] On c shift S3 4 C -> . d [cd]+[$] On d shift S4 S4: 4 C -> d . [cd]+[$] from S2 Reduce to C by rule 4 S5: 2 S -> C C . [$] Reduce to S by rule 2 S6: 3 C -> c C . [cd]+[$] Reduce to C by rule 3 Now, the action/goto table can be summarized: ------------------------------------ action goto ------------------------------------ State c d $ S C ------------------------------------ 0 s3 s4 s1 s2 1 acc 2 s3 s4 s5 3 s3 s4 s6 4 r4 r4 r4 5 r2 [d] 6 r3 r3 r3 ------------------------------------ [d]: use this by default for every next token If a grammatical rule has an empty production the same procedure is followed. Consider: 1 S' -> S 2 S -> S E 3 S -> 4 E -> n 5 E -> i (representing a grammar accepting possibly empty series of numbers and identifiers). The grammar is constructed as follows: FIRST(E) = [in], FIRST(S') = FIRST(S) = [in$] S0: S' -> . S [$] On S: goto S1 Obtaining [$] from the above rule: S -> . S E [$] On S: goto S1 S -> . [$] Default: reduce to S S1: S' -> S . [$] On $: ACCEPT S -> S . E [$] On E: goto S2 Add E-rules: match A -> b . B y [a] with: S -> S . E [$] E -> . n [$] On n shift S3 E -> . i [$] On i shift S4 S2: S -> S E . [$] REDUCE S3: E -> n . [$] REDUCE S4: E -> i . [$] REDUCE Action/Goto table: ------------------------------------ action goto ------------------------------------ State i n $ S E ------------------------------------ 0 r3[d] s1 1 s3 s4 acc s2 2 r2[d] 3 r4[d] 4 r5[d] ------------------------------------ bisonc++-4.13.01/README.states-and-conflicts0000644000175000017500000001730312633316117017162 0ustar frankfrankThe information in this file is closely related to what's happening in state/define.cc. Refer to define.cc for the implementation of the process described below. All functions mentioned below are defined by the class State. Defining states proceeds as follows: 0. The initial state is constructed. It contains the augmented grammar's production rule. This part is realized by the static member initialState(); Following this, states are constructed via State::construct, initially called for the initial state (state 0) State construction (i.e., State::construct) consists of two steps: 1. All items are defined in State::setItems as either reducible items (no transitions) or non-reducible items (when the dot is not beyond the last item's symbol). Whenever there are non-reducible items 2. 1. From the state's kernel item(s) all implied rules are added as additional state items. This results in a vector of (kernel/non-kernel) items, >>!! 
as well as per item the numbers of the items that are affected by this item. This information is used later on to propagate the LA's. This part is realized by the member setItems() This fills the StateItem::Vector vector. A StateItem contains 1. an item (containing a production rule, dot position, and LA set) 2. a size_t vector of indices of `dependent' items, indicating which items have LA sets that depend on the current item (StateItem::d_child). 3. The size_t field `d_nextIdx' holds the index in d_nextVector, allowing quick access of the d_nextVector element defining the state having the current item as its kernel. A next value 'npos' indicates that the item does not belong to a next-kernel. E.g., StateItem: ------------------------------------- item LA-set dependent next stateitems state ------------------------------------- S* -> . S, EOF, (1, 2) 0 ... ------------------------------------- Also, State::d_nextVector vector is filled. A Next element contains 0. The symbol on which the transition takes place 1. The number of the next state 2. The indices of the StateItem::Vector defining the next state's kernel E.g., Next: ------------------------------- On next next kernel Symbol state from items ------------------------------- S ? (0, 1) ... ------------------------------- Empty production rules don't require special handling as they won't appear in the Next table, since there's no transition on them. Next, from these facilities all states are constructed. LA propagation is performed after the state construction State construction takes place (in the while loop in State::define.cc following the initial state construction). 2. After the state construction the LA sets of the items are computed. State 0's single kernel item is S_$: . S, and represents the augmented grammar's start rule, just before observing the grammar's start symbol. The LA sets are computed by State::determineLAsets. The file state/determinelasets.cc contains a description of the implemented algorithm, as does chapter 7 (The Bisonc++ Parser Algorithm) of Bisonc++'s User Guide. The reader is referred to these documentation sources for further information about the LA set computation algorithm. 3. Once all states have been constructed, conflicts are located and solved. If a state contains conflict, they are resolved and information about these conflicts is stored in an SRConflict::Vector and/or RRConflict::Vector. Conflicts are identified and resolved by the member: State::checkConflicts(); 4. S/R conflicts are handled by the d_srConflict object. This object received at construction time a context consisting of the state's d_itemVector and d_nextVector as well as d_reducible containing all indices of reducible items. Each of these indices is the index of a reducible item which is, together with a context consisting of the state's d_itemVector and d_nextVector, passed to Next::checkShiftReduceConflict(), which solves the observed shift-reduce conflicts. Here is how this is done: Assume a state's itemVector holds the following StateItems: 0: [P11 3] expression -> expression '-' expression . { EOLN '+' '-' '*' '/' ')' } 0, 1, () -1 1: [P10 1] expression -> expression . '+' expression { EOLN '+' '-' '*' '/' ')' } 0, 0, () 0 2: [P11 1] expression -> expression . '-' expression { EOLN '+' '-' '*' '/' ')' } 0, 0, () 1 3: [P12 1] expression -> expression . '*' expression { EOLN '+' '-' '*' '/' ')' } 0, 0, () 2 4: [P13 1] expression -> expression . 
'/' expression { EOLN '+' '-' '*' '/' ')' } 0, 0, () 3 and the associated nextVector is: 0: On '+' to state 15 with (1 ) 1: On '-' to state 16 with (2 ) 2: On '*' to state 17 with (3 ) 3: On '/' to state 18 with (4 ) Conflicts are inspected for all reducible items. Here the reducible item is the item having index 0. Inspection involves (but see below for an extension of this process when the LHS of a reducible item differs from the LHS of a non-reducible item): 1. The nextVector's symbols are searched for in the LA set of the reduction item (so, subsequently '+', '-', '*' and '/' are searched for in the LA set of itemVector[0]). 2. In this case, all are found and depending on the token's priority and the rule's priority either a shift or a reduce is selected. Production rules received their priority setting either explicitly (using %prec) or from their first terminal token. See also rules/updateprecedences.cc What happens if neither occurs? In a rule like 'expr: term' there is no first terminal token and there is no %prec being used. In these cases the rule is removed, by default using a shift instead of a reduce (until 4.00.00 this was handled incorrectly by giving the reduction rule the highest precedence, using a reduce rather than a shift) The problem with rules without precedence was originally brought to my attention by Ramanand Mandayam. Different LHS elements of items: As pointed out by Ramanand Mandayam, S/R conflicts may be observed when reducible rules merely consist of non-terminals. Here is an example: %left '*' %token ID %% expr: term ; term: term '*' primary | ID ; primary: '-' expr | ID ; This grammar contains the following state State 2: 0: [P1 1] expr -> term . { } 1, () -1 1: [P2 1] term -> term . '*' primary { '*' } 0, () 0 0: On '*' to state 4 with (1 ) Reduce item(s): 0 Here, item 0 reduces to N 'expr' and item 1 requires a shift in a production rule of the N 'term'. In these cases the rule 'expr -> term .' has no precedence that can be derived from either %prec or an initial terminal. Such reductions automatically receive the highest possible precedence and 'reduce' is used, rather than 'shift'. Since there is no explicit basis for this choice the choice between shift and reduce is flagged as a conflict. bisonc++-4.13.01/TODO0000644000175000017500000000013612633316117012742 0ustar frankfrank- %nonassoc should probably produce an error when %nonassoc operators are used repeatedly. bisonc++-4.13.01/VERSION0000644000175000017500000000006612634777177013346 0ustar frankfrank#define VERSION "4.13.01" #define YEARS "2005-2015" bisonc++-4.13.01/VERSION.h0000644000175000017500000000010412633316117013543 0ustar frankfrank#include "VERSION" SUBST(_CurVers_)(VERSION) SUBST(_CurYrs_)(YEARS) bisonc++-4.13.01/atdollar/0000755000175000017500000000000012633316117014054 5ustar frankfrankbisonc++-4.13.01/atdollar/operatorinsert.cc0000644000175000017500000000101212633316117017435 0ustar frankfrank#include "atdollar.ih" std::ostream &operator<<(std::ostream &out, AtDollar const &atd) { out << "At line " << atd.d_lineNr << ", block pos. " << atd.d_pos << ", length: " << atd.d_length << ": " << (atd.d_type == AtDollar::AT ? '@' : '$'); if (atd.d_id.length()) out << '<' << atd.d_id << '>'; if (atd.d_nr == numeric_limits::max()) out << '$'; else out << atd.d_nr; if (atd.d_member) out << ". (member call)"; return out; } bisonc++-4.13.01/atdollar/atdollar1.cc0000644000175000017500000000057112633316117016251 0ustar frankfrank#include "atdollar.ih" // ${NR}, ${NR}. 
or @{NR} AtDollar::AtDollar(Type type, size_t blockPos, size_t lineNr, std::string const &text, int nr, bool member) : d_type(type), d_lineNr(lineNr), d_pos(blockPos), d_length(text.length() - member), d_text(text), d_nr(nr), d_member(member), d_stype(false) {} bisonc++-4.13.01/atdollar/atdollar.h0000644000175000017500000000453012633316117016031 0ustar frankfrank#ifndef INCLUDED_ATDOLLAR_ #define INCLUDED_ATDOLLAR_ #include #include #include class AtDollar { friend std::ostream &operator<<(std::ostream &out, AtDollar const &atd); public: enum Type { AT, DOLLAR }; enum Action { RETURN_VALUE, NUMBERED_ELEMENT, TYPED_RETURN_VALUE, TYPED_NUMBERED_ELEMENT }; private: Type d_type; size_t d_lineNr; size_t d_pos; size_t d_length; std::string d_text; std::string d_id; int d_nr; // $$ if numeric_limits::max() bool d_member; bool d_stype; public: AtDollar() = default; // only used by std::vector in Block // 1 $$, $$., $NR, $NR., @@ or @NR AtDollar(Type type, size_t blockPos, size_t lineNr, std::string const &text, int nr, bool member); // 3 $$ or $-?NR AtDollar(Type type, size_t blockPos, size_t lineNr, std::string const &text, std::string const &id, int nr); Type type() const; int nr() const; std::string const &text() const; std::string const &id() const; size_t pos() const; size_t length() const; size_t lineNr() const; Action action() const; bool callsMember() const; bool returnValue() const; // $$ is being referred to bool stype() const; // id == STYPE__ }; inline AtDollar::Type AtDollar::type() const { return d_type; } inline int AtDollar::nr() const { return d_nr; } inline bool AtDollar::callsMember() const { return d_member; } inline bool AtDollar::returnValue() const { return d_nr == std::numeric_limits::max(); } inline bool AtDollar::stype() const { return d_stype; } inline size_t AtDollar::pos() const { return d_pos; } inline size_t AtDollar::length() const { return d_length; } inline size_t AtDollar::lineNr() const { return d_lineNr; } inline std::string const &AtDollar::text() const { return d_text; } inline std::string const &AtDollar::id() const { return d_id; } #endif bisonc++-4.13.01/atdollar/atdollar2.cc0000644000175000017500000000061712633316117016253 0ustar frankfrank#include "atdollar.ih" // $$ or $-?NR AtDollar::AtDollar(Type type, size_t blockPos, size_t lineNr, string const &text, string const &id, int nr) : d_type(type), d_lineNr(lineNr), d_pos(blockPos), d_length(text.length()), d_text(text), d_id(id.empty() ? "STYPE__" : id), d_nr(nr), d_member(false), d_stype(d_id == "STYPE__") {} bisonc++-4.13.01/atdollar/frame0000644000175000017500000000004712633316117015072 0ustar frankfrank#include "atdollar.ih" AtDollar:: { } bisonc++-4.13.01/atdollar/action.cc0000644000175000017500000000040312633316117015635 0ustar frankfrank#include "atdollar.ih" AtDollar::Action AtDollar::action() const { if (d_nr == numeric_limits::max()) return d_id.length() ? TYPED_RETURN_VALUE : RETURN_VALUE; return d_id.length() ? 
TYPED_NUMBERED_ELEMENT : NUMBERED_ELEMENT; } bisonc++-4.13.01/atdollar/atdollar.ih0000644000175000017500000000010112633316117016170 0ustar frankfrank#include "atdollar.h" #include using namespace std; bisonc++-4.13.01/bisonc++.cc0000644000175000017500000001033112633316117014162 0ustar frankfrank/* bisonc++.cc */ #include "bisonc++.ih" using namespace std; using namespace FBB; namespace { Arg::LongOption longOptions[] = { {"analyze-only", 'A'}, // option only {"baseclass-header", 'b'}, {"baseclass-preinclude", 'H'}, {"baseclass-skeleton", 'B'}, // options only {"polymorphic-skeleton", 'M'}, {"polymorphic-inline-skeleton", 'm'}, {"class-header", 'c'}, {"class-name", Arg::Required}, {"class-skeleton", 'C'}, // option only Arg::LongOption("construction"), // option only // implies verbose, but also shows FIRST and FOLLOW sets as // well as the full set of states, including the non-kernel // items Arg::LongOption("debug"), Arg::LongOption("error-verbose"), {"filenames", 'f'}, Arg::LongOption("flex"), {"help", 'h'}, // option only {"implementation-header", 'i'}, {"implementation-skeleton", 'I'}, // option only Arg::LongOption("insert-stype"), // option only {"max-inclusion-depth", Arg::Required}, // option only {"namespace", 'n'}, // option only Arg::LongOption("no-baseclass-header"), Arg::LongOption("no-lines"), Arg::LongOption("no-parse-member"), // options only {"no-decoration", 'D'}, {"no-default-action-return", 'N'}, {"own-debug", Arg::None}, {"own-tokens", 'T'}, {"parsefun-skeleton", 'P'}, {"parsefun-source", 'p'}, {"print-tokens", 't'}, {"required-tokens", Arg::Required}, {"scanner", 's'}, Arg::LongOption("scanner-debug"), {"scanner-matched-text-function", Arg::Required}, {"scanner-token-function", Arg::Required}, {"scanner-class-name", Arg::Required}, Arg::LongOption("show-filenames"), // option only {"skeleton-directory", 'S'}, // option only {"target-directory", Arg::Required}, Arg::LongOption("thread-safe"), // options only {"usage", 'h'}, {"verbose", 'V'}, // shows rules, tokens, final states and kernel items, // and describes conflicts when found {"version", 'v'}, }; auto longEnd = longOptions + sizeof(longOptions) / sizeof(Arg::LongOption); } int main(int argc, char **argv) try { Arg &arg = Arg::initialize("AB:b:C:c:Df:H:hI:i:M:m:n:Np:P:s:S:tTVv", longOptions, longEnd, argc, argv); arg.versionHelp(usage, version, 1); Rules rules; Parser parser(rules); // Prepare parsing. If `include-only' was // specified, processing stops here. parser.parse(); // parses the input, fills the data in the Rules // read the grammar file, build required data // structures. 
parser.cleanup(); // do cleanup actions following parse() // (terminate if parsing produced errors) rules.updatePrecedences(); // update production rule precedences rules.showRules(); rules.showTerminals(); rules.determineFirst(); rules.showFirst(); // define the startproduction Production::setStart(rules.startProduction()); State::define(rules); // define all states rules.assignNonTerminalNumbers(); rules.showUnusedTerminals(); rules.showUnusedNonTerminals(); rules.showUnusedRules(); State::allStates(); Grammar grammar; grammar.deriveSentence(); if (emsg.count()) return 1; if (arg.option('A')) // Analyze only return 0; Generator generator(rules, parser.polymorphic()); if (generator.conflicts()) return 1; generator.baseClassHeader(); generator.classHeader(); generator.implementationHeader(); generator.parseFunction(); } catch(exception const &err) { cerr << err.what() << '\n'; return 1; } catch(int x) { return Arg::instance().option("hv") ? 0 : x; } bisonc++-4.13.01/bisonc++.ih0000644000175000017500000000075612633316117014207 0ustar frankfrank#ifndef _INCLUDED_BISONCPP_H_ #define _INCLUDED_BISONCPP_H_ #include #include #include #include #include "rules/rules.h" #include "parser/parser.h" #include "state/state.h" #include "srconflict/srconflict.h" #include "rrconflict/rrconflict.h" #include "grammar/grammar.h" #include "generator/generator.h" using namespace std; using namespace FBB; extern char version[]; extern char year[]; void usage(string const &program_name); #endif bisonc++-4.13.01/bisonc++.xref0000644000175000017500000027672512633316117014566 0ustar frankfrankoxref by Frank B. Brokken (f.b.brokken@rug.nl) oxref V1.00.03 2012-2015 CREATED Sun, 13 Dec 2015 15:34:33 +0000 CROSS REFERENCE FOR: -fxs tmp/libmodules.a ---------------------------------------------------------------------- actionCases(std::ostream&) const Full name: Generator::actionCases(std::ostream&) const Source: actioncases.cc Used By: data.cc: GLOBALS data.cc 28data.o addElement(Symbol*) Full name: Rules::addElement(Symbol*) Source: addelement.cc Used By: augmentgrammar.cc: Rules::augmentGrammar(Symbol*) handleproductionelement.cc: Parser::handleProductionElement(Meta__::SType&) handleproductionelements.cc: Parser::handleProductionElements(Meta__::SType&, Meta__::SType const&) nestedblock.cc: Parser::nestedBlock(Block&) addIncludeQuotes(std::__cxx11::basic_string, std::allocator >&) Full name: Options::addIncludeQuotes(std::__cxx11::basic_string, std::allocator >&) Source: addincludequotes.cc Used By: setquotedstrings.cc: Options::setQuotedStrings() addKernelItem(StateItem const&) Full name: State::addKernelItem(StateItem const&) Source: addkernelitem.cc Used By: addstate.cc: State::addState(std::vector > const&) initialstate.cc: State::initialState() addNext(Symbol const*, unsigned int) Full name: State::addNext(Symbol const*, unsigned int) Source: addnext.cc Used By: notreducible.cc: State::notReducible(unsigned int) addPolymorphic(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&) Full name: Parser::addPolymorphic(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&) Source: addpolymorphic.cc Used By: parse.cc: Parser::executeAction(int) addProduction(unsigned int) Full name: Rules::addProduction(unsigned int) Source: addproduction.cc Used By: augmentgrammar.cc: Rules::augmentGrammar(Symbol*) openrule.cc: Parser::openRule(std::__cxx11::basic_string, std::allocator > const&) parse.cc: Parser::executeAction(int) 
addProductions(Symbol const*, unsigned int) Full name: State::addProductions(Symbol const*, unsigned int) Source: addproductions.cc Used By: addnext.cc: State::addNext(Symbol const*, unsigned int) addState(std::vector > const&) Full name: State::addState(std::vector > const&) Source: addstate.cc Used By: nextstate.cc: State::nextState(Next&) addToKernel(std::vector >&, Symbol const*, unsigned int) Full name: Next::addToKernel(std::vector >&, Symbol const*, unsigned int) Source: addtokernel.cc Used By: notreducible.cc: State::notReducible(unsigned int) assign(std::__cxx11::basic_string, std::allocator >*, Options::PathType, char const*) Full name: Options::assign(std::__cxx11::basic_string, std::allocator >*, Options::PathType, char const*) Source: assign.cc Used By: parse.cc: Parser::executeAction(int) AtDollar(AtDollar::Type, unsigned int, unsigned int, std::__cxx11::basic_string, std::allocator > const&, int, bool) Full name: AtDollar::AtDollar(AtDollar::Type, unsigned int, unsigned int, std::__cxx11::basic_string, std::allocator > const&, int, bool) Source: atdollar1.cc Used By: atindex.cc: Block::atIndex(unsigned int, std::__cxx11::basic_string, std::allocator > const&) dollar.cc: Block::dollar(unsigned int, std::__cxx11::basic_string, std::allocator > const&, bool) dollarindex.cc: Block::dollarIndex(unsigned int, std::__cxx11::basic_string, std::allocator > const&, bool) AtDollar(AtDollar::Type, unsigned int, unsigned int, std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&, int) Full name: AtDollar::AtDollar(AtDollar::Type, unsigned int, unsigned int, std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&, int) Source: atdollar2.cc Used By: iddollar.cc: Block::IDdollar(unsigned int, std::__cxx11::basic_string, std::allocator > const&) idindex.cc: Block::IDindex(unsigned int, std::__cxx11::basic_string, std::allocator > const&) atElse(bool&) const Full name: Generator::atElse(bool&) const Source: atelse.cc Used By: data.cc: GLOBALS data.cc 28data.o atEnd(bool&) const Full name: Generator::atEnd(bool&) const Source: atend.cc Used By: data.cc: GLOBALS data.cc 28data.o atIndex(unsigned int, std::__cxx11::basic_string, std::allocator > const&) Full name: Block::atIndex(unsigned int, std::__cxx11::basic_string, std::allocator > const&) Source: atindex.cc Used By: lex.cc: Scanner::executeAction__(unsigned int) augmentGrammar(Symbol*) Full name: Rules::augmentGrammar(Symbol*) Source: augmentgrammar.cc Used By: cleanup.cc: Parser::cleanup() baseClass(std::ostream&) const Full name: Generator::baseClass(std::ostream&) const Source: baseclass.cc Used By: data.cc: GLOBALS data.cc 28data.o becomesDerivable(Production const*) Full name: Grammar::becomesDerivable(Production const*) Source: becomesderivable.cc Used By: derivable.cc: Grammar::derivable(Symbol const*) beyondDotIsNonTerminal() const Full name: Item::beyondDotIsNonTerminal() const Source: beyonddotisnonterminal.cc Used By: distributelasetof.cc: State::distributeLAsetOf(StateItem&) bolAt(std::ostream&, std::__cxx11::basic_string, std::allocator >&, std::istream&, bool&) const Full name: Generator::bolAt(std::ostream&, std::__cxx11::basic_string, std::allocator >&, std::istream&, bool&) const Source: bolat.cc Used By: insert2.cc: Generator::insert(std::ostream&, unsigned int, char const*) const buildKernel(std::vector >*, std::vector > const&) Full name: Next::buildKernel(std::vector >*, std::vector > const&) Source: buildkernel.cc Used 
By: nextstate.cc: State::nextState(Next&) checkConflicts() Full name: State::checkConflicts() Source: checkconflicts.cc Used By: define.cc: State::define(Rules const&) checkEmptyBlocktype() Full name: Parser::checkEmptyBlocktype() Source: checkemptyblocktype.cc Used By: parse.cc: Parser::executeAction(int) checkEndOfRawString() Full name: Scanner::checkEndOfRawString() Source: checkendofrawstring.cc Used By: lex.cc: Scanner::executeAction__(unsigned int) checkFirstType() Full name: Parser::checkFirstType() Source: checkfirsttype.cc Used By: handleproductionelement.cc: Parser::handleProductionElement(Meta__::SType&) installaction.cc: Parser::installAction(Block&) checkRemoved(std::ostream&) const Full name: Next::checkRemoved(std::ostream&) const Source: checkremoved.cc Used By: transition.cc: Next::transition(std::ostream&) const transitionkernel.cc: Next::transitionKernel(std::ostream&) const checkZeroNumber() Full name: Scanner::checkZeroNumber() Source: checkzeronumber.cc Used By: hexadecimal.cc: Scanner::hexadecimal() octal.cc: Scanner::octal() classH(std::ostream&) const Full name: Generator::classH(std::ostream&) const Source: classh.cc Used By: data.cc: GLOBALS data.cc 28data.o classIH(std::ostream&) const Full name: Generator::classIH(std::ostream&) const Source: classih.cc Used By: data.cc: GLOBALS data.cc 28data.o cleanDir(std::__cxx11::basic_string, std::allocator >&, bool) Full name: Options::cleanDir(std::__cxx11::basic_string, std::allocator >&, bool) Source: cleandir.cc Used By: setbasicstrings.cc: Options::setBasicStrings() clear() Full name: Block::clear() Source: clear.cc Used By: expectrules.cc: Parser::expectRules() open.cc: Block::open(unsigned int, std::__cxx11::basic_string, std::allocator > const&) close() Full name: Block::close() Source: close.cc Used By: lex.cc: Scanner::executeAction__(unsigned int) comparePrecedence(Symbol const*, Symbol const*) Full name: Terminal::comparePrecedence(Symbol const*, Symbol const*) Source: compareprecedence.cc Used By: comparereductions.cc: RRConflict::compareReductions(unsigned int) solvebyprecedence.cc: Next::solveByPrecedence(Symbol const*) const compareReductions(unsigned int) Full name: RRConflict::compareReductions(unsigned int) Source: comparereductions.cc Used By: visitreduction.cc: RRConflict::visitReduction(unsigned int) computeLAsets() Full name: State::computeLAsets() Source: computelasets.cc Used By: determinelasets.cc: State::determineLAsets() construct() Full name: State::construct() Source: construct.cc Used By: define.cc: State::define(Rules const&) containsKernelItem(Item const&, unsigned int, std::vector > const&) Full name: StateItem::containsKernelItem(Item const&, unsigned int, std::vector > const&) Source: containskernelitem.cc Used By: haskernel.cc: State::hasKernel(std::vector > const&) const cxx11] Full name: ScannerBase::s_out__[abi:cxx11] Source: lex.cc Used By: checkendofrawstring.cc: Scanner::checkEndOfRawString() eoln.cc: Scanner::eoln() handlerawstring.cc: Scanner::rawString() handlexstring.cc: Scanner::handleXstring(unsigned int) returnquoted.cc: Scanner::returnQuoted(void (Scanner::*)()) returntypespec.cc: Scanner::returnTypeSpec() parse.cc: Parser::executeAction(int) cxx11] Full name: Parser::s_hiddenName[abi:cxx11] Source: data.cc Used By: nexthiddenname.cc: Parser::nextHiddenName[abi:cxx11]() cxx11] Full name: Production::s_fileName[abi:cxx11] Source: data.cc Used By: showconflicts.cc: RRConflict::showConflicts(Rules const&) const showconflicts.cc: SRConflict::showConflicts(Rules const&) const 
production1.cc: Production::Production(Symbol const*, unsigned int) storeFilename.cc: Production::storeFilename(std::__cxx11::basic_string, std::allocator > const&) cxx11] Full name: Generator::s_insert[abi:cxx11] Source: data.cc Used By: insert.cc: Generator::insert(std::ostream&) const cxx11]() Full name: Scanner::canonicalQuote[abi:cxx11]() Source: canonicalquote.cc Used By: parse.cc: Parser::executeAction(int) setprecedence.cc: Parser::setPrecedence(int) useterminal.cc: Parser::useTerminal() cxx11]() Full name: Parser::nextHiddenName[abi:cxx11]() Source: nexthiddenname.cc Used By: nestedblock.cc: Parser::nestedBlock(Block&) cxx11]() const Full name: Options::baseclassHeaderName[abi:cxx11]() const Source: baseclassheadername.cc Used By: conflicts.cc: Generator::conflicts() const cxx11]() const Full name: Generator::atTokenFunction[abi:cxx11]() const Source: attokenfunction.cc Used By: data.cc: GLOBALS data.cc 28data.o cxx11]() const Full name: Generator::atNameSpacedClassname[abi:cxx11]() const Source: atnamespacedclassname.cc Used By: data.cc: GLOBALS data.cc 28data.o cxx11]() const Full name: Generator::atClassname[abi:cxx11]() const Source: atclassname.cc Used By: data.cc: GLOBALS data.cc 28data.o cxx11]() const Full name: Generator::atMatchedTextFunction[abi:cxx11]() const Source: atmatchedtextfunction.cc Used By: data.cc: GLOBALS data.cc 28data.o cxx11]() const Full name: Generator::atLtype[abi:cxx11]() const Source: atltype.cc Used By: data.cc: GLOBALS data.cc 28data.o cxx11](AtDollar const&) const Full name: Parser::returnPolymorphic[abi:cxx11](AtDollar const&) const Source: returnpolymorphic.cc Used By: handledollar.cc: Parser::handleDollar(Block&, AtDollar const&, int) cxx11](AtDollar const&) const Full name: Parser::returnUnion[abi:cxx11](AtDollar const&) const Source: returnunion.cc Used By: handledollar.cc: Parser::handleDollar(Block&, AtDollar const&, int) cxx11](Options::PathType, char const*) Full name: Options::accept[abi:cxx11](Options::PathType, char const*) Source: accept.cc Used By: assign.cc: Options::assign(std::__cxx11::basic_string, std::allocator >*, Options::PathType, char const*) cxx11](unsigned int) const Full name: Rules::sType[abi:cxx11](unsigned int) const Source: stype.cc Used By: checkfirsttype.cc: Parser::checkFirstType() returnpolymorphic.cc: Parser::returnPolymorphic[abi:cxx11](AtDollar const&) const returnunion.cc: Parser::returnUnion[abi:cxx11](AtDollar const&) const semtag.cc: Parser::semTag(char const*, AtDollar const&, bool (Parser::*)(std::__cxx11::basic_string, std::allocator > const&) const) const debug(std::ostream&) const Full name: Generator::debug(std::ostream&) const Source: debug.cc Used By: data.cc: GLOBALS data.cc 28data.o debugDecl(std::ostream&) const Full name: Generator::debugDecl(std::ostream&) const Source: debugdecl.cc Used By: data.cc: GLOBALS data.cc 28data.o debugFunctions(std::ostream&) const Full name: Generator::debugFunctions(std::ostream&) const Source: debugfunctions.cc Used By: data.cc: GLOBALS data.cc 28data.o debugIncludes(std::ostream&) const Full name: Generator::debugIncludes(std::ostream&) const Source: debugincludes.cc Used By: data.cc: GLOBALS data.cc 28data.o debugInit(std::ostream&) const Full name: Generator::debugInit(std::ostream&) const Source: debuginit.cc Used By: data.cc: GLOBALS data.cc 28data.o debugLookup(std::ostream&) const Full name: Generator::debugLookup(std::ostream&) const Source: debuglookup.cc Used By: data.cc: GLOBALS data.cc 28data.o defaultActionReturn(std::ostream&) const Full name: 
Generator::defaultActionReturn(std::ostream&) const Source: defaultactionreturn.cc Used By: data.cc: GLOBALS data.cc 28data.o defineNonTerminal(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&) Full name: Parser::defineNonTerminal(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&) Source: definenonterminal.cc Used By: nestedblock.cc: Parser::nestedBlock(Block&) defineTerminal(std::__cxx11::basic_string, std::allocator > const&, Symbol::Type) Full name: Parser::defineTerminal(std::__cxx11::basic_string, std::allocator > const&, Symbol::Type) Source: defineterminal.cc Used By: definetokenname.cc: Parser::defineTokenName(std::__cxx11::basic_string, std::allocator > const&, bool) parse.cc: Parser::executeAction(int) defineTokenName(std::__cxx11::basic_string, std::allocator > const&, bool) Full name: Parser::defineTokenName(std::__cxx11::basic_string, std::allocator > const&, bool) Source: definetokenname.cc Used By: parse.cc: Parser::executeAction(int) derivable(Symbol const*) Full name: Grammar::derivable(Symbol const*) Source: derivable.cc Used By: becomesderivable.cc: Grammar::becomesDerivable(Production const*) derivesentence.cc: Grammar::deriveSentence() determineLAsets() Full name: State::determineLAsets() Source: determinelasets.cc Used By: define.cc: State::define(Rules const&) dflush__(std::ostream&) Full name: ScannerBase::dflush__(std::ostream&) Source: lex.cc Used By: checkendofrawstring.cc: Scanner::checkEndOfRawString() eoln.cc: Scanner::eoln() handlerawstring.cc: Scanner::rawString() handlexstring.cc: Scanner::handleXstring(unsigned int) returnquoted.cc: Scanner::returnQuoted(void (Scanner::*)()) returntypespec.cc: Scanner::returnTypeSpec() parse.cc: Parser::executeAction(int) distributeLAsetOf(StateItem&) Full name: State::distributeLAsetOf(StateItem&) Source: distributelasetof.cc Used By: computelasets.cc: State::computeLAsets() dollar(unsigned int, std::__cxx11::basic_string, std::allocator > const&, bool) Full name: Block::dollar(unsigned int, std::__cxx11::basic_string, std::allocator > const&, bool) Source: dollar.cc Used By: lex.cc: Scanner::executeAction__(unsigned int) dollarIndex(unsigned int, std::__cxx11::basic_string, std::allocator > const&, bool) Full name: Block::dollarIndex(unsigned int, std::__cxx11::basic_string, std::allocator > const&, bool) Source: dollarindex.cc Used By: lex.cc: Scanner::executeAction__(unsigned int) enlargeLA(LookaheadSet const&) Full name: StateItem::enlargeLA(LookaheadSet const&) Source: enlargela.cc Used By: distributelasetof.cc: State::distributeLAsetOf(StateItem&) inspecttransitions.cc: State::inspectTransitions(std::set, std::allocator >&) eoln() Full name: Scanner::eoln() Source: eoln.cc Used By: lex.cc: Scanner::executeAction__(unsigned int) errExisting(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&) const Full name: Generator::errExisting(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&) const Source: errexisting.cc Used By: conflicts.cc: Generator::conflicts() const errIndexTooLarge(AtDollar const&, int) const Full name: Parser::errIndexTooLarge(AtDollar const&, int) const Source: errindextoolarge.cc Used By: handleatsign.cc: Parser::handleAtSign(Block&, AtDollar const&, int) handledollar.cc: 
Parser::handleDollar(Block&, AtDollar const&, int) errNoSemantic(char const*, AtDollar const&, std::__cxx11::basic_string, std::allocator > const&) const Full name: Parser::errNoSemantic(char const*, AtDollar const&, std::__cxx11::basic_string, std::allocator > const&) const Source: errnosemantic.cc Used By: semtag.cc: Parser::semTag(char const*, AtDollar const&, bool (Parser::*)(std::__cxx11::basic_string, std::allocator > const&) const) const error(char const*) Full name: Parser::error(char const*) Source: error.cc Used By: parse.cc: Parser::errorRecovery() errorVerbose(std::ostream&) const Full name: Generator::errorVerbose(std::ostream&) const Source: errorverbose.cc Used By: data.cc: GLOBALS data.cc 28data.o escape() Full name: Scanner::escape() Source: escape.cc Used By: lex.cc: Scanner::executeAction__(unsigned int) expectRules() Full name: Parser::expectRules() Source: expectrules.cc Used By: parse.cc: Parser::executeAction(int) filename(std::__cxx11::basic_string, std::allocator > const&) Full name: Generator::filename(std::__cxx11::basic_string, std::allocator > const&) Source: filename.cc Used By: baseclass.cc: Generator::baseClass(std::ostream&) const classh.cc: Generator::classH(std::ostream&) const classih.cc: Generator::classIH(std::ostream&) const filter(std::istream&, std::ostream&, bool) const Full name: Generator::filter(std::istream&, std::ostream&, bool) const Source: filter.cc Used By: baseclassheader.cc: Generator::baseClassHeader() const classheader.cc: Generator::classHeader() const implementationheader.cc: Generator::implementationHeader() const parsefunction.cc: Generator::parseFunction() const polymorphic.cc: Generator::polymorphic(std::ostream&) const polymorphicinline.cc: Generator::polymorphicInline(std::ostream&) const findKernel(std::vector > const&) const Full name: State::findKernel(std::vector > const&) const Source: findkernel.cc Used By: nextstate.cc: State::nextState(Next&) firstBeyondDot(FirstSet*) const Full name: Item::firstBeyondDot(FirstSet*) const Source: firstbeyonddot.cc Used By: distributelasetof.cc: State::distributeLAsetOf(StateItem&) FirstSet(Element const*) Full name: FirstSet::FirstSet(Element const*) Source: firstset1.cc Used By: terminal1.cc: Terminal::Terminal(std::__cxx11::basic_string, std::allocator > const&, Symbol::Type, unsigned int, Terminal::Association, std::__cxx11::basic_string, std::allocator > const&) terminal2.cc: Terminal::Terminal(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&, Symbol::Type) grep(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&) const Full name: Generator::grep(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&) const Source: grep.cc Used By: errexisting.cc: Generator::errExisting(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&) const handleAtSign(Block&, AtDollar const&, int) Full name: Parser::handleAtSign(Block&, AtDollar const&, int) Source: handleatsign.cc Used By: substituteblock.cc: Parser::substituteBlock(int, Block&) handleDollar(Block&, AtDollar const&, int) Full name: Parser::handleDollar(Block&, AtDollar const&, int) Source: handledollar.cc Used By: substituteblock.cc: Parser::substituteBlock(int, Block&) handleProductionElement(Meta__::SType&) Full name: Parser::handleProductionElement(Meta__::SType&) Source: 
handleproductionelement.cc Used By: parse.cc: Parser::executeAction(int) handleProductionElements(Meta__::SType&, Meta__::SType const&) Full name: Parser::handleProductionElements(Meta__::SType&, Meta__::SType const&) Source: handleproductionelements.cc Used By: parse.cc: Parser::executeAction(int) handleSRconflict(unsigned int, __gnu_cxx::__normal_iterator > > const&, unsigned int) Full name: SRConflict::handleSRconflict(unsigned int, __gnu_cxx::__normal_iterator > > const&, unsigned int) Source: handlesrconflict.cc Used By: processshiftreduceconflict.cc: SRConflict::processShiftReduceConflict(__gnu_cxx::__normal_iterator > > const&, unsigned int) handleXstring(unsigned int) Full name: Scanner::handleXstring(unsigned int) Source: handlexstring.cc Used By: lex.cc: Scanner::executeAction__(unsigned int) hasKernel(std::vector > const&) const Full name: State::hasKernel(std::vector > const&) const Source: haskernel.cc Used By: findkernel.cc: State::findKernel(std::vector > const&) const hexadecimal() Full name: Scanner::hexadecimal() Source: hexadecimal.cc Used By: lex.cc: Scanner::executeAction__(unsigned int) IDdollar(unsigned int, std::__cxx11::basic_string, std::allocator > const&) Full name: Block::IDdollar(unsigned int, std::__cxx11::basic_string, std::allocator > const&) Source: iddollar.cc Used By: lex.cc: Scanner::executeAction__(unsigned int) IDindex(unsigned int, std::__cxx11::basic_string, std::allocator > const&) Full name: Block::IDindex(unsigned int, std::__cxx11::basic_string, std::allocator > const&) Source: idindex.cc Used By: lex.cc: Scanner::executeAction__(unsigned int) ifInsertStype(bool&) const Full name: Generator::ifInsertStype(bool&) const Source: ifinsertstype.cc Used By: data.cc: GLOBALS data.cc 28data.o ifLtype(bool&) const Full name: Generator::ifLtype(bool&) const Source: ifltype.cc Used By: data.cc: GLOBALS data.cc 28data.o ifPrintTokens(bool&) const Full name: Generator::ifPrintTokens(bool&) const Source: ifprinttokens.cc Used By: data.cc: GLOBALS data.cc 28data.o ifThreadSafe(bool&) const Full name: Generator::ifThreadSafe(bool&) const Source: ifthreadsafe.cc Used By: data.cc: GLOBALS data.cc 28data.o indexToOffset(int, int) const Full name: Parser::indexToOffset(int, int) const Source: indextooffset.cc Used By: handleatsign.cc: Parser::handleAtSign(Block&, AtDollar const&, int) handledollar.cc: Parser::handleDollar(Block&, AtDollar const&, int) substituteblock.cc: Parser::substituteBlock(int, Block&) initialState() Full name: State::initialState() Source: initialstate.cc Used By: define.cc: State::define(Rules const&) insert(NonTerminal*) Full name: Rules::insert(NonTerminal*) Source: insert2.cc Used By: augmentgrammar.cc: Rules::augmentGrammar(Symbol*) definenonterminal.cc: Parser::defineNonTerminal(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&) requirenonterminal.cc: Parser::requireNonTerminal(std::__cxx11::basic_string, std::allocator > const&) usesymbol.cc: Parser::useSymbol() insert(std::ostream&) const Full name: SRConflict::insert(std::ostream&) const Source: insert.cc Used By: insertext.cc: State::insertExt(std::ostream&) const insertstd.cc: State::insertStd(std::ostream&) const insert(std::ostream&) const Full name: LookaheadSet::insert(std::ostream&) const Source: insert.cc Used By: operatorinsert.cc: operator<<(std::ostream&, LookaheadSet const&) insert(std::ostream&) const Full name: FirstSet::insert(std::ostream&) const Source: oinsert.cc Used By: showfirst.cc: GLOBALS showfirst.cc 
12showfirst.o insertext.cc: GLOBALS insertext.cc 25insertext.o insert(std::ostream&) const Full name: RRConflict::insert(std::ostream&) const Source: insert.cc Used By: insertext.cc: State::insertExt(std::ostream&) const insertstd.cc: State::insertStd(std::ostream&) const insert(std::ostream&) const Full name: Generator::insert(std::ostream&) const Source: insert.cc Used By: filter.cc: Generator::filter(std::istream&, std::ostream&, bool) const insert(std::ostream&) const Full name: NonTerminal::insert(std::ostream&) const Source: v.cc Used By: destructor.cc: NonTerminal::~NonTerminal() insert(std::ostream&, Production const*) const Full name: Item::insert(std::ostream&, Production const*) const Source: insert.cc Used By: plainitem.cc: Item::plainItem(std::ostream&) const pnrdotitem.cc: Item::pNrDotItem(std::ostream&) const insert(std::ostream&, unsigned int, char const*) const Full name: Generator::insert(std::ostream&, unsigned int, char const*) const Source: insert2.cc Used By: debugdecl.cc: Generator::debugDecl(std::ostream&) const debugfunctions.cc: Generator::debugFunctions(std::ostream&) const debugincludes.cc: Generator::debugIncludes(std::ostream&) const debuglookup.cc: Generator::debugLookup(std::ostream&) const lex.cc: Generator::lex(std::ostream&) const ltype.cc: Generator::ltype(std::ostream&) const ltypedata.cc: Generator::ltypeData(std::ostream&) const print.cc: Generator::print(std::ostream&) const threading.cc: Generator::threading(std::ostream&) const insert(std::vector > const&) const Full name: Writer::insert(std::vector > const&) const Source: insert.cc Used By: tokens.cc: Generator::tokens(std::ostream&) const insert(Terminal*, std::__cxx11::basic_string, std::allocator > const&) Full name: Rules::insert(Terminal*, std::__cxx11::basic_string, std::allocator > const&) Source: insert1.cc Used By: defineterminal.cc: Parser::defineTerminal(std::__cxx11::basic_string, std::allocator > const&, Symbol::Type) predefine.cc: Parser::predefine(Terminal const*) useterminal.cc: Parser::useTerminal() insertAction(Production const*, std::ostream&, bool, unsigned int) Full name: Production::insertAction(Production const*, std::ostream&, bool, unsigned int) Source: insertaction.cc Used By: actioncases.cc: Generator::actionCases(std::ostream&) const insertExt(std::ostream&) const Full name: State::insertExt(std::ostream&) const Source: insertext.cc Used By: allstates.cc: State::allStates() define.cc: State::define(Rules const&) insertStd(std::ostream&) const Full name: State::insertStd(std::ostream&) const Source: insertstd.cc Used By: define.cc: State::define(Rules const&) insertToken(Terminal const*, unsigned int&, std::ostream&) Full name: Writer::insertToken(Terminal const*, unsigned int&, std::ostream&) Source: inserttoken.cc Used By: insert.cc: Writer::insert(std::vector > const&) const insName(std::ostream&) const Full name: NonTerminal::insName(std::ostream&) const Source: insname.cc Used By: showfirst.cc: GLOBALS showfirst.cc 12showfirst.o insertext.cc: GLOBALS insertext.cc 25insertext.o inspect() Full name: RRConflict::inspect() Source: inspect.cc Used By: checkconflicts.cc: State::checkConflicts() inspect() Full name: SRConflict::inspect() Source: inspect.cc Used By: checkconflicts.cc: State::checkConflicts() inspectTransitions(std::set, std::allocator >&) Full name: State::inspectTransitions(std::set, std::allocator >&) Source: inspecttransitions.cc Used By: determinelasets.cc: State::determineLAsets() installAction(Block&) Full name: Parser::installAction(Block&) Source: 
installaction.cc Used By: handleproductionelement.cc: Parser::handleProductionElement(Meta__::SType&) instance() Full name: Options::instance() Source: instance.cc Used By: generator1.cc: Generator::Generator(Rules const&, std::unordered_map, std::allocator >, std::__cxx11::basic_string, std::allocator >, std::hash, std::allocator > >, std::equal_to, std::allocator > >, std::allocator, std::allocator > const, std::__cxx11::basic_string, std::allocator > > > > const&) parser1.cc: Parser::Parser(Rules&) intersection(LookaheadSet const&) const Full name: LookaheadSet::intersection(LookaheadSet const&) const Source: intersection.cc Used By: comparereductions.cc: RRConflict::compareReductions(unsigned int) isDerivable(Production const*) Full name: Grammar::isDerivable(Production const*) Source: isderivable.cc Used By: derivable.cc: Grammar::derivable(Symbol const*) isFirstStypeDefinition() const Full name: Options::isFirstStypeDefinition() const Source: isfirststypedef.cc Used By: setpolymorphicdecl.cc: Options::setPolymorphicDecl() setstype.cc: Options::setStype() setuniondecl.cc: Options::setUnionDecl(std::__cxx11::basic_string, std::allocator > const&) Item() Full name: Item::Item() Source: item0.cc Used By: stateitem1.cc: StateItem::StateItem() Item(Item const*, unsigned int) Full name: Item::Item(Item const*, unsigned int) Source: item2.cc Used By: buildkernel.cc: Next::buildKernel(std::vector >*, std::vector > const&) Item(Production const*) Full name: Item::Item(Production const*) Source: item1.cc Used By: addproductions.cc: State::addProductions(Symbol const*, unsigned int) initialstate.cc: State::initialState() itemContext(std::ostream&) const Full name: StateItem::itemContext(std::ostream&) const Source: itemcontext.cc Used By: insertext.cc: State::insertExt(std::ostream&) const key(std::ostream&) const Full name: Generator::key(std::ostream&) const Source: key.cc Used By: actioncases.cc: Generator::actionCases(std::ostream&) const baseclass.cc: Generator::baseClass(std::ostream&) const classh.cc: Generator::classH(std::ostream&) const classih.cc: Generator::classIH(std::ostream&) const debug.cc: Generator::debug(std::ostream&) const debugdecl.cc: Generator::debugDecl(std::ostream&) const debugfunctions.cc: Generator::debugFunctions(std::ostream&) const debugincludes.cc: Generator::debugIncludes(std::ostream&) const debuginit.cc: Generator::debugInit(std::ostream&) const debuglookup.cc: Generator::debugLookup(std::ostream&) const defaultactionreturn.cc: Generator::defaultActionReturn(std::ostream&) const errorverbose.cc: Generator::errorVerbose(std::ostream&) const lex.cc: Generator::lex(std::ostream&) const ltype.cc: Generator::ltype(std::ostream&) const ltypedata.cc: Generator::ltypeData(std::ostream&) const ltypepop.cc: Generator::ltypePop(std::ostream&) const ltypepush.cc: Generator::ltypePush(std::ostream&) const ltyperesize.cc: Generator::ltypeResize(std::ostream&) const ltypestack.cc: Generator::ltypeStack(std::ostream&) const namespaceclose.cc: Generator::namespaceClose(std::ostream&) const namespaceopen.cc: Generator::namespaceOpen(std::ostream&) const namespaceuse.cc: Generator::namespaceUse(std::ostream&) const polymorphic.cc: Generator::polymorphic(std::ostream&) const polymorphicinline.cc: Generator::polymorphicInline(std::ostream&) const polymorphicspecializations.cc: Generator::polymorphicSpecializations(std::ostream&) const preincludes.cc: Generator::preIncludes(std::ostream&) const print.cc: Generator::print(std::ostream&) const requiredtokens.cc: 
Generator::requiredTokens(std::ostream&) const scannerh.cc: Generator::scannerH(std::ostream&) const scannerobject.cc: Generator::scannerObject(std::ostream&) const staticdata.cc: Generator::staticData(std::ostream&) const stype.cc: Generator::stype(std::ostream&) const threading.cc: Generator::threading(std::ostream&) const tokens.cc: Generator::tokens(std::ostream&) const lex(std::ostream&) const Full name: Generator::lex(std::ostream&) const Source: lex.cc Used By: data.cc: GLOBALS data.cc 28data.o lex__() Full name: Scanner::lex__() Source: lex.cc Used By: parse.cc: Parser::nextToken() LookaheadSet(LookaheadSet const&) Full name: LookaheadSet::LookaheadSet(LookaheadSet const&) Source: lookaheadset3.cc Used By: rrdata1.cc: RRData::RRData(LookaheadSet) comparereductions.cc: GLOBALS comparereductions.cc 20comparereductions.o visitreduction.cc: SRConflict::visitReduction(unsigned int) addkernelitem.cc: GLOBALS addkernelitem.cc 25addkernelitem.o addproductions.cc: GLOBALS addproductions.cc 25addproductions.o LookaheadSet(LookaheadSet::EndStatus) Full name: LookaheadSet::LookaheadSet(LookaheadSet::EndStatus) Source: lookaheadset1.cc Used By: stateitem1.cc: StateItem::StateItem() stateitem2.cc: StateItem::StateItem(Item const&) distributelasetof.cc: State::distributeLAsetOf(StateItem&) initialstate.cc: State::initialState() lookup(std::__cxx11::basic_string, std::allocator > const&) Full name: Symtab::lookup(std::__cxx11::basic_string, std::allocator > const&) Source: lookup.cc Used By: cleanup.cc: Parser::cleanup() definenonterminal.cc: Parser::defineNonTerminal(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&) defineterminal.cc: Parser::defineTerminal(std::__cxx11::basic_string, std::allocator > const&, Symbol::Type) requirenonterminal.cc: Parser::requireNonTerminal(std::__cxx11::basic_string, std::allocator > const&) setprecedence.cc: Parser::setPrecedence(int) usesymbol.cc: Parser::useSymbol() useterminal.cc: Parser::useTerminal() ltype(std::ostream&) const Full name: Generator::ltype(std::ostream&) const Source: ltype.cc Used By: data.cc: GLOBALS data.cc 28data.o ltypeData(std::ostream&) const Full name: Generator::ltypeData(std::ostream&) const Source: ltypedata.cc Used By: data.cc: GLOBALS data.cc 28data.o ltypePop(std::ostream&) const Full name: Generator::ltypePop(std::ostream&) const Source: ltypepop.cc Used By: data.cc: GLOBALS data.cc 28data.o ltypePush(std::ostream&) const Full name: Generator::ltypePush(std::ostream&) const Source: ltypepush.cc Used By: data.cc: GLOBALS data.cc 28data.o ltypeResize(std::ostream&) const Full name: Generator::ltypeResize(std::ostream&) const Source: ltyperesize.cc Used By: data.cc: GLOBALS data.cc 28data.o ltypeStack(std::ostream&) const Full name: Generator::ltypeStack(std::ostream&) const Source: ltypestack.cc Used By: data.cc: GLOBALS data.cc 28data.o multiCharQuote() Full name: Scanner::multiCharQuote() Source: multicharquote.cc Used By: lex.cc: Scanner::executeAction__(unsigned int) multiplyDefined(Symbol const*) Full name: Parser::multiplyDefined(Symbol const*) Source: multiplydefined.cc Used By: definenonterminal.cc: Parser::defineNonTerminal(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&) defineterminal.cc: Parser::defineTerminal(std::__cxx11::basic_string, std::allocator > const&, Symbol::Type) requirenonterminal.cc: Parser::requireNonTerminal(std::__cxx11::basic_string, std::allocator > const&) useterminal.cc: 
Parser::useTerminal() nameOrValue(std::ostream&) const Full name: Terminal::nameOrValue(std::ostream&) const Source: nameorvalue.cc Used By: reductionsymbol.cc: Writer::reductionSymbol(Element const*, unsigned int, FBB::Table&) transition.cc: Writer::transition(Next const&, FBB::Table&) namespaceClose(std::ostream&) const Full name: Generator::namespaceClose(std::ostream&) const Source: namespaceclose.cc Used By: data.cc: GLOBALS data.cc 28data.o namespaceOpen(std::ostream&) const Full name: Generator::namespaceOpen(std::ostream&) const Source: namespaceopen.cc Used By: data.cc: GLOBALS data.cc 28data.o namespaceUse(std::ostream&) const Full name: Generator::namespaceUse(std::ostream&) const Source: namespaceuse.cc Used By: data.cc: GLOBALS data.cc 28data.o negativeIndex(AtDollar const&) const Full name: Parser::negativeIndex(AtDollar const&) const Source: negativeindex.cc Used By: semtag.cc: Parser::semTag(char const*, AtDollar const&, bool (Parser::*)(std::__cxx11::basic_string, std::allocator > const&) const) const warnautoignored.cc: Parser::warnAutoIgnored(char const*, AtDollar const&) const nestedBlock(Block&) Full name: Parser::nestedBlock(Block&) Source: nestedblock.cc Used By: handleproductionelements.cc: Parser::handleProductionElements(Meta__::SType&, Meta__::SType const&) newRule(NonTerminal*, std::__cxx11::basic_string, std::allocator > const&, unsigned int) Full name: Rules::newRule(NonTerminal*, std::__cxx11::basic_string, std::allocator > const&, unsigned int) Source: newrule.cc Used By: augmentgrammar.cc: Rules::augmentGrammar(Symbol*) openrule.cc: Parser::openRule(std::__cxx11::basic_string, std::allocator > const&) newState() Full name: State::newState() Source: newstate.cc Used By: addstate.cc: State::addState(std::vector > const&) initialstate.cc: State::initialState() Next(Symbol const*, unsigned int) Full name: Next::Next(Symbol const*, unsigned int) Source: next2.cc Used By: addnext.cc: State::addNext(Symbol const*, unsigned int) nextFind(Symbol const*) const Full name: State::nextFind(Symbol const*) const Source: nextfindfrom.cc Used By: nexton.cc: State::nextOn(Symbol const*) const notreducible.cc: State::notReducible(unsigned int) nextState(Next&) Full name: State::nextState(Next&) Source: nextstate.cc Used By: construct.cc: State::construct() NonTerminal(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&, Symbol::Type) Full name: NonTerminal::NonTerminal(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&, Symbol::Type) Source: nonterminal1.cc Used By: augmentgrammar.cc: Rules::augmentGrammar(Symbol*) definenonterminal.cc: Parser::defineNonTerminal(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&) requirenonterminal.cc: Parser::requireNonTerminal(std::__cxx11::basic_string, std::allocator > const&) usesymbol.cc: Parser::useSymbol() nonTerminalSymbol(NonTerminal const*, std::ostream&) Full name: Writer::nonTerminalSymbol(NonTerminal const*, std::ostream&) Source: nonterminalsymbol.cc Used By: symbolicnames.cc: Writer::symbolicNames() const notReducible(unsigned int) Full name: State::notReducible(unsigned int) Source: notreducible.cc Used By: setitems.cc: State::setItems() octal() Full name: Scanner::octal() Source: octal.cc Used By: lex.cc: Scanner::executeAction__(unsigned int) open(unsigned int, std::__cxx11::basic_string, std::allocator > const&) Full name: Block::open(unsigned int, 
std::__cxx11::basic_string, std::allocator > const&) Source: open.cc Used By: lex.cc: Scanner::executeAction__(unsigned int) openRule(std::__cxx11::basic_string, std::allocator > const&) Full name: Parser::openRule(std::__cxx11::basic_string, std::allocator > const&) Source: openrule.cc Used By: parse.cc: Parser::executeAction(int) operator()(std::__cxx11::basic_string, std::allocator > const&) Full name: Block::operator()(std::__cxx11::basic_string, std::allocator > const&) Source: opfuncharp.cc Used By: lex.cc: Scanner::executeAction__(unsigned int) operator+=(FirstSet const&) Full name: FirstSet::operator+=(FirstSet const&) Source: operatorplusis1.cc Used By: setfirst.cc: NonTerminal::setFirst(NonTerminal*) operatorplusis.cc: LookaheadSet::operator+=(LookaheadSet const&) operatorplusis2.cc: LookaheadSet::operator+=(FirstSet const&) firstbeyonddot.cc: Item::firstBeyondDot(FirstSet*) const operator+=(LookaheadSet const&) Full name: LookaheadSet::operator+=(LookaheadSet const&) Source: operatorplusis.cc Used By: enlargela.cc: StateItem::enlargeLA(LookaheadSet const&) distributelasetof.cc: State::distributeLAsetOf(StateItem&) operator+=(std::set, std::allocator > const&) Full name: FirstSet::operator+=(std::set, std::allocator > const&) Source: operatorplusis2.cc Used By: operatorplusis1.cc: FirstSet::operator+=(FirstSet const&) operator-=(LookaheadSet const&) Full name: LookaheadSet::operator-=(LookaheadSet const&) Source: operatorsubis.cc Used By: removeconflicts.cc: RRConflict::removeConflicts(std::vector >&) operator-=(Symbol const*) Full name: LookaheadSet::operator-=(Symbol const*) Source: operatorsubis2.cc Used By: removereductions.cc: SRConflict::removeReductions(std::vector >&) operator<<(std::ostream&, AtDollar const&) Full name: operator<<(std::ostream&, AtDollar const&) Source: operatorinsert.cc Used By: operatorinsert.cc: operator<<(std::ostream&, Block const&) operator<<(std::ostream&, LookaheadSet const&) Full name: operator<<(std::ostream&, LookaheadSet const&) Source: operatorinsert.cc Used By: itemcontext.cc: StateItem::itemContext(std::ostream&) const insert.cc: RRConflict::insert(std::ostream&) const operator==(Item const&) const Full name: Item::operator==(Item const&) const Source: operatorequal.cc Used By: containskernelitem.cc: StateItem::containsKernelItem(Item const&, unsigned int, std::vector > const&) operator>=(LookaheadSet const&) const Full name: LookaheadSet::operator>=(LookaheadSet const&) const Source: operatorgreaterequal.cc Used By: enlargela.cc: StateItem::enlargeLA(LookaheadSet const&) Options() Full name: Options::Options() Source: options1.cc Used By: instance.cc: Options::instance() ParserBase() Full name: ParserBase::ParserBase() Source: parse.cc Used By: parser1.cc: Parser::Parser(Rules&) plainItem(std::ostream&) const Full name: StateItem::plainItem(std::ostream&) const Source: plainitem.cc Used By: data.cc: GLOBALS data.cc 19data.o plainItem(std::ostream&) const Full name: Item::plainItem(std::ostream&) const Source: plainitem.cc Used By: data.cc: GLOBALS data.cc 15data.o plainWarnings() Full name: Global::plainWarnings() Source: plainwarnings.cc Used By: unused.cc: NonTerminal::unused(NonTerminal const*) showunusednonterminals.cc: Rules::showUnusedNonTerminals() const generator1.cc: Generator::Generator(Rules const&, std::unordered_map, std::allocator >, std::__cxx11::basic_string, std::allocator >, std::hash, std::allocator > >, std::equal_to, std::allocator > >, std::allocator, std::allocator > const, std::__cxx11::basic_string, std::allocator 
> > > > const&) unused.cc: Terminal::unused(Terminal const*) unused.cc: Production::unused(Production const*) pNrDotItem(std::ostream&) const Full name: Item::pNrDotItem(std::ostream&) const Source: pnrdotitem.cc Used By: insertext.cc: State::insertExt(std::ostream&) const polymorphic(std::ostream&) const Full name: Generator::polymorphic(std::ostream&) const Source: polymorphic.cc Used By: data.cc: GLOBALS data.cc 28data.o polymorphicInline(std::ostream&) const Full name: Generator::polymorphicInline(std::ostream&) const Source: polymorphicinline.cc Used By: data.cc: GLOBALS data.cc 28data.o polymorphicSpecializations(std::ostream&) const Full name: Generator::polymorphicSpecializations(std::ostream&) const Source: polymorphicspecializations.cc Used By: data.cc: GLOBALS data.cc 28data.o popStream() Full name: ScannerBase::popStream() Source: lex.cc Used By: popstream.cc: Scanner::popStream() popStream() Full name: Scanner::popStream() Source: popstream.cc Used By: lex.cc: Scanner::lex__() predefine(Terminal const*) Full name: Parser::predefine(Terminal const*) Source: predefine.cc Used By: parser1.cc: Parser::Parser(Rules&) preIncludes(std::ostream&) const Full name: Generator::preIncludes(std::ostream&) const Source: preincludes.cc Used By: data.cc: GLOBALS data.cc 28data.o print(std::ostream&) const Full name: Generator::print(std::ostream&) const Source: print.cc Used By: data.cc: GLOBALS data.cc 28data.o processShiftReduceConflict(__gnu_cxx::__normal_iterator > > const&, unsigned int) Full name: SRConflict::processShiftReduceConflict(__gnu_cxx::__normal_iterator > > const&, unsigned int) Source: processshiftreduceconflict.cc Used By: visitreduction.cc: SRConflict::visitReduction(unsigned int) Production(Symbol const*, unsigned int) Full name: Production::Production(Symbol const*, unsigned int) Source: production1.cc Used By: addproduction.cc: Rules::addProduction(unsigned int) sethiddenaction.cc: Rules::setHiddenAction(Block const&) productionInfo(Production const*, std::ostream&) Full name: Writer::productionInfo(Production const*, std::ostream&) Source: productioninfo.cc Used By: productions.cc: Writer::productions() const productions() const Full name: Writer::productions() const Source: productions.cc Used By: staticdata.cc: Generator::staticData(std::ostream&) const pushStream(std::__cxx11::basic_string, std::allocator > const&) Full name: ScannerBase::pushStream(std::__cxx11::basic_string, std::allocator > const&) Source: lex.cc Used By: handlexstring.cc: Scanner::handleXstring(unsigned int) quotedName(std::ostream&) const Full name: Terminal::quotedName(std::ostream&) const Source: quotedname.cc Used By: setprecedence.cc: Rules::setPrecedence(Terminal const*) rawString() Full name: Scanner::rawString() Source: handlerawstring.cc Used By: lex.cc: Scanner::executeAction__(unsigned int) redo(unsigned int) Full name: ScannerBase::redo(unsigned int) Source: lex.cc Used By: handlexstring.cc: Scanner::handleXstring(unsigned int) reduction(FBB::Table&, StateItem const&) Full name: Writer::reduction(FBB::Table&, StateItem const&) Source: reduction.cc Used By: reductions.cc: Writer::reductions(FBB::Table&, State const&) reductions(FBB::Table&, State const&) Full name: Writer::reductions(FBB::Table&, State const&) Source: reductions.cc Used By: srtable.cc: Writer::srTable(State const*, std::__cxx11::basic_string, std::allocator > const&, FBB::Table&, std::ostream&) reductionSymbol(Element const*, unsigned int, FBB::Table&) Full name: Writer::reductionSymbol(Element const*, unsigned int, 
FBB::Table&) Source: reductionsymbol.cc Used By: reduction.cc: Writer::reduction(FBB::Table&, StateItem const&) removeConflicts(std::vector >&) Full name: RRConflict::removeConflicts(std::vector >&) Source: removeconflicts.cc Used By: checkconflicts.cc: State::checkConflicts() removeReductions(std::vector >&) Full name: SRConflict::removeReductions(std::vector >&) Source: removereductions.cc Used By: checkconflicts.cc: State::checkConflicts() removeShift(RmShift const&, std::vector >&, unsigned int*) Full name: Next::removeShift(RmShift const&, std::vector >&, unsigned int*) Source: removeshift.cc Used By: removeshifts.cc: SRConflict::removeShifts(std::vector >&) removeShifts(std::vector >&) Full name: SRConflict::removeShifts(std::vector >&) Source: removeshifts.cc Used By: checkconflicts.cc: State::checkConflicts() replace(std::__cxx11::basic_string, std::allocator >&, char, std::__cxx11::basic_string, std::allocator > const&) Full name: Generator::replace(std::__cxx11::basic_string, std::allocator >&, char, std::__cxx11::basic_string, std::allocator > const&) Source: replace.cc Used By: conflicts.cc: Generator::conflicts() const replaceBaseFlag(std::__cxx11::basic_string, std::allocator >&) const Full name: Generator::replaceBaseFlag(std::__cxx11::basic_string, std::allocator >&) const Source: replacebaseflag.cc Used By: filter.cc: Generator::filter(std::istream&, std::ostream&, bool) const insert2.cc: Generator::insert(std::ostream&, unsigned int, char const*) const requiredTokens(std::ostream&) const Full name: Generator::requiredTokens(std::ostream&) const Source: requiredtokens.cc Used By: data.cc: GLOBALS data.cc 28data.o requireNonTerminal(std::__cxx11::basic_string, std::allocator > const&) Full name: Parser::requireNonTerminal(std::__cxx11::basic_string, std::allocator > const&) Source: requirenonterminal.cc Used By: openrule.cc: Parser::openRule(std::__cxx11::basic_string, std::allocator > const&) reRead(unsigned int) Full name: ScannerBase::Input::reRead(unsigned int) Source: lex.cc Used By: returntypespec.cc: Scanner::returnTypeSpec() returnQuoted(void (Scanner::*)()) Full name: Scanner::returnQuoted(void (Scanner::*)()) Source: returnquoted.cc Used By: lex.cc: Scanner::executeAction__(unsigned int) returnSingle(AtDollar const&) const Full name: Parser::returnSingle(AtDollar const&) const Source: returnsingle.cc Used By: handledollar.cc: Parser::handleDollar(Block&, AtDollar const&, int) returnTypeSpec() Full name: Scanner::returnTypeSpec() Source: returntypespec.cc Used By: lex.cc: Scanner::executeAction__(unsigned int) RmReduction(unsigned int, unsigned int, Symbol const*, bool) Full name: RmReduction::RmReduction(unsigned int, unsigned int, Symbol const*, bool) Source: rmreduction1.cc Used By: handlesrconflict.cc: SRConflict::handleSRconflict(unsigned int, __gnu_cxx::__normal_iterator > > const&, unsigned int) RmShift(unsigned int, bool) Full name: RmShift::RmShift(unsigned int, bool) Source: rmshift1.cc Used By: handlesrconflict.cc: SRConflict::handleSRconflict(unsigned int, __gnu_cxx::__normal_iterator > > const&, unsigned int) RRConflict(std::vector > const&, std::vector > const&) Full name: RRConflict::RRConflict(std::vector > const&, std::vector > const&) Source: rrconflict1.cc Used By: state1.cc: State::State(unsigned int) RRData(LookaheadSet) Full name: RRData::RRData(LookaheadSet) Source: rrdata1.cc Used By: comparereductions.cc: RRConflict::compareReductions(unsigned int) s_acceptProductionNr Full name: Rules::s_acceptProductionNr Source: data.cc Used By: 
augmentgrammar.cc: Rules::augmentGrammar(Symbol*) s_acceptState Full name: State::s_acceptState Source: data.cc Used By: define.cc: State::define(Rules const&) srtable.cc: Writer::srTable(State const*, std::__cxx11::basic_string, std::allocator > const&, FBB::Table&, std::ostream&) s_at Full name: Generator::s_at Source: data.cc Used By: replaceatkey.cc: Generator::replaceAtKey(std::__cxx11::basic_string, std::allocator >&, unsigned int) const replacebaseflag.cc: Generator::replaceBaseFlag(std::__cxx11::basic_string, std::allocator >&) const s_atBol Full name: Generator::s_atBol Source: data.cc Used By: bolat.cc: Generator::bolAt(std::ostream&, std::__cxx11::basic_string, std::allocator >&, std::istream&, bool&) const s_atFlag Full name: Generator::s_atFlag Source: data.cc Used By: replacebaseflag.cc: Generator::replaceBaseFlag(std::__cxx11::basic_string, std::allocator >&) const s_counter Full name: NonTerminal::s_counter Source: data.cc Used By: setfirst.cc: NonTerminal::setFirst(NonTerminal*) determinefirst.cc: Rules::determineFirst() s_debug__ Full name: ScannerBase::s_debug__ Source: lex.cc Used By: checkendofrawstring.cc: Scanner::checkEndOfRawString() eoln.cc: Scanner::eoln() handlerawstring.cc: Scanner::rawString() handlexstring.cc: Scanner::handleXstring(unsigned int) returnquoted.cc: Scanner::returnQuoted(void (Scanner::*)()) returntypespec.cc: Scanner::returnTypeSpec() parse.cc: Parser::executeAction(int) s_defaultBaseClassSkeleton Full name: Options::s_defaultBaseClassSkeleton Source: data.cc Used By: setskeletons.cc: Options::setSkeletons() s_defaultClassName Full name: Options::s_defaultClassName Source: data.cc Used By: setbasicstrings.cc: Options::setBasicStrings() s_defaultClassSkeleton Full name: Options::s_defaultClassSkeleton Source: data.cc Used By: setskeletons.cc: Options::setSkeletons() s_defaultImplementationSkeleton Full name: Options::s_defaultImplementationSkeleton Source: data.cc Used By: setskeletons.cc: Options::setSkeletons() s_defaultParsefunSkeleton Full name: Options::s_defaultParsefunSkeleton Source: data.cc Used By: setskeletons.cc: Options::setSkeletons() s_defaultParsefunSource Full name: Options::s_defaultParsefunSource Source: data.cc Used By: setpathstrings.cc: Options::setPathStrings() s_defaultPolymorphicInlineSkeleton Full name: Options::s_defaultPolymorphicInlineSkeleton Source: data.cc Used By: setskeletons.cc: Options::setSkeletons() s_defaultPolymorphicSkeleton Full name: Options::s_defaultPolymorphicSkeleton Source: data.cc Used By: setskeletons.cc: Options::setSkeletons() s_defaultScannerClassName Full name: Options::s_defaultScannerClassName Source: data.cc Used By: setbasicstrings.cc: Options::setBasicStrings() s_defaultScannerMatchedTextFunction Full name: Options::s_defaultScannerMatchedTextFunction Source: data.cc Used By: setbasicstrings.cc: Options::setBasicStrings() s_defaultScannerTokenFunction Full name: Options::s_defaultScannerTokenFunction Source: data.cc Used By: setbasicstrings.cc: Options::setBasicStrings() s_defaultSkeletonDirectory Full name: Options::s_defaultSkeletonDirectory Source: data.cc Used By: setbasicstrings.cc: Options::setBasicStrings() s_dfaBase__ Full name: ScannerBase::s_dfaBase__ Source: lex.cc Used By: checkendofrawstring.cc: Scanner::checkEndOfRawString() eoln.cc: Scanner::eoln() handlerawstring.cc: Scanner::rawString() handlexstring.cc: Scanner::handleXstring(unsigned int) returnquoted.cc: Scanner::returnQuoted(void (Scanner::*)()) returntypespec.cc: Scanner::returnTypeSpec() parse.cc: 
Parser::executeAction(int) s_eofTerminal Full name: Rules::s_eofTerminal Source: data.cc Used By: operatorsubis2.cc: LookaheadSet::operator-=(Symbol const*) reduction.cc: Writer::reduction(FBB::Table&, StateItem const&) srtable.cc: Writer::srTable(State const*, std::__cxx11::basic_string, std::allocator > const&, FBB::Table&, std::ostream&) selectsymbolic.cc: Generator::selectSymbolic(Terminal const*, std::vector >&) parser1.cc: Parser::Parser(Rules&) s_errorTerminal Full name: Rules::s_errorTerminal Source: data.cc Used By: notreducible.cc: State::notReducible(unsigned int) parser1.cc: Parser::Parser(Rules&) s_insert Full name: State::s_insert Source: data.cc Used By: allstates.cc: State::allStates() define.cc: State::define(Rules const&) s_insertPtr Full name: Item::s_insertPtr Source: data.cc Used By: itemcontext.cc: StateItem::itemContext(std::ostream&) const plainitem.cc: StateItem::plainItem(std::ostream&) const insertext.cc: State::insertExt(std::ostream&) const s_insertPtr Full name: NonTerminal::s_insertPtr Source: data.cc Used By: v.cc: NonTerminal::insert(std::ostream&) const showfirst.cc: Rules::showFirst() const insert.cc: Item::insert(std::ostream&, Production const*) const transitionkernel.cc: Next::transitionKernel(std::ostream&) const insertext.cc: State::insertExt(std::ostream&) const reductionsymbol.cc: Writer::reductionSymbol(Element const*, unsigned int, FBB::Table&) transition.cc: Writer::transition(Next const&, FBB::Table&) multiplydefined.cc: Parser::multiplyDefined(Symbol const*) s_insertPtr Full name: Terminal::s_insertPtr Source: data.cc Used By: setprecedence.cc: GLOBALS setprecedence.cc 12setprecedence.o showfirst.cc: Rules::showFirst() const showterminals.cc: GLOBALS showterminals.cc 12showterminals.o showunusedrules.cc: Rules::showUnusedRules() const showunusedterminals.cc: Rules::showUnusedTerminals() const derivesentence.cc: GLOBALS derivesentence.cc 13derivesentence.o insert.cc: GLOBALS insert.cc 14insert.o insert.cc: GLOBALS insert.cc 15insert.o transition.cc: GLOBALS transition.cc 23transition.o transitionkernel.cc: GLOBALS transitionkernel.cc 23transitionkernel.o insert.cc: GLOBALS insert.cc 24insert.o showconflicts.cc: GLOBALS showconflicts.cc 24showconflicts.o insertext.cc: State::insertExt(std::ostream&) const inserttoken.cc: GLOBALS inserttoken.cc 26inserttoken.o reductionsymbol.cc: Writer::reductionSymbol(Element const*, unsigned int, FBB::Table&) srtable.cc: Writer::srTable(State const*, std::__cxx11::basic_string, std::allocator > const&, FBB::Table&, std::ostream&) terminalsymbol.cc: GLOBALS terminalsymbol.cc 26terminalsymbol.o transition.cc: Writer::transition(Next const&, FBB::Table&) filter.cc: Generator::filter(std::istream&, std::ostream&, bool) const multiplydefined.cc: GLOBALS multiplydefined.cc 2multiplydefined.o destructor.cc: GLOBALS destructor.cc 8destructor.o setvalue.cc: GLOBALS setvalue.cc 8setvalue.o unused.cc: GLOBALS unused.cc 8unused.o standard.cc: GLOBALS standard.cc 9standard.o s_insertPtr Full name: Next::s_insertPtr Source: data.cc Used By: insertext.cc: State::insertExt(std::ostream&) const insertstd.cc: State::insertStd(std::ostream&) const s_insertPtr Full name: StateItem::s_insertPtr Source: data.cc Used By: insertext.cc: State::insertExt(std::ostream&) const insertstd.cc: State::insertStd(std::ostream&) const s_lastLineNr Full name: Rules::s_lastLineNr Source: data.cc Used By: addproduction.cc: Rules::addProduction(unsigned int) newrule.cc: Rules::newRule(NonTerminal*, std::__cxx11::basic_string, std::allocator > 
const&, unsigned int) sethiddenaction.cc: Rules::setHiddenAction(Block const&) s_locationValue Full name: Parser::s_locationValue Source: data.cc Used By: handleatsign.cc: Parser::handleAtSign(Block&, AtDollar const&, int) s_locationValueStack Full name: Parser::s_locationValueStack Source: data.cc Used By: handleatsign.cc: Parser::handleAtSign(Block&, AtDollar const&, int) s_maxValue Full name: Terminal::s_maxValue Source: data.cc Used By: assignnonterminalnumbers.cc: Rules::assignNonTerminalNumbers() terminal1.cc: Terminal::Terminal(std::__cxx11::basic_string, std::allocator > const&, Symbol::Type, unsigned int, Terminal::Association, std::__cxx11::basic_string, std::allocator > const&) s_nConflicts Full name: RRConflict::s_nConflicts Source: data.cc Used By: comparereductions.cc: RRConflict::compareReductions(unsigned int) define.cc: State::define(Rules const&) s_nConflicts Full name: SRConflict::s_nConflicts Source: data.cc Used By: handlesrconflict.cc: SRConflict::handleSRconflict(unsigned int, __gnu_cxx::__normal_iterator > > const&, unsigned int) define.cc: State::define(Rules const&) s_nExpectedConflicts Full name: Rules::s_nExpectedConflicts Source: data.cc Used By: define.cc: State::define(Rules const&) parse.cc: Parser::executeAction(int) s_nHidden Full name: Parser::s_nHidden Source: data.cc Used By: nexthiddenname.cc: Parser::nextHiddenName[abi:cxx11]() s_nr Full name: Production::s_nr Source: data.cc Used By: production1.cc: Production::Production(Symbol const*, unsigned int) s_number Full name: NonTerminal::s_number Source: data.cc Used By: assignnonterminalnumbers.cc: Rules::assignNonTerminalNumbers() s_options Full name: Options::s_options Source: data.cc Used By: instance.cc: Options::instance() s_precedence Full name: Terminal::s_precedence Source: data.cc Used By: expectrules.cc: Parser::expectRules() parse.cc: Parser::executeAction(int) terminal1.cc: Terminal::Terminal(std::__cxx11::basic_string, std::allocator > const&, Symbol::Type, unsigned int, Terminal::Association, std::__cxx11::basic_string, std::allocator > const&) terminal2.cc: Terminal::Terminal(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&, Symbol::Type) s_semanticValue Full name: Parser::s_semanticValue Source: data.cc Used By: handledollar.cc: Parser::handleDollar(Block&, AtDollar const&, int) savedollar1.cc: Parser::saveDollar1(Block&, int) s_semanticValueStack Full name: Parser::s_semanticValueStack Source: data.cc Used By: handledollar.cc: Parser::handleDollar(Block&, AtDollar const&, int) savedollar1.cc: Parser::saveDollar1(Block&, int) s_startProduction Full name: Production::s_startProduction Source: data.cc Used By: initialstate.cc: State::initialState() s_startSymbol Full name: Rules::s_startSymbol Source: data.cc Used By: augmentgrammar.cc: Rules::augmentGrammar(Symbol*) derivesentence.cc: Grammar::deriveSentence() s_state Full name: State::s_state Source: data.cc Used By: allstates.cc: State::allStates() define.cc: State::define(Rules const&) determinelasets.cc: State::determineLAsets() findkernel.cc: State::findKernel(std::vector > const&) const inspecttransitions.cc: State::inspectTransitions(std::set, std::allocator >&) newstate.cc: State::newState() nextstate.cc: State::nextState(Next&) srtables.cc: Writer::srTables() const statesarray.cc: Writer::statesArray() const s_stateName Full name: StateType::s_stateName Source: data.cc Used By: srtable.cc: Writer::srTable(State const*, std::__cxx11::basic_string, std::allocator > const&, 
FBB::Table&, std::ostream&) s_stype__ Full name: Parser::s_stype__ Source: data.cc Used By: checkfirsttype.cc: Parser::checkFirstType() semtag.cc: Parser::semTag(char const*, AtDollar const&, bool (Parser::*)(std::__cxx11::basic_string, std::allocator > const&) const) const s_threadConst Full name: Writer::s_threadConst Source: data.cc Used By: srtable.cc: Writer::srTable(State const*, std::__cxx11::basic_string, std::allocator > const&, FBB::Table&, std::ostream&) statesarray.cc: Writer::statesArray() const writer0.cc: Writer::Writer(std::__cxx11::basic_string, std::allocator > const&, Rules const&) s_undefined Full name: NonTerminal::s_undefined Source: data.cc Used By: undefined.cc: NonTerminal::undefined(NonTerminal const*) showunusednonterminals.cc: Rules::showUnusedNonTerminals() const s_unused Full name: NonTerminal::s_unused Source: data.cc Used By: unused.cc: NonTerminal::unused(NonTerminal const*) showunusednonterminals.cc: Rules::showUnusedNonTerminals() const s_unused Full name: Production::s_unused Source: data.cc Used By: showunusedrules.cc: Rules::showUnusedRules() const unused.cc: Production::unused(Production const*) s_value Full name: Terminal::s_value Source: data.cc Used By: setvalue.cc: Terminal::setValue(unsigned int) terminal1.cc: Terminal::Terminal(std::__cxx11::basic_string, std::allocator > const&, Symbol::Type, unsigned int, Terminal::Association, std::__cxx11::basic_string, std::allocator > const&) s_valueSet Full name: Terminal::s_valueSet Source: data.cc Used By: setunique.cc: Terminal::setUnique(unsigned int) setvalue.cc: Terminal::setValue(unsigned int) s_yylex Full name: Options::s_yylex Source: data.cc Used By: setbasicstrings.cc: Options::setBasicStrings() s_YYText Full name: Options::s_YYText Source: data.cc Used By: setbasicstrings.cc: Options::setBasicStrings() saveDollar1(Block&, int) Full name: Parser::saveDollar1(Block&, int) Source: savedollar1.cc Used By: substituteblock.cc: Parser::substituteBlock(int, Block&) Scanner(std::__cxx11::basic_string, std::allocator > const&) Full name: Scanner::Scanner(std::__cxx11::basic_string, std::allocator > const&) Source: scanner1.cc Used By: parser1.cc: Parser::Parser(Rules&) ScannerBase(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&) Full name: ScannerBase::ScannerBase(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&) Source: lex.cc Used By: scanner1.cc: Scanner::Scanner(std::__cxx11::basic_string, std::allocator > const&) scannerH(std::ostream&) const Full name: Generator::scannerH(std::ostream&) const Source: scannerh.cc Used By: data.cc: GLOBALS data.cc 28data.o scannerObject(std::ostream&) const Full name: Generator::scannerObject(std::ostream&) const Source: scannerobject.cc Used By: data.cc: GLOBALS data.cc 28data.o selectSymbolic(Terminal const*, std::vector >&) Full name: Generator::selectSymbolic(Terminal const*, std::vector >&) Source: selectsymbolic.cc Used By: tokens.cc: Generator::tokens(std::ostream&) const semTag(char const*, AtDollar const&, bool (Parser::*)(std::__cxx11::basic_string, std::allocator > const&) const) const Full name: Parser::semTag(char const*, AtDollar const&, bool (Parser::*)(std::__cxx11::basic_string, std::allocator > const&) const) const Source: semtag.cc Used By: returnpolymorphic.cc: Parser::returnPolymorphic[abi:cxx11](AtDollar const&) const returnsingle.cc: Parser::returnSingle(AtDollar const&) const returnunion.cc: Parser::returnUnion[abi:cxx11](AtDollar 
const&) const setAccessorVariables() Full name: Options::setAccessorVariables() Source: setaccessorvariables.cc Used By: cleanup.cc: Parser::cleanup() setBasicStrings() Full name: Options::setBasicStrings() Source: setbasicstrings.cc Used By: setaccessorvariables.cc: Options::setAccessorVariables() setBooleans() Full name: Options::setBooleans() Source: setbooleans.cc Used By: setaccessorvariables.cc: Options::setAccessorVariables() setDebug(bool) Full name: ScannerBase::setDebug(bool) Source: lex.cc Used By: parser1.cc: Parser::Parser(Rules&) setFirst(NonTerminal*) Full name: NonTerminal::setFirst(NonTerminal*) Source: setfirst.cc Used By: determinefirst.cc: Rules::determineFirst() setHiddenAction(Block const&) Full name: Rules::setHiddenAction(Block const&) Source: sethiddenaction.cc Used By: nestedblock.cc: Parser::nestedBlock(Block&) setIdx(RRData::Keep, unsigned int, unsigned int) Full name: RRData::setIdx(RRData::Keep, unsigned int, unsigned int) Source: setidx.cc Used By: comparereductions.cc: RRConflict::compareReductions(unsigned int) setItems() Full name: State::setItems() Source: setitems.cc Used By: construct.cc: State::construct() setLineNrs() const Full name: Scanner::setLineNrs() const Source: setlinenrs.cc Used By: eoln.cc: Scanner::eoln() handlexstring.cc: Scanner::handleXstring(unsigned int) lex.cc: Scanner::executeAction__(unsigned int) settags.cc: Scanner::setTags() const setLocationDecl(std::__cxx11::basic_string, std::allocator > const&) Full name: Options::setLocationDecl(std::__cxx11::basic_string, std::allocator > const&) Source: setlocationdecl.cc Used By: parse.cc: Parser::executeAction(int) setLtype() Full name: Options::setLtype() Source: setltype.cc Used By: parse.cc: Parser::executeAction(int) setNonTerminalTypes() Full name: Rules::setNonTerminalTypes() Source: setnonterminaltypes.cc Used By: expectrules.cc: Parser::expectRules() setOpt(std::__cxx11::basic_string, std::allocator >*, char const*, std::__cxx11::basic_string, std::allocator > const&) Full name: Options::setOpt(std::__cxx11::basic_string, std::allocator >*, char const*, std::__cxx11::basic_string, std::allocator > const&) Source: setopt.cc Used By: setbasicstrings.cc: Options::setBasicStrings() setPath(std::__cxx11::basic_string, std::allocator >*, int, std::__cxx11::basic_string, std::allocator > const&, char const*, char const*) Full name: Options::setPath(std::__cxx11::basic_string, std::allocator >*, int, std::__cxx11::basic_string, std::allocator > const&, char const*, char const*) Source: setpath2.cc Used By: setpathstrings.cc: Options::setPathStrings() setPathStrings() Full name: Options::setPathStrings() Source: setpathstrings.cc Used By: setaccessorvariables.cc: Options::setAccessorVariables() setPolymorphicDecl() Full name: Parser::setPolymorphicDecl() Source: setpolymorphicdecl.cc Used By: parse.cc: Parser::executeAction(int) setPolymorphicDecl() Full name: Options::setPolymorphicDecl() Source: setpolymorphicdecl.cc Used By: setpolymorphicdecl.cc: Parser::setPolymorphicDecl() setPrecedence(int) Full name: Parser::setPrecedence(int) Source: setprecedence.cc Used By: parse.cc: Parser::executeAction(int) setPrecedence(Terminal const*) Full name: Production::setPrecedence(Terminal const*) Source: setprecedence.cc Used By: setprecedence.cc: Rules::setPrecedence(Terminal const*) updateprecedence.cc: Rules::updatePrecedence(Production*, std::vector > const&) setPrecedence(Terminal const*) Full name: Rules::setPrecedence(Terminal const*) Source: setprecedence.cc Used By: setprecedence.cc: 
Parser::setPrecedence(int) setPrintTokens() Full name: Options::setPrintTokens() Source: setprinttokens.cc Used By: parse.cc: Parser::executeAction(int) setQuotedStrings() Full name: Options::setQuotedStrings() Source: setquotedstrings.cc Used By: setaccessorvariables.cc: Options::setAccessorVariables() setRequiredTokens(unsigned int) Full name: Options::setRequiredTokens(unsigned int) Source: setrequiredtokens.cc Used By: parse.cc: Parser::executeAction(int) setSkeletons() Full name: Options::setSkeletons() Source: setskeletons.cc Used By: setaccessorvariables.cc: Options::setAccessorVariables() setStart() Full name: Parser::setStart() Source: setstart.cc Used By: parse.cc: Parser::executeAction(int) setStype() Full name: Options::setStype() Source: setstype.cc Used By: parse.cc: Parser::executeAction(int) setTags() const Full name: Scanner::setTags() const Source: settags.cc Used By: handlexstring.cc: Scanner::handleXstring(unsigned int) popstream.cc: Scanner::popStream() scanner1.cc: Scanner::Scanner(std::__cxx11::basic_string, std::allocator > const&) setUnionDecl() Full name: Parser::setUnionDecl() Source: setuniondecl.cc Used By: parse.cc: Parser::executeAction(int) setUnionDecl(std::__cxx11::basic_string, std::allocator > const&) Full name: Options::setUnionDecl(std::__cxx11::basic_string, std::allocator > const&) Source: setuniondecl.cc Used By: setuniondecl.cc: Parser::setUnionDecl() setUnique(unsigned int) Full name: Terminal::setUnique(unsigned int) Source: setunique.cc Used By: setvalue.cc: Terminal::setValue(unsigned int) setValue(unsigned int) Full name: Terminal::setValue(unsigned int) Source: setvalue.cc Used By: definetokenname.cc: Parser::defineTokenName(std::__cxx11::basic_string, std::allocator > const&, bool) setVerbosity() Full name: Options::setVerbosity() Source: setverbosity.cc Used By: cleanup.cc: Parser::cleanup() showConflicts(Rules const&) const Full name: RRConflict::showConflicts(Rules const&) const Source: showconflicts.cc Used By: define.cc: State::define(Rules const&) showConflicts(Rules const&) const Full name: SRConflict::showConflicts(Rules const&) const Source: showconflicts.cc Used By: define.cc: State::define(Rules const&) showFilenames() const Full name: Options::showFilenames() const Source: showfilenames.cc Used By: cleanup.cc: Parser::cleanup() solveByAssociation() const Full name: Next::solveByAssociation() const Source: solvebyassociation.cc Used By: handlesrconflict.cc: SRConflict::handleSRconflict(unsigned int, __gnu_cxx::__normal_iterator > > const&, unsigned int) solveByPrecedence(Symbol const*) const Full name: Next::solveByPrecedence(Symbol const*) const Source: solvebyprecedence.cc Used By: handlesrconflict.cc: SRConflict::handleSRconflict(unsigned int, __gnu_cxx::__normal_iterator > > const&, unsigned int) SRConflict(std::vector > const&, std::vector > const&, std::vector > const&) Full name: SRConflict::SRConflict(std::vector > const&, std::vector > const&, std::vector > const&) Source: srconflict1.cc Used By: state1.cc: State::State(unsigned int) srTable(State const*, std::__cxx11::basic_string, std::allocator > const&, FBB::Table&, std::ostream&) Full name: Writer::srTable(State const*, std::__cxx11::basic_string, std::allocator > const&, FBB::Table&, std::ostream&) Source: srtable.cc Used By: srtables.cc: Writer::srTables() const srTables() const Full name: Writer::srTables() const Source: srtables.cc Used By: staticdata.cc: Generator::staticData(std::ostream&) const standard(std::ostream&) const Full name: 
Production::standard(std::ostream&) const Source: standard.cc Used By: showrules.cc: Rules::showRules() const productioninfo.cc: Writer::productionInfo(Production const*, std::ostream&) checkemptyblocktype.cc: Parser::checkEmptyBlocktype() checkfirsttype.cc: Parser::checkFirstType() errindextoolarge.cc: Parser::errIndexTooLarge(AtDollar const&, int) const errnosemantic.cc: Parser::errNoSemantic(char const*, AtDollar const&, std::__cxx11::basic_string, std::allocator > const&) const handleproductionelement.cc: Parser::handleProductionElement(Meta__::SType&) negativeindex.cc: Parser::negativeIndex(AtDollar const&) const substituteblock.cc: Parser::substituteBlock(int, Block&) warnautoignored.cc: Parser::warnAutoIgnored(char const*, AtDollar const&) const warnautooverride.cc: Parser::warnAutoOverride(AtDollar const&) const warnuntaggedvalue.cc: Parser::warnUntaggedValue(AtDollar const&) const unused.cc: Production::unused(Production const*) State(unsigned int) Full name: State::State(unsigned int) Source: state1.cc Used By: newstate.cc: State::newState() StateItem(Item const&) Full name: StateItem::StateItem(Item const&) Source: stateitem2.cc Used By: addproductions.cc: State::addProductions(Symbol const*, unsigned int) addstate.cc: State::addState(std::vector > const&) initialstate.cc: State::initialState() statesArray() const Full name: Writer::statesArray() const Source: statesarray.cc Used By: staticdata.cc: Generator::staticData(std::ostream&) const staticData(std::ostream&) const Full name: Generator::staticData(std::ostream&) const Source: staticdata.cc Used By: data.cc: GLOBALS data.cc 28data.o storeFilename(std::__cxx11::basic_string, std::allocator > const&) Full name: Production::storeFilename(std::__cxx11::basic_string, std::allocator > const&) Source: storeFilename.cc Used By: newrule.cc: Rules::newRule(NonTerminal*, std::__cxx11::basic_string, std::allocator > const&, unsigned int) stype(std::ostream&) const Full name: Generator::stype(std::ostream&) const Source: stype.cc Used By: data.cc: GLOBALS data.cc 28data.o substituteBlock(int, Block&) Full name: Parser::substituteBlock(int, Block&) Source: substituteblock.cc Used By: installaction.cc: Parser::installAction(Block&) nestedblock.cc: Parser::nestedBlock(Block&) summarizeActions() Full name: State::summarizeActions() Source: summarizeactions.cc Used By: define.cc: State::define(Rules const&) Symbol(std::__cxx11::basic_string, std::allocator > const&, Symbol::Type, std::__cxx11::basic_string, std::allocator > const&) Full name: Symbol::Symbol(std::__cxx11::basic_string, std::allocator > const&, Symbol::Type, std::__cxx11::basic_string, std::allocator > const&) Source: symbol1.cc Used By: nonterminal1.cc: NonTerminal::NonTerminal(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&, Symbol::Type) terminal1.cc: Terminal::Terminal(std::__cxx11::basic_string, std::allocator > const&, Symbol::Type, unsigned int, Terminal::Association, std::__cxx11::basic_string, std::allocator > const&) terminal2.cc: Terminal::Terminal(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&, Symbol::Type) symbolicNames() const Full name: Writer::symbolicNames() const Source: symbolicnames.cc Used By: staticdata.cc: Generator::staticData(std::ostream&) const Terminal(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&, Symbol::Type) Full name: Terminal::Terminal(std::__cxx11::basic_string, 
std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&, Symbol::Type) Source: terminal2.cc Used By: data.cc: GLOBALS data.cc 12data.o Terminal(std::__cxx11::basic_string, std::allocator > const&, Symbol::Type, unsigned int, Terminal::Association, std::__cxx11::basic_string, std::allocator > const&) Full name: Terminal::Terminal(std::__cxx11::basic_string, std::allocator > const&, Symbol::Type, unsigned int, Terminal::Association, std::__cxx11::basic_string, std::allocator > const&) Source: terminal1.cc Used By: defineterminal.cc: Parser::defineTerminal(std::__cxx11::basic_string, std::allocator > const&, Symbol::Type) useterminal.cc: Parser::useTerminal() terminalSymbol(Terminal const*, std::ostream&) Full name: Writer::terminalSymbol(Terminal const*, std::ostream&) Source: terminalsymbol.cc Used By: symbolicnames.cc: Writer::symbolicNames() const termToNonterm(Symbol*, Symbol*) Full name: Rules::termToNonterm(Symbol*, Symbol*) Source: termtononterm.cc Used By: requirenonterminal.cc: Parser::requireNonTerminal(std::__cxx11::basic_string, std::allocator > const&) threading(std::ostream&) const Full name: Generator::threading(std::ostream&) const Source: threading.cc Used By: data.cc: GLOBALS data.cc 28data.o tokens(std::ostream&) const Full name: Generator::tokens(std::ostream&) const Source: tokens.cc Used By: data.cc: GLOBALS data.cc 28data.o transition(Next const&, FBB::Table&) Full name: Writer::transition(Next const&, FBB::Table&) Source: transition.cc Used By: transitions.cc: Writer::transitions(FBB::Table&, std::vector > const&) transition(std::ostream&) const Full name: Next::transition(std::ostream&) const Source: transition.cc Used By: data.cc: GLOBALS data.cc 23data.o transitionKernel(std::ostream&) const Full name: Next::transitionKernel(std::ostream&) const Source: transitionkernel.cc Used By: insertext.cc: State::insertExt(std::ostream&) const transitions(FBB::Table&, std::vector > const&) Full name: Writer::transitions(FBB::Table&, std::vector > const&) Source: transitions.cc Used By: srtable.cc: Writer::srTable(State const*, std::__cxx11::basic_string, std::allocator > const&, FBB::Table&, std::ostream&) undefined(NonTerminal const*) Full name: NonTerminal::undefined(NonTerminal const*) Source: undefined.cc Used By: showunusednonterminals.cc: Rules::showUnusedNonTerminals() const undelimit(std::__cxx11::basic_string, std::allocator > const&) Full name: Options::undelimit(std::__cxx11::basic_string, std::allocator > const&) Source: undelimit.cc Used By: handlexstring.cc: Scanner::handleXstring(unsigned int) cleandir.cc: Options::cleanDir(std::__cxx11::basic_string, std::allocator >&, bool) setopt.cc: Options::setOpt(std::__cxx11::basic_string, std::allocator >*, char const*, std::__cxx11::basic_string, std::allocator > const&) definepathname.cc: Parser::definePathname(std::__cxx11::basic_string, std::allocator >*) unused(NonTerminal const*) Full name: NonTerminal::unused(NonTerminal const*) Source: unused.cc Used By: showunusednonterminals.cc: Rules::showUnusedNonTerminals() const unused(Production const*) Full name: Production::unused(Production const*) Source: unused.cc Used By: showunusedrules.cc: Rules::showUnusedRules() const unused(Terminal const*) Full name: Terminal::unused(Terminal const*) Source: unused.cc Used By: showunusedterminals.cc: Rules::showUnusedTerminals() const updatePrecedence(Production*, std::vector > const&) Full name: Rules::updatePrecedence(Production*, std::vector > const&) Source: updateprecedence.cc Used By: 
updateprecedences.cc: Rules::updatePrecedences() useSymbol() Full name: Parser::useSymbol() Source: usesymbol.cc Used By: parse.cc: Parser::executeAction(int) useTerminal() Full name: Parser::useTerminal() Source: useterminal.cc Used By: parse.cc: Parser::executeAction(int) v_firstSet() const Full name: NonTerminal::v_firstSet() const Source: v.cc Used By: destructor.cc: NonTerminal::~NonTerminal() v_value() const Full name: NonTerminal::v_value() const Source: v.cc Used By: destructor.cc: NonTerminal::~NonTerminal() valueQuotedName(std::ostream&) const Full name: Terminal::valueQuotedName(std::ostream&) const Source: valuequotedname.cc Used By: showterminals.cc: Rules::showTerminals() const showunusedterminals.cc: Rules::showUnusedTerminals() const vectorIdx(unsigned int) const Full name: Production::vectorIdx(unsigned int) const Source: vectoridx.cc Used By: stype.cc: Rules::sType[abi:cxx11](unsigned int) const beyonddotisnonterminal.cc: Item::beyondDotIsNonTerminal() const firstbeyonddot.cc: Item::firstBeyondDot(FirstSet*) const hasrightofdot.cc: Item::hasRightOfDot(Symbol const&) const notreducible.cc: State::notReducible(unsigned int) warnautooverride.cc: Parser::warnAutoOverride(AtDollar const&) const version Full name: version Source: version.cc Used By: usage.cc: usage(std::__cxx11::basic_string, std::allocator > const&) filter.cc: Generator::filter(std::istream&, std::ostream&, bool) const visitReduction(unsigned int) Full name: SRConflict::visitReduction(unsigned int) Source: visitreduction.cc Used By: inspect.cc: SRConflict::inspect() visitReduction(unsigned int) Full name: RRConflict::visitReduction(unsigned int) Source: visitreduction.cc Used By: inspect.cc: RRConflict::inspect() warnAutoIgnored(char const*, AtDollar const&) const Full name: Parser::warnAutoIgnored(char const*, AtDollar const&) const Source: warnautoignored.cc Used By: semtag.cc: Parser::semTag(char const*, AtDollar const&, bool (Parser::*)(std::__cxx11::basic_string, std::allocator > const&) const) const warnAutoOverride(AtDollar const&) const Full name: Parser::warnAutoOverride(AtDollar const&) const Source: warnautooverride.cc Used By: semtag.cc: Parser::semTag(char const*, AtDollar const&, bool (Parser::*)(std::__cxx11::basic_string, std::allocator > const&) const) const warnUntaggedValue(AtDollar const&) const Full name: Parser::warnUntaggedValue(AtDollar const&) const Source: warnuntaggedvalue.cc Used By: semtag.cc: Parser::semTag(char const*, AtDollar const&, bool (Parser::*)(std::__cxx11::basic_string, std::allocator > const&) const) const Writer(std::__cxx11::basic_string, std::allocator > const&, Rules const&) Full name: Writer::Writer(std::__cxx11::basic_string, std::allocator > const&, Rules const&) Source: writer0.cc Used By: generator1.cc: Generator::Generator(Rules const&, std::unordered_map, std::allocator >, std::__cxx11::basic_string, std::allocator >, std::hash, std::allocator > >, std::equal_to, std::allocator > >, std::allocator, std::allocator > const, std::__cxx11::basic_string, std::allocator > > > > const&) year Full name: year Source: version.cc Used By: usage.cc: usage(std::__cxx11::basic_string, std::allocator > const&) ~Element() Full name: Element::~Element() Source: destructor.cc Used By: destructor.cc: Symbol::~Symbol() symbol1.cc: Symbol::Symbol(std::__cxx11::basic_string, std::allocator > const&, Symbol::Type, std::__cxx11::basic_string, std::allocator > const&) ~Symbol() Full name: Symbol::~Symbol() Source: destructor.cc Used By: destructor.cc: NonTerminal::~NonTerminal() 
destructor.cc: Terminal::~Terminal() terminal1.cc: Terminal::Terminal(std::__cxx11::basic_string, std::allocator > const&, Symbol::Type, unsigned int, Terminal::Association, std::__cxx11::basic_string, std::allocator > const&) terminal2.cc: Terminal::Terminal(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&, Symbol::Type) ~Terminal() Full name: Terminal::~Terminal() Source: destructor.cc Used By: data.cc: GLOBALS data.cc 12data.o bisonc++-4.13.01/block/0000755000175000017500000000000012633316117013344 5ustar frankfrankbisonc++-4.13.01/block/iddollar.cc0000644000175000017500000000065312633316117015451 0ustar frankfrank#include "block.ih" // $$ void Block::IDdollar(size_t lineNr, string const &text) { size_t begin = text.find('<') + 1; d_atDollar.push_back( AtDollar( AtDollar::DOLLAR, length(), lineNr, text, text.substr( begin, text.find('>') - begin ), numeric_limits::max() ) ); append(text); } bisonc++-4.13.01/block/clear.cc0000644000175000017500000000014312633316117014737 0ustar frankfrank#include "block.ih" void Block::clear() { erase(); d_atDollar.clear(); d_count = 0; } bisonc++-4.13.01/block/opfuncharp.cc0000644000175000017500000000022512633316117016017 0ustar frankfrank#include "block.ih" bool Block::operator()(string const &text) { if (d_count == 0) return false; *this += text; return true; } bisonc++-4.13.01/block/operatorinsert.cc0000644000175000017500000000042112633316117016730 0ustar frankfrank#include "block.ih" std::ostream &operator<<(std::ostream &out, Block const &blk) { out << '`' << static_cast(blk) << "'\n"; copy(blk.d_atDollar.rbegin(), blk.d_atDollar.rend(), ostream_iterator(out, "\n")); return out; } bisonc++-4.13.01/block/dollarindex.cc0000644000175000017500000000042712633316117016163 0ustar frankfrank#include "block.ih" // $-?NR or $-?NR. void Block::dollarIndex(size_t lineNr, string const &text, bool member) { d_atDollar.push_back( AtDollar(AtDollar::DOLLAR, length(), lineNr, text, stol(text.substr(1)), member) ); append(text); } bisonc++-4.13.01/block/dollar.cc0000644000175000017500000000055112633316117015131 0ustar frankfrank#include "block.ih" // called at @@, $$ or $$. in a block void Block::dollar(size_t lineNr, string const &text, bool member) { d_atDollar.push_back( AtDollar( text[0] == '$' ? AtDollar::DOLLAR : AtDollar::AT, length(), lineNr, text, numeric_limits::max(), member ) ); append(text); } bisonc++-4.13.01/block/open.cc0000644000175000017500000000073112633316117014615 0ustar frankfrank#include "block.ih" void Block::open(size_t lineno, string const &source) { if (d_count) // existing block ? *this += "{"; // add open curly bracket to the block's code else { // assign line if no braces were open yet clear(); this->string::operator=("{"); d_line = lineno; d_source = source; } ++d_count; // here, as clear() will reset d_count } bisonc++-4.13.01/block/block.h0000644000175000017500000000617412633316117014617 0ustar frankfrank#ifndef _INCLUDED_BLOCK_ #define _INCLUDED_BLOCK_ #include #include #include #include "../atdollar/atdollar.h" class Block: private std::string { friend std::ostream &operator<<(std::ostream &out, Block const &blk); size_t d_line; std::string d_source; // the source in which the block // was found. 
The block's text itself // is in the Block's base class int d_count; // curly braces nesting count, handled // by clear(), close(), and open() std::vector d_atDollar; // @- and $-specifications public: Block(); using std::string::empty; using std::string::find; using std::string::find_first_not_of; using std::string::find_first_of; using std::string::find_last_of; using std::string::insert; using std::string::length; using std::string::operator[]; using std::string::replace; using std::string::substr; void clear(); // clears the previous block contents void open(size_t lineno, std::string const &source); bool close(); void dollar(size_t lineNr, std::string const &matched, // $$ or $$. bool member); void atIndex(size_t lineNr, std::string const &matched); // @NR // $-?NR or $-?NR. void dollarIndex(size_t lineNr, std::string const &matched, bool member); void IDdollar(size_t lineNr, std::string const &matched); // $$ // $-?NR void IDindex(size_t lineNr, std::string const &matched); void operator+=(std::string const &text); operator bool() const; // return true if a block is active // add text if a block is active, bool operator()(std::string const &text); // returns true if active std::vector::const_reverse_iterator rbeginAtDollar() const; std::vector::const_reverse_iterator rendAtDollar() const; size_t line() const; std::string const &source() const; // the block's source file std::string const &str() const; // the block's contents }; inline Block::Block() : d_count(0) {} inline void Block::operator+=(std::string const &text) { append(text); } inline Block::operator bool() const { return d_count; } inline std::vector::const_reverse_iterator Block::rbeginAtDollar() const { return d_atDollar.rbegin(); } inline std::vector::const_reverse_iterator Block::rendAtDollar() const { return d_atDollar.rend(); } inline size_t Block::line() const { return d_line; } inline std::string const &Block::source() const { return d_source; } inline std::string const &Block::str() const { return *this; } #endif bisonc++-4.13.01/block/idindex.cc0000644000175000017500000000055012633316117015277 0ustar frankfrank#include "block.ih" void Block::IDindex(size_t lineNr, string const &text) { size_t begin = text.find('<') + 1; size_t end = text.find('>'); d_atDollar.push_back( AtDollar( AtDollar::DOLLAR, length(), lineNr, text, text.substr(begin, end - begin), stol(text.substr(end + 1)) ) ); append(text); } bisonc++-4.13.01/block/atindex.cc0000644000175000017500000000040512633316117015306 0ustar frankfrank#include "block.ih" // @NR void Block::atIndex(size_t lineNr, string const &text) { d_atDollar.push_back( AtDollar( AtDollar::AT, length(), lineNr, text, stol(text.substr(1)), false ) ); append(text); } bisonc++-4.13.01/block/frame0000644000175000017500000000005612633316117014362 0ustar frankfrank#include "block.ih" void Block::() const { } bisonc++-4.13.01/block/block.ih0000644000175000017500000000017212633316117014760 0ustar frankfrank#include "block.h" #include #include #include #include using namespace std; bisonc++-4.13.01/block/close.cc0000644000175000017500000000013212633316117014754 0ustar frankfrank#include "block.ih" bool Block::close() { *this += "}"; return --d_count == 0; } bisonc++-4.13.01/build0000755000175000017500000001137012634743426013311 0ustar frankfrank#!/usr/bin/icmake -qt/tmp/bisonc++ #include "icmconf" string g_logPath, g_cwd = chdir(""); // initial working directory, ends in / int g_echo = ON; #include "icmake/cuteoln" #include "icmake/backtick" #include "icmake/setopt" #include "icmake/run" 
#include "icmake/md" #include "icmake/special" #include "icmake/precompileheaders" #include "icmake/pathfile" #include "icmake/findall" #include "icmake/loginstall" #include "icmake/logrecursive" #include "icmake/logzip" #include "icmake/logfile" #include "icmake/uninstall" #include "icmake/clean" #include "icmake/manpage" #include "icmake/manual" #include "icmake/github" #include "icmake/destinstall" #include "icmake/install" void main(int argc, list argv) { string option; string strip; int idx; for (idx = listlen(argv); idx--; ) { if (argv[idx] == "-q") { g_echo = OFF; argv -= (list)"-q"; } else if (argv[idx] == "-P") { g_gch = 0; argv -= (list)"-P"; } else if (strfind(argv[idx], "LOG:") == 0) { g_logPath = argv[idx]; argv -= (list)g_logPath; g_logPath = substr(g_logPath, 4, strlen(g_logPath)); } } echo(g_echo); option = argv[1]; if (option == "clean") clean(0); if (option == "distclean") clean(1); if (option != "") special(); if (option == "install") install(argv[2], argv[3]); if (option == "uninstall") uninstall(argv[2]); if (option == "github") github(); if (option == "man") manpage(); if (option == "manual") manual(); if (option == "library") { precompileHeaders(); system("icmbuild library"); exit(0); } if (argv[2] == "strip") strip = "strip"; if (option == "program") { precompileHeaders(); system("icmbuild program " + strip); exit(0); } if (option == "oxref") { precompileHeaders(); system("icmbuild program " + strip); run("oxref -fxs tmp/lib" LIBRARY ".a > " PROGRAM ".xref"); exit(0); } if (option == "scanner") { chdir("scanner"); system("flexc++ -i scanner.ih lexer"); chdir(".."); system("icmbuild program " + strip); exit(0); } printf("Usage: build [-q] what\n" "Where\n" " [-q]: run quietly, do not show executed commands\n" "`what' is one of:\n" " clean - clean up remnants of previous " "compilations\n" " distclean - clean + fully remove tmp/\n" " library - build " PROGRAM "'s library\n" " man - build the man-page (requires Yodl)\n" " manual - build the manual (requires Yodl)\n" " program [strip] - build " PROGRAM " (optionally strip the\n" " executable)\n" " oxref [strip] - same a `program', also builds xref file\n" " using oxref\n" " scanner [strip] - build new scanner, then 'build program'\n" " install [LOG:path] selection [base] -\n" " to install the software in the locations " "defined \n" " in the INSTALL.im file, optionally below " "base.\n" " LOG:path is optional: if specified `path' " "is the\n" " logfile on which the installation log is " "written.\n" " selection can be\n" " x, to install all components,\n" " or a combination of:\n" " a (additional documentation),\n" " b (binary program),\n" " d (standard documentation),\n" " m (man-pages)\n" " s (skeleton files)\n" " u (user guide)\n" " uninstall logfile - remove files and empty directories listed\n" " in the file 'logfile'\n" " github - prepare github's gh-pages update\n" " (internal use only)\n" "\n" ); exit(0); } bisonc++-4.13.01/changelog0000644000175000017500000016637312634777410014154 0ustar frankfrankbisonc++ (4.13.01) * slightly modified the icmake build scripts to prevent multiple inclusions of some of the installed files. -- Frank B. Brokken Fri, 18 Dec 2015 13:41:08 +0100 bisonc++ (4.13.00) * 'build install' supports an optional LOG: argument: the (relative or absolute) path to a installation log file. The environment variable BISONCPP is no longer used. * Updated the usage info displayed by `./build', altered the procedure to install the files at their final destinations. 
* Following a suggestion made by gendx the polymorphic class Semantic now defines a variadic template constructor, allowing Semantic objects (and thus SType::emplace) to be initialized (c.q. called) using any set of argument types that are supported by Semantic's DataType. Also, the (internally used) classes HasDefault and NoDefault are now superfluous and were removed (from skeletons/bisonc++polymorphic and skeletons/bisonc++polymorphic.inline). * Adapted the icmake build files to icmake 8.00.04 -- Frank B. Brokken Sun, 13 Dec 2015 16:29:41 +0100 bisonc++ (4.12.03) * Kevin Brodsky observed that the installation scripts used 'chdir' rather than 'cd'. Fixed in this release. * Kevin Brodsky also observed that the combined size of all precompiled headers might exceed some disks capacities. The option -P was added to the ./build script to prevent the use of precompiled headers. -- Frank B. Brokken Mon, 05 Oct 2015 20:17:56 +0200 bisonc++ (4.12.02) * Refined the 'build uninstall' procedure -- Frank B. Brokken Sun, 04 Oct 2015 16:27:18 +0200 bisonc++ (4.12.01) * The implementation of the ./build script was improved. -- Frank B. Brokken Thu, 01 Oct 2015 18:41:45 +0200 bisonc++ (4.12.00) * Added 'build uninstall'. This command only works if, when calling one of the 'build install' alternatives and when calling 'build uninstall' the environment variable BISONCPP contains the (preferably absolute) filename of a file on which installed files and directories are logged. Note that 'build (dist)clean' does not remove the file pointed at by the BISONCPP environment variable, unless that file happpens to be in a directory removed by 'build (dist)clean'. See also the file INSTALL. Defining the BISONCPP environment variable as ~/.bisoncpp usually works well. * Guillaume Endignoux offered several valuable suggestions: - Classes may not have default constructors, but may still be used if the default $$ = $1 action is not used. This can now be controlled using option --no-default-action-return (-N). When this option is specified the default $$ = $1 assignment of semantic values when returning from an action block isn't provided. When this option is specified then Bisonc++ generates a warning for typed rules (non-terminals) whose action blocks do not provide an explicit $$ return value. - To assign a value to $$ a member `emplace' is provided, expecting the arguments of the type represented by $$. - In cases where a $x.get() cannot return a reference to a value matching tt(Tag::NAME) and the associated type does not provide a default constructor this function throws an exception reporting STYPE::get: no default constructor available * Bisonc++'s documentation about using polymorphic values was modified, in particular the information about how the various polymorphic values can be assigned and retrieved. -- Frank B. Brokken Tue, 29 Sep 2015 11:48:18 +0200 bisonc++ (4.11.00) * Cleanup of the manual, in particular how lexical scanners can access the various values of polymorphic semantic value types (cf. section `Polymorphism and multiple semantic values'). The man-page was modified accordingly. * The manual-stamp file is no longer used. Calling 'build manual' now always (re)builds the manual. The same holds true for 'build man'. * The in version 4.08.00 removed const members were reinstalled, as they are required in situations where, e.g., a function defines an STYPE__ const * parameter. * Added 'build uninstall'. 
This command only works if, when calling one of the 'build install' alternatives and when calling 'build uninstall' the environment variable BISONCPP contains the (preferably absolute) filename of a file on which installed files and directories are logged. Note that 'build (dist)clean' does not remove the file pointed at by the BISONCPP environment variable, unless that file happpens to be in a directory removed by 'build (dist)clean'. See also the file INSTALL. Defining the BISONCPP environment variable as ~/.bisoncpp usually works well. * The INSTALL file was updated to the current state of affairs. * Removed the file parser/reader, which contained code generated by bison. It was nowhere used and I simply couldn't see why it was added to the parser's directory at all. * Removed the file 'distribution' from this directory's parent directory. It is not used, and was superseded by the file sourcetar (both files are Internal Use Only). * Removed the file documentation/bison.ps.org/bison.ps.gz: it harbored an compression error (already at the very first bisonc++ release), and the bison documentation in html format remains part of the bisonc++ distribution. -- Frank B. Brokken Sun, 30 Aug 2015 11:13:57 +0200 bisonc++ (4.10.01) * Production rules of non-terminal symbols that immediately follow dot positions of existing items are added as new (implied) items to that state's set of items. The --construction option no longer shows the indices of such newly added items as this information can easily be obtained from the provided construction output. -- Frank B. Brokken Sun, 17 May 2015 16:54:13 +0200 bisonc++ (4.10.00) * FOLLOW sets are not used when analyzing LALR(1) grammars. The class FollowSet and all operations on follow sets were removed. * The LA set computation algorithm was reimplemented, a description of the new algorithm is included in the manual and in several source files, in particular state/determinelasets.cc. Both the state items' LA computation and the LA propagation algorithms were completely reimplemented. * Rules causing conflicts (i.e., conflict remaining after processing %left, %right, %nonassoc and/or %expect) as wel as the involved LA characters re briefly mentioned immediately following the SR/RR conflict-counts. * The class Symtab now uses an unordered_map rather than a mere (ordered) map. * The class-dependency diagram (README.class-setup) was updated to reflect the latest changes. Same for the file CLASSES which is used by the build script. * Added the file `required' listing the non-standard software that is required to build bisonc++ and its user guide / man-page -- Frank B. Brokken Sun, 17 May 2015 13:13:55 +0200 bisonc++ (4.09.02) * Wilko Kuiper reported an annoying bug in the skeleton lex.in, causing the compilation of parser.hh to fail. This release fixes that bug. -- Frank B. Brokken Mon, 28 Jul 2014 16:46:35 +0200 bisonc++ (4.09.01) * $#$#@ !! Forgot to update the help-info (bisonc++ --help) to reflect the new -D option. Now fixed. -- Frank B. Brokken Sun, 11 May 2014 09:05:48 +0200 bisonc++ (4.09.00) * Added option --no-decoration (-D), suppressing the actions that are normally associated with matched rules. -- Frank B. Brokken Sat, 10 May 2014 11:58:46 +0200 bisonc++ (4.08.01) * Members of the class `Generator' generating a substantial amount of code now read skeleton files instead of strings which are defined in these functions' bodies. * Added new skeleton files for the abovementioned functions. 
The names of these skeleton files are identical to the matching filenames in generator/, but use extensions `.in' -- Frank B. Brokken Mon, 31 Mar 2014 11:45:41 +0200 bisonc++ (4.08.00) * std::shared_ptr doesn't slice: virtual ~Base() and dynamic_casts removed from the generated parserbase.h files * %polymorphicimpl removed from skeleton/bisonc++, matching files from Generator * The implementation of polymorphic semantic values was simplified. Const members were removed from polymorphic semantic value classes; ReturnType get() const and ReturnType data() const are no longer required and were removed. -- Frank B. Brokken Sun, 02 Mar 2014 11:51:38 +0100 bisonc++ (4.07.02) * Changed 'class SType into struct SType in skeletons/polymorphic, since all its members are public anyway. * Class header and class implementation header files are no longer overwritten at bisonc++ runs. * Running './build program' no longer by default uses -g (see INSTALL.im) -- Frank B. Brokken Mon, 17 Feb 2014 13:43:16 +0100 bisonc++ (4.07.01) * Fixed segfaults (encountered with 4.07.00) caused by for-statement range specification errors. -- Frank B. Brokken Sun, 16 Feb 2014 15:46:45 +0100 bisonc++ (4.07.00) * Generating files is prevented when errors in option/declaration specifications are encountered. All errors in option/declaration specifications (instead of just the first error that is encountered) are now reported. -- Frank B. Brokken Fri, 14 Feb 2014 14:53:33 +0100 bisonc++ (4.06.00) * Repaired buggy handling of some options/directives * Prevented spurious option warnings sometimes generated when options aren't specified * Action blocks associated with rules may contain raw string literals. -- Frank B. Brokken Sun, 09 Feb 2014 11:31:14 +0100 bisonc++ (4.05.00) * Added the directive %scanner-class-name specifying the class name of the scanner to use in combination with the %scanner directive * re-installed the --namespace option (see the next item) * Warnings are issued when options or directives are specified wchich are ignored because the target file (e.g., parser.h, parser.hh) already exists. These warnings are not issued for parse.cc and parserbase.h, which are by default rewritten at each bisonc++ run. These warnings are issued for the `class-header', `class-name', `baseclass-header', `namespace', `scanner', `scanner-class-name' and `scanner-token-function' options/directives. * The --force-class-header and --force-implementation-header options were removed: 'rm ...' should be used instead. * man-page and manual updated * CLASSES class dependencies updated, icmconf's USE_ALL activated -- Frank B. Brokken Sat, 10 Aug 2013 10:16:17 +0200 bisonc++ (4.04.01) * Removed the possibility to specify path names for the --baseclass-header, --class-header, --implementation-header, and --parsefun-source options (and corresponding directives). Path names for generated files should be specified using the target-directory option or directive. -- Frank B. Brokken Mon, 27 May 2013 12:12:58 +0200 bisonc++ (4.04.00) * Repaired %target-directory not recognizing path-characters and not removing surrounding "-delimiters. * The --baseclass-header, --class-header, --implementation-header, and --parsefun-source options (and corresponding directives) now also accept path-specifications. * The man-page and manual have been updated accordingly. -- Frank B. Brokken Sun, 26 May 2013 14:22:50 +0200 bisonc++ (4.03.00) * Bisonc++ before 4.03.00 failed to notice the --debug option. Now repaired. 
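Several entries above (4.13.00, 4.12.00, 4.08.00, 4.07.02) refer to the polymorphic semantic-value machinery generated from the %polymorphic skeletons: per-tag Semantic classes derived from a light-weight base class, held through a std::shared_ptr inside an SType value and assigned through a variadic emplace member. The skeleton files themselves are not reproduced in this listing, so the following stand-alone sketch only illustrates the general technique; its names are hypothetical, and it uses a by-type (rather than by-Tag) get member, so it does not mirror the actual skeleton code:

    #include <iostream>
    #include <memory>
    #include <string>
    #include <utility>

    enum class Tag          // with %weak-tags a plain enum would be used
    {
        INT,
        TEXT
    };

    struct Base             // light-weight polymorphic base class
    {
        virtual ~Base() = default;
    };

    template <Tag tag, typename Data>
    struct Semantic: public Base        // one Semantic type per tag
    {
        Data d_data;

        template <typename ...Params>   // variadic forwarding constructor,
        Semantic(Params &&...params)    // cf. the 4.13.00 entry
        :
            d_data(std::forward<Params>(params)...)
        {}
    };

    using TextSemantic = Semantic<Tag::TEXT, std::string>;

    class SType                         // semantic value on the value stack
    {
        std::shared_ptr<Base> d_ptr;

        public:
            template <typename Sem, typename ...Params>
            void emplace(Params &&...params)    // assign a value to $$
            {
                d_ptr = std::make_shared<Sem>(std::forward<Params>(params)...);
            }

            template <typename Sem>
            auto &get()                         // access a $i value
            {
                return static_cast<Sem &>(*d_ptr).d_data;
            }
    };

    int main()
    {
        SType val;
        val.emplace<TextSemantic>(3, '*');  // any args accepted by std::string
        std::cout << val.get<TextSemantic>() << '\n';   // prints: ***
    }

The authoritative interface (a Tag-indexed get and the scanner-facing members) is described in the bisonc++ manual's sections on polymorphic semantic values.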
* Added the rpn example to the manual, and repaired typos in the manual. * Options/directives that can only accept file names (like --baseclass-header) no longer accept path names. -- Frank B. Brokken Sun, 31 Mar 2013 11:25:49 +0200 bisonc++ (4.02.01) * Parser-class header files (e.g., Parser.h) and parser-class internal header files (e.g., Parser.hh) generated with bisonc++ < 4.02.00 require two hand-modifications when used in combination with bisonc++ >= 4.02.00: In Parser.h, just below the declaration void print__(); add: void exceptionHandler__(std::exception const &exc); In Parser.hh, assuming the name of the generated class is `Parser', add the following member definition (if a namespace is used: within the namespace's scope): inline void Parser::exceptionHandler__(std::exception const &exc) { throw; // re-implement to handle exceptions thrown by actions } This function may be re-implemented, see the man-page for further details. -- Frank B. Brokken Mon, 11 Mar 2013 16:50:26 +0100 bisonc++ (4.02.00) * Added member Parser::exceptionHandler(std::exception const &exc), handling std::exceptions thrown from the parser's action blocks. * The --namespace option was removed, since it does not affect once generated parser.h files, resulting in inconsistent namespace definitions. * Include guards of parser.h and parserbase.h include the namespace identifier, if %namespace has been used. * Provided print()'s implementation in bisonc++.hh with a correct class-prifix (was a fixed Parser::) * Textual corrections of the man-page and manual. -- Frank B. Brokken Thu, 07 Mar 2013 09:57:07 +0100 bisonc++ (4.01.02) * Bisonc++ returns 0 for options --help and --version * Catching Errno exceptions is replaced by catching std::exception exceptions -- Frank B. Brokken Thu, 24 Jan 2013 08:14:59 +0100 bisonc++ (4.01.01) * The following #defines in INSTALL.im can be overruled by defining identically named environment variables: CXX defines the name of the compiler to use. By default `g++' CXXFLAGS the options passed to the compiler. By default `-Wall --std=c++0x -O2 -g' LDFLAGS the options passed to the linker. By default no options are passed to the linker. SKEL the directory where the skeletons are stored -- Frank B. Brokken Sun, 15 Jul 2012 14:44:46 +0200 bisonc++ (4.01.00) * Repaired a long-existing bug due to which some S/R conflicts are solved by a reduce, where a shift should have been used. See README.states-and-conflicts for details. * Removed line-numbers from final warning/error messages * This version requires Bobcat >= 3.00.00 -- Frank B. Brokken Thu, 03 May 2012 21:21:47 +0200 bisonc++ (4.00.00) * Implemented the %polymorphic directive. Bisonc++ itself uses %polymorphic to implement its own polymorphic semantic values; man-page and manual extended with sections about polymorphic semantic values. * Implemented the %weak-tags directive. By default %polymorphic declares an `enum class Tag__', resulting in strongly typed polymorphic tags. If the traditional tag declaration is preferred, the %weak-tags directive can be specified in addition to %polymorphic, resulting in the declaration `enum Tag__'. * The previously used class spSemBase and derivatives (e.g., SemBase, Semantic) are now obsolete and the directories sembase and spsembase implementing these classes were removed. * The Parser's inline functions are all private and were moved to the parser's .hh file. 
This doesn't affect current implementations, as parser.h and parser.hh are only generated once, but newly generated parsers no longer define the Parser's inline members (error, print__, and lex) in parser.h but in parser.hh * @@ can be used (instead of d_loc__) to refer to a rule's location stack value. * The generated parser now uses unordered_maps instead of maps. -- Frank B. Brokken Fri, 13 Apr 2012 14:10:12 +0200 bisonc++ (3.01.00) * The `%print-tokens' directive was accidentally omitted from 3.00.00. Repaired in this release. * Starting this release all release tags (using names that are identical to the version number, so for this release the tag is 3.01.00) are signed to allow authentication. -- Frank B. Brokken Mon, 27 Feb 2012 13:33:20 +0100 bisonc++ (3.00.00) * This release's scanner was built by flexc++ * Option handling was separated from parsing, following the method also used in flexc++: a class Options holds and maintains directives and options that are used at multiple points in bisonc++'s sources. The Parser passes directive specifications to set-functions defined by the class Options. * The parser's semantic value handling received a complete overhaul. Unions are no longer used; instead a light-weight polymorphic base class in combination with some template meta programming was used to handle the semantic values. See sembase/sembase.h for a description of the approach. * Options and directives were rationalized/standardized. See the man-page for details. Grammar specification files should no longer use %print, but should either use %print-tokens or %own-tokens (or the equivalent command-line options). * NOTE: Existing Parser class interfaces (i.e. parser.h) must be (hand-) modified by declaring a private member void print__(); See the man-page and/or manual for details about print__. * All regression tests (in documentation/regression) now expect that flexc++ (>= 0.93.00) is available. -- Frank B. Brokken Mon, 20 Feb 2012 16:32:01 +0100 bisonc++ (2.09.04) * Replaced many for_each calls and lambda functions by range-based for-loops * Used g++-4.7 -- Frank B. Brokken Wed, 04 Jan 2012 12:26:01 +0100 bisonc++ (2.09.03) * Replaced all FnWrap* calls by lambda function calls * `build' script now recognizes CXXFLAGS and LDFLAGS for, resp. g++ and ld flags. Default values are set in INSTALL.im, as before. -- Frank B. Brokken Thu, 23 Jun 2011 10:06:02 +0200 bisonc++ (2.09.02) * Repaired flaws that emerged with g++ 4.6 -- Frank B. Brokken Mon, 02 May 2011 16:30:43 +0200 bisonc++ (2.9.1) * Documentation requires >= Yodl 3.00.0 -- Frank B. Brokken Wed, 10 Nov 2010 10:30:51 +0100 bisonc++ (2.9.0) * Changed Errno::what() call to Errno::why() * Removed dependencies on Msg, using Mstreams and Errno::open instead. Consequently, bisonc++ depends on at least Bobcat 2.9.0 -- Frank B. Brokken Sat, 30 Oct 2010 22:05:30 +0200 bisonc++ (2.8.0) * Grammars having states consisting of items in which a reduction from a (series of) non-terminals is indicated automatically have a higher precedence than items in which a shift is required. Therefore, in these cases the shift/reduce conflict is solved by a reduce, rather than a shift. See README.states-and-conflicts, srconflict/visitreduction.cc and the Bisonc++ manual, section 'Rule precedence' for examples and further information. These grammars now show S/R conflicts, which remained undetected in earlier versions of Bisonc++. The example was brought to my attention by Ramanand Mandayam (thanks, Ramanand!). 
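The 2.09.03 and 2.09.04 entries above mention replacing FnWrap-based for_each calls by lambda expressions and, later, by range-based for-loops. The fragment below is not taken from bisonc++'s sources; it merely sketches, with made-up data, what such a rewrite amounts to:

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    struct Terminal         // made-up stand-in for one of bisonc++'s types
    {
        std::string d_name;
    };

    void showOld(std::vector<Terminal> const &terminals)
    {
        // pre-2.09.04 style: std::for_each with a callable (originally a
        // Bobcat FnWrap wrapper, later a lambda expression)
        std::for_each(terminals.begin(), terminals.end(),
            [](Terminal const &term)
            {
                std::cout << term.d_name << '\n';
            }
        );
    }

    void showNew(std::vector<Terminal> const &terminals)
    {
        // 2.09.04 style: the same loop as a range-based for-statement
        for (Terminal const &term: terminals)
            std::cout << term.d_name << '\n';
    }

    int main()
    {
        std::vector<Terminal> terminals{ {"IDENT"}, {"NR"} };
        showOld(terminals);
        showNew(terminals);
    }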
* To the set of regression tests another test was added containing a grammar having two S/R conflicts resulting from automatically selecting reductions rather than shifts. This test was named 'mandayam'. * Output generated by --verbose and --construction now shows in more detail how S/R conflicts are handled. The Bisonc++ manual also received an additional section explaining when reduces are used with certain S/R conflicts. * Previously the documentation stated that --construction writes the construction process to stdout, whereas it is written to the same file as used by --verbose. This is now repaired. * The meaning/use of the data members of all classes are now described at the data members in all the classes' header files. -- Frank B. Brokken Sun, 08 Aug 2010 15:15:46 +0200 bisonc++ (2.7.0) * $-characters appearing in strings or character constants in action blocks no longer cause warnings about incorrect or negative $-indices * Repaired incorrect interpretation of '{' and '}' character constants in action blocks. * Added option --print (directive %print) displaying tokens received by the scanner used by the generated parser. * Added option --scanner-token-function (directive %scanner-token-function) specifying the name of the function called from the generated parser's lex() function. * The build script received an additional option: `build parser' can be used to force the reconstruction of parser/parse.cc and parser/parserbase.h -- Frank B. Brokken Wed, 31 Mar 2010 15:54:52 +0200 bisonc++ (2.6.0) * Reorganized Bisonc++'s classes: public inheritance changed to private where possible, all virtual members now private. The parser->parserbase inheritance remains as-is (public) because parserbase essentially is a element of parser, defining types and the token-enum that must also be available to the users of the generated parser class. The alternative, defining types and tokens in the parser class would make it impossible to adapt the tokens without rewriting the parser class. Another alternative, defining the types and enum in a separate namespace imposes further restrictions on the users of the parser class, which is also considered undesirable. Public inheritance is now only used by NonTerminal, Terminal, and Symbol as these classes override virtual functions either in Symbol or in Element and the derived classes must all be usable where their base classes are expected (in accordance with the LSP). bisonc++ (2.5.1) * Token values written to parserbase.h are (again) suppressed when their values exceed the previous token's value by 1. All values were shown because writer/insertToken erroneously didn't receive a size_t &lastTokenValue anymore, but a size_t lastTokenValue. * Removed Terminal's operator<< from the namespace std * Now using initializer_lists to initialize static data (generator/data.cc main.cc) -- Frank B. Brokken Mon, 08 Mar 2010 20:51:22 +0100 bisonc++ (2.5.0) * Renamed parser/spec/aux to parser/spec/auxiliary to avoid file/device confusion on some systems * Removed inclusions of superfluous bobcat/fnwrap1c headers * Replaced all FnWrap1c calls by FnWrap::unary * Added check for d_currentRule == 0 in rules/addproduction. d_currentRule could be a 0-pointer, in which case addproduction shouldn't do anything. d_currentRule is a 0-pointer in, e.g. the erroneous grammar submitted by Justin Madru about which he rightfully remarked that even though erroneous bisonc++ shouldn't crash on it. 
This is his stripped-down grammar: %token X x_list %% x_list: x_list X | X ; bisonc++ (2.4.8) * Recompilation using option --std=c++0x, required because of Bobcat's use of C++0x syntax. -- Frank B. Brokken Sat, 05 Sep 2009 17:25:56 +0200 bisonc++ (2.4.7) * Streams processed by an `#include' directive were not properly closed, resulting in a memory leak. The Scanner's code was modified to plug that leak. -- Frank B. Brokken Wed, 06 May 2009 09:36:02 +0200 bisonc++ (2.4.6) * Changed the build script to allow finer control over construction and installation of parts of the package -- Frank B. Brokken Tue, 24 Mar 2009 19:16:10 +0100 bisonc++ (2.4.5) * DateTime (generator/filter.cc) adapted due to change of Bobcat's DateTime interface bisonc++ (2.4.4) * typed terminal tokens (as used in, e.g., %type) were not included in the parserbase's Tokens__ enum since their symbol type is left at UNDETERMINED. Tokens used in type lists can also be non-terminals, in which case their symbol type is changed accordingly. In 2.4.4 a symbol is selected for inclusion in the Tokens__ enum if it's a terminal token but also if it's a symbol that has been used in the grammar although its symbol type is left at UNDETERMINED (in generator/selectsymbolic.cc) bisonc++ (2.4.3) * repaired segfault generated when the name of a non-existing file was passed to bisonc++ bisonc++ (2.4.2) * scanner/yylex.cc removed from the distribution: flex will create a new Scanner::yylex() member at each new distribution to prevent incompatibilities between earlier yylex.cc and later FlexLexer.h files. bisonc++ (2.4.1) * Implemented minor changes related to requirements imposed upon the code by g++ 4.3. * Generator/filter now uses Datetime::rfc2822(), implemented since Bobcat 1.17.1 bisonc++ (2.4.0) * Fixed missing entry in multiple reduction state tables: State tables of multiple reduction states (e.g., REQ_RED states) were constructed incompletely. E.g., for the grammar: expr: name | ident '(' ')' | NR ; name: IDENT ; ident: IDENT ; the state following IDENT is either a reduce to 'name' or 'ident': the corresponding table was filled incompletely, using the number of the next token where the next token for the reduction should have been mentioned, and an empty field in the table itself. NOTE that in these situations d_val__ MUST be set by the scanner, as the reduction requires another token, and only once that token is available can the reduction to, e.g., 'ident' be performed, but at that time YYText() has already gone and is inaccessible in an action block like: ident: IDENT { $$ = d_scanner.YYText(); } ; * The error recovery procedure in the bisonc++.cc skeleton file was reimplemented. As a side effect the internally used function 'ParserBase::checkEOF()' could be removed. * #line directives in rule action blocks now correctly identify the grammar specification file in which the action block was defined. * Extra empty lines at the end of state transition tables were removed * Files generated by Bisonc++ report Bisonc++'s version and the file construction time (conform RFC 2822) as a C++ comment on their first line. bisonc++ (2.3.1) * Fixed character returned in escaped constants. E.g., at '\'' the \ was returned instead of the ' character. * Implemented the default assignment of $1 to $$ at the beginning of action rules. This required another Block member: saveDollar1(), called for nested blocks. The function saveDollar1() prepends the code to save $$ from $1 of the rule in which the nested block was defined.
In share/bisonc++ the function executeAction() no longer saves the semantic value's TOS value as d_val__ but the rule's $1 value. * To allow extraction of the archive under Cygwin (Windows) the directory parser/spec/aux was renamed to parser/spec/auxiliary (as Windows can't handle files or directories named 'aux'). bisonc++ (2.3.0) * Dallas A. Clement uncovered a bug in handling semantic values, due to which semantic values of tokens returned by some grammars got lost. He intended to use a polymorphic semantic value class to pass different kinds of semantic values over from the scanner to the parser. This approach was the foundation of another regression test example, now added to the set of regression tests and described in Bisonc++'s manual. It will also appear as an annotated example in the C++ Annotations. Thanks, Dallas, for uncovering and reporting that bug. * Dallas also noted that mid-rule actions could not refer to semantic values of rule components that had already been seen by Bisonc++. This has been fixed in this release. Dallas, thanks again! * Earlier versions of Bisonc++ used the class OM (Output Mode) to define the way objects like (Non)Terminal tokens and (Non)Kernel Items were inserted into ostreams. Using OM did not result in the clarity of design I originally had in mind. OM is now removed, and instead relevant classes support a function `inserter()' allowing sources to specify (passing `inserter()' a pointer to a member function) what kind of insertion they need. For the Terminal class there is also a manipulator allowing sources to insert a insertion-member directly into the ostream. * New option: --insert-stype The generated parser will now also display semantic values when %debug is specified if the new command-line option --insert-stype is provided. Of course, in this case users should make sure that the semantic value is actually insertable (e.g., by providing an overloaded operator std::ostream &std::operator<<(std::ostream &out, STYPE__ const &semVal). bisonc++ (2.2.0) * Repaired a bug in parsing action blocks of rules appearing only in versions 2.1.0 and 2.0.0. In these versions compound statements defined within the action blocks result in bisonc++ erroneously reporting an error caused by bisonc++'s scanner (scanner/lexer) interpreting the closing curly brace as the end of the action block. * Repaired a flaw in terminal/terminal1.cc causing a segfault when using bisonc++ compiled with g++ 4.2.1 bisonc++ (2.1.0) * In the skeleton bisonc++.cc $insert 4 staticdata is followed by $insert namespace-open. Since `staticdata' defined s_out if %debug is requested, it could be defined outside of the user's namespace (defined by %namespace). Repaired by defining s_out after (if applicable) opening the namespace (in Generator::namespaceOpen(), called from $insert namespace-open). bisonc++ (2.0.0) * Rebuilt Bisonc++'s parser and scanner, creating Bisonc++'s parser from the file parser/grammar. Initially Bisonc++ 1.6.1 was used to create the Parser class header and parsing function. Once Bisonc++ 2.0.0 was available, the grammar file was split into various subfiles (see below) and Bisonc++ 2.0.0 was used to implement its own parsing function. As a direct consequence of using a grammar rather than a hand-implemented parsing function quite a few members of the Parser and Scanner class were reimplemented, new members were added and some could be removed. Parts of other classes (Rules, Block) were significantly modified as well. 
* Minor hand-modifications may be necessary with previously designed code using identifiers that are defined by the parser class generated by Bisonc++. The following names have changed:

    -------------------------------------------------------------------------
    old name                        change into new name            Protected
    -------------------------------------------------------------------------
    Parser::LTYPE                   Parser::LTYPE__
    Parser::STYPE                   Parser::STYPE__
    Parser::Tokens                  Parser::Tokens__
    Parser::DEFAULT_RECOVERY_MODE   Parser::DEFAULT_RECOVERY_MODE__ Yes
    Parser::ErrorRecovery           Parser::ErrorRecovery__         Yes
    Parser::Return                  Parser::Return__                Yes
    Parser::UNEXPECTED_TOKEN        Parser::UNEXPECTED_TOKEN__      Yes
    Parser::d_debug                 Parser::d_debug__               Yes
    Parser::d_loc                   Parser::d_loc__                 Yes
    Parser::d_lsp                   Parser::d_lsp__                 Yes
    Parser::d_nErrors               Parser::d_nErrors__             Yes
    Parser::d_nextToken             Parser::d_nextToken__           Yes
    Parser::d_state                 Parser::d_state__               Yes
    Parser::d_token                 Parser::d_token__               Yes
    Parser::d_val                   Parser::d_val__                 Yes
    Parser::d_vsp                   Parser::d_vsp__                 Yes
    -------------------------------------------------------------------------

The symbols marked `Protected' can only occur in classes that were derived from the parser class generated by Bisonc++. Unless you derived a class from the parser class generated by Bisonc++ these changes should not affect your code. The first three symbols may have been used in other classes as well (for an example now using LTYPE__ and STYPE__ see the file documentation/regression/location/scanner/scanner.h). Note that the only required modification in all these cases is to append two underscores to the previously defined identifiers. * The grammar file may now be split into several grammar specification files. The directive %include may be specified to include grammar files into other grammar files (much like the way C/C++'s #include preprocessor directive operates). Starting point of the grammar recognized by Bisonc++ 2.0.0 is the file parser/grammar, using subfiles in the parser/spec directory. The file README.parser documents the grammar specification files in some detail. * Previous releases implicitly enforced several restrictions on the identifiers used for the grammar's tokens. These restrictions resulted from name collisions with names defined in the parser's base class. While the restrictions cannot be completely resolved without giving up backward compatibility, they can be relieved greatly. Tokens cannot be ABORT, ACCEPT, ERROR, clearin, debug, error and setDebug. Furthermore, tokens should not end in two underscores (__). * Implemented various new options and directives: - the option --analyze-only, merely analyzing the provided grammar, not writing any source or header files. - the option --error-verbose as well as the directive %error-verbose dumping the state-stack when a syntactic error is reported. - the option --include-only, catenating all grammar files in their order of processing to the standard output stream (and terminating). - the option --max-inclusion-depth, defining the maximum number of nested grammar files (default: 10). - the option --required-tokens (also available as the directive %required-tokens). Error recovery is now configurable in the sense that a configurable number of tokens must have been successfully processed before new error messages can be generated (see documentation/manual/error/intro.yo) - the option --scanner-debug writing the contents and locations (in scanner/lexer) of matched regular expressions as well as the names/values of returned tokens to the standard error stream.
- the option --skeleton-directory. This option overrides the default location of the director containing the skeleton files. In turn it is overridden by the options specifying specific skeleton files (e.g., --baseclass-skeleton). - the option --thread-safe. If specified, Bisonc++ will generate code that is thread-safe. I.e., no static data will be modified by the parse() function. As a consequence, all static data in the file containing the parse() function are defined as const. Manpage and manual adapted accordingly. * As a convenience, filenames in the grammar files may optionally be surrounded by double quotes ("...") or pointed brackets <...>. Delimiting pointed brackets are only kept with the %scanner and %baseclass-preinclude directives, in all other cases they are replaced by double quotes and a warning is displayed. * Token Handling in the generated parsing member function was improved: the share/bisonc++.cc skeleton now defines pushToken__() and popToken__() as the standard interface to the tiny two-element token stack. The member nextToken() was redesigned. * Documentation was extended and updated. The Bisonc++ manual now contains an extensive description of the grammar-analysis process as implemented in Bisonc++ (see documentation/manual/algorith.yo). All new options and directives, as well as the token-name requirements are covered by the man-page and by the manual. * Various other repairs and cosmetic changes: - The --construction option as implemented in Bisonc++ 1.6.1 showed the FIRST set where the FOLLOW set was labeled. Repaired: now the FOLLOW set is actually displayed. - The --show-filenames option now shows `(not requested)' as default for d_verboseName instead of `-' if no --verbose was requested. - The --construction option no longer displays the `LA expanded' value from the state's descriptions since it's always 0 - The --class-name option was not actually mentioned in the set of recognized long options: repaired. - The %type directive now allows you to specify semantic type associations of terminal tokens as well. - The %token, %left, %right and %nonassoc directives now all use the same syntax (as they always should have). These directives all define terminal tokens - Added `rebuild' command to the `build' script to recreate the parser bisonc++ (1.6.1) * Changed the error recovery procedure preventing stack underflows with unrecoverable input errors. * Added protected parser base class member checkEOF(), terminating the parse() member's activities (man-page adapted accordingly). * Changed small support member functions in the share/bisonc++.cc skeleton file into inline members, some were moved to share/bisonc++base.h * The skeleton files now use `\@' as baseclass-flag rather than `@'. The baseclass-flag is now defined as a static data member in Generator. This change prevents the `@' in e-mail addresses from being changed into the parser's class name. * Removed the class Support since it is now covered by Bobcat's (1.15.0) default implementation of FBB::TableSupport bisonc++ (1.6.0) * NOTE: THE PROTOTYPE OF THE PARSER'S lookup() FUNCTION CHANGED. IT IS NOW: int lookup(bool recovery); OLD parser.h HEADER FILES CAN BE REUSED AFTER ADDING THE PARAMETER bool recovery * Segfaults were caused by some grammars due to an erroneous index in (formerly state/setitems.cc) state/adddependents.cc, where idx, representing an index in d_itemVector was used to index an element in d_nextVector (for which the index variable `nextIdx' should have been used. Repaired. 
* For some unknown reason, priority and association rules were not implemented in earlier versions of bisonc++. Consequently priority and association values of rules were left at their default values. This was repaired by defining the function Rules::updatePrecedences(), which defines priorities of productions as either their values assigned by a %prec specification or as the priority of their first terminal token. * The accepting State no longer has default reductions. It doesn't need them since _EOF_ in those states terminates the parser. Accepting States now have search sentinels, allowing the parser to do proper error recovery. * The implementation of the shift/reduce algorithm and error handling in share/bisonc++.cc was modified, now using bitflags indicating state-requirements (e.g., requiring a token, having a default reduction, having an `error' continuation, etc.). Also, the functions nextToken() and lookup() were reimplemented. * The share/bisonc++.cc parser function skeleton's initial comment was improved. * The file state/state1.cc contained the superfluous initialization d_itemVector(0). The line has been removed. * The class `Rules' lacked facilities to detect properly whether a grammar file without rules was specified. Solved by defining a Rules constructor and an additional member indicating whether there were any rules at all. * In grammar files, rules must now be terminated by a semicolon. Previous versions did not explicitly check this. Also, improperly formed character-tokens are now recognized as errors. * In correspondence with bison, the default verbose grammar output file is now called .output rather than .output * The description of the State information shown when --construction is specified was clarified. * The debug output generated by parse.cc was improved. * The setDebug() member is now a public member. Manual page and documentation changed accordingly. * The description of the setItems() algorithm in state/setItems was improved. * The `build' script was given an additional command (installprog) to install just the program and the skeletons. * Added several missing headers fixing gcc/g++ 4.3 problems -- Frank B. Brokken Mon, 09 Apr 2007 14:54:46 +0200 bisonc++ (1.5.3) * Using Bobcat's FnWrap* classes instead of Bobcat's Wrap* classes * The INSTALL.im file has received a (by default commented out) #define PROFILE. By activating this define, bisonc++ is compiled with support for the gprof profiler. This define should not be activated for production versions of bisonc++ * Not released. -- Frank B. Brokken Sat, 17 Feb 2007 20:44:19 +0100 bisonc++ (1.5.2) * It turns out that the modification in 1.5.1 is not necessary. The compilation problems mentioned there were the result of a presumed small g++ compiler bug. A workaround is implemented in Bobcat 1.12.1, preventing the bug from occurring. In fact, this release has the same contents as release 1.5.0. Release 1.5.1 can be considered obsolete. It is available from the svn repository only. -- Frank B. Brokken Thu, 30 Nov 2006 17:05:09 +0100 bisonc++ (1.5.1) * Building the usage support program failed because of implied dependencies on the bobcat library, resulting from superfluously including bobcat.h in the documentation/usage/usage.cc program source. This is solved by setting bisonc++.h's include guard identifier just before inserting ../../usage.cc in the documentation/usage/usage.cc program source.
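The 1.5.1 workaround just described relies on a standard preprocessor property: once a header's include guard has been defined, the guarded body of that header is skipped, so none of its declarations or nested #includes are seen. The stand-alone snippet below demonstrates only that mechanism; the guard identifier GUARD_DEMO_H_ is made up for the demonstration, whereas the identifier actually set for the workaround is the include guard used by bisonc++.h:

    #include <iostream>

    #define GUARD_DEMO_H_               // pre-set the (made-up) guard

    // What follows mimics the body of a guarded header. Because the guard
    // was already defined above, the preprocessor skips the body, and none
    // of its declarations would be processed.
    #ifndef GUARD_DEMO_H_
    #define GUARD_DEMO_H_
    #define HEADER_BODY_SEEN
    #endif

    int main()
    {
    #ifdef HEADER_BODY_SEEN
        std::cout << "the guarded section was processed\n";
    #else
        std::cout << "the guarded section was skipped\n";   // this is shown
    #endif
    }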
bisonc++ (1.5.0) * The algorithms for lookahead propagation and detection of grammars not deriving sentences have been redesigned and rewritten. The previously used algorithm propagating lookaheads suffered from spurious reduce/reduce conflicts for some grammars (e.g., see the one in documentation/regression/icmake1). Also, 1.4.0 choked on a (fairly) complex grammar like the one used by icmake V 7.00. These problems are now solved, and comparable problems should no longer occur. The layout and organization of the output has been changed as well. Now there are basically three forms of verbose output: No verbose output, in which the file *.output is not written, standard verbose output, in which an overview of the essential parts of the grammar is written to the file *.output, and --construction, in which case all lookaheadsets, as well as the first and follow sets are written to *.output. Multiple new classes were added, and some previously existing classes were removed. See the file README.class-setup and the file CLASSES for details. The man-page and manual were adapted on minor points. bisonc++ (1.4.0) * It turned out that in the previous versions, S/R conflicts were also produced for empty default reductions. Now detectSR() checks whether there is one empty reduction. If so, no S/R conflicts are possible in that state. Instead a SHIFT (which is already the default solution of a S/R conflict) is performed in these situations. So, now for all tokens for which a known continuation state exist the known continuation state is selected; for all other tokens the default reduction (reducing to its associated state) is selected. See state/detectsr.cc for details. Since the above change also represents a change of algorithm, the subversion was incremented. I added a sub-subversion to have a separate level of version-numbers for minor modifications. The documentation/regression/run script did not properly return to its initial working directory, and it called a test that no longer existed. Both errors have been repaired. Some leftover references to the Academic Free License were replaced by references to the GPL. The previously used scripts below make/ are obsolete and were removed from this and future distributions. Icmake should be used instead, for which a top-level script (build) and support scripts in the ./icmake/ directory are available. Icmake is available on a great many architectures. See the file INSTALL (and INSTALL.im, replacing the previously used INSTALL.cf) for further details. All plain `unsigned' variables were changed to `size_t' bisonc++ (1.03-1) unstable; urgency=low * License changed to the GNU GENERAL PUBLIC LICENSE. See the file `copyright'. According to the manual page, the debug-output generated by parsers created using the --debug option should be user-controllable through the `setDebug()' member. These feature is now actually implemented. The usage info now correctly shows the -V flag as a synonym for the --verbose option. From now on this file will contain the `upstream' changes. The Debian related changes are in changelog.Debian.gz -- Frank B. Brokken Wed, 19 Jul 2006 13:12:39 +0200 bisonc++ (1.02) unstable; urgency=low * Following suggestions made by George Danchev, this version was compiled by the unstable's g++ compiler (version >= 4.1), which unveiled several flaws in the library's class header files. These flaws were removed (i.e., repaired). In order to facilitate compiler selection, the compiler to use is defined in the INSTALL.cf file. 
The debian control-files (i.e., all files under the debian subdirectory) were removed from the source distribution, which is now also named in accordance with the Debian policy. A diff.gz file was added. -- Frank B. Brokken Thu, 6 Jul 2006 12:41:43 +0200 bisonc++ (1.01) unstable; urgency=low * Synchronized the version back to numbers-only, adapted the debian standards and the required bobcat library in the debian/control file. No implementation changes as compared to the previous version, but I felt the need to join various sub-sub-versions back to just one standard version. -- Frank B. Brokken Mon, 26 Jun 2006 12:11:15 +0200 bisonc++ (1.00a) unstable; urgency=low * Debian's Linda and lintian errors, warnings and notes processed. No messages are generated by linda and lintian in this version. -- Frank B. Brokken Sun, 28 May 2006 14:26:03 +0200 bisonc++ (1.00) unstable; urgency=low * Bisonc++ Version 1.00 has changed markedly as compared to its predecessor, bisonc++ 0.98.510. The main reason for upgrading to 1.00 following a year of testing the 0.98 series is that the grammar analysis and lookahead propagation algorithms as used in bisonc++ 0.98.510 were too cumbersome and contained some unfortunate errors. The errors were discovered during my 2005-2006 C++ class, where some students produced grammars which were simple, but were incorrectly analyzed by bisonc++ 0.98. It turned out that the lookahead (LA) propagation contained several flaws. Furthermore, a plain and simple bug assigned the last-used priority to terminal tokens appearing literally in the grammar (i.e., without explicitly defining them in a %token or comparable directive). A simple, but potentially very confusing bug. At the cosmetic level, the information produced with the --construction option was modified, aiming at better legibility of the construction process. The `examples' directory was reduced in size, moving most examples to a new directory `regression', which now contains a script `run' that can be used to try each of the examples below the `regression' directory. Some of the examples call `bison', so in order to run those examples `bison' must be installed as well. It usually is. A minor backward IN-compatibility results from a change in prototype of some private parser member functions. This should only affect existing Parser.h header files. Simply replacing the `support functions for parse()' section shown at the end of the header file by the following lines should make your header file up-to-date again. Note that bisonc++ does not by itself rewrite Parser.h to prevent undoing any modifications you may have implemented in the parser-class header file: // support functions for parse(): void executeAction(int ruleNr); void errorRecovery(); int lookup(); void nextToken(); Please note that this version depends on bobcat 1.7.1 or beyond. If you compile bobcat yourself, then you may want to know that bobcat's Milter and Xpointer classes are not used by bisonc++, so they could optionally be left out of bobcat's compilation. -- Frank B. Brokken Sun, 7 May 2006 15:10:05 +0200 bisonc++ (0.98.510) unstable; urgency=low * When no %union has been declared, no $$ warnings are issued anymore about non-existing types; When no %union has been declared a $i or $$ warning is issued about non-existing types. The State table (in the generated parse.cc file) containing `PARSE_ACCEPT' was created with a `REDUCE' indication for grammars whose start symbol's production rules were non-repetitive.
This was repaired in state/writestatearray.cc by setting the (positive) non-reduce indication for states using shifts and/or the accept state. The logic in writeStateArray() was modifed: a separate ShiftReduce::Status variable is now used to store the possible actions: SHIFT, REDUCE or ACCEPT. The tables show `SHIFTS' if a state uses shifts; `ACCEPTS' if a state contains PARSE_ACCEPT; and `REDUCE' otherwise. -- Frank B. Brokken Tue, 21 Mar 2006 20:47:49 +0100 bisonc++ (0.98.500) unstable; urgency=low * Handling of $i and $$ repaired, added the %negative-dollar-indices directive. $ specifications were not properly parsed. Instead of $i or $$ constructions like $i and $$ were parsed, which is contrary to the manual's specification. The function parsing the $-values is defined in parser/handledollar.cc. The handling of negative $-indices is improved. Negative $-indices are used when synthesizing attributes. In that context, $0 is useful, since it refers to the nonterminal matched before the current rule is starting to be used, allowing rules like `vardef: typename varlist ' where `varlist' inherits the type specification defined at `typename'. In most situations indices are positive. Therefore bisonc++ will warn when zero or non-positive $-indices are seen. The %negative-dollar-indices directive may be used to suppress these warnings. $-indices exceeding the number of elements continue to cause an error. -- Frank B. Brokken Sun, 5 Mar 2006 13:59:08 +0100 bisonc++ (0.98.402) unstable; urgency=low * links against bobcat 1.6.0, using bobcat's new Arg:: interface -- Frank B. Brokken Mon, 26 Dec 2005 19:25:42 +0100 bisonc++ (0.98.400) unstable; urgency=low * state/writestatearray.cc adds {} around individual union values to allow warningless compilation of the generated parse.cc file by g++-4.0. bisonc++ is now itself too compiled by g++-4.0. -- Frank B. Brokken Fri, 18 Nov 2005 22:46:06 +0100 bisonc++ (0.98.007) unstable; urgency=low * Added a README.flex file giving some background information about the provided implementation of the lexical scanner (bisonc++/scanner/yylex.cc) Modified the compilation scripts: bisconc++/flex/FlexLexer.h is now included by default. This FlexLexer.h file is expected by bisonc++/scanner/yylex.cc and by the Scanner class. Simplified some compilation scripts. -- Frank B. Brokken Fri, 9 Sep 2005 11:42:24 +0200 bisonc++ (0.98.006) unstable; urgency=low * Removed the dependency on `icmake'. No change of functionality See the instructions in the `INSTALL' file when you want to compile and install `bisonc++' yourself, rather than using the binary (.deb) distribution. -- Frank B. Brokken Sat, 3 Sep 2005 17:42:29 +0200 bisonc++ (0.98.005) unstable; urgency=low * Removed the classes Arg, Errno, Msg and Wrap1, using the Bobcat library's versions of these classes from now on. No feature-changes. Added minor modifications to the `build' script. Annoying Error: The function `ItemSets::deriveAction()' did not recognize the `ACCEPT' action, so some (most ?) grammars could not be properly recognized. I applied a quick hack: if an action isn't `shift' or `reduce', it can be `accept', resulting in acceptance of the grammar. This solves the actual problem, but I'll have to insepct this in a bit more detail. For now, it should work ok. -- Frank B. 
Brokken Mon, 22 Aug 2005 13:05:28 +0200 bisonc++ (0.98.004) unstable; urgency=low * When new lookahead set elements are added to existing states, d_recheckState in itemsets/lookaheads.cc (ItemSets::checkLookaheads()) was reassigned to the state index whose lookaheadset was enlarged. However, if that happened for existing state `i' and then, during the same state-inspection, for state `j' where j > i, then the recheck would start at `j' rather than `i'. This problem was solved by giving d_recheckState only a lower value than its current value. With R/R conflicts involving `ACCEPT' reductions (with, e.g., `S_$: S .'), ACCEPT is selected as the chosen alternative. See State::setReduce() (state/setreduce.cc). Since this matches with the `first reduction rule' principle, it should be ok. %stype specifications may consist of multiple elements: the remainder of the line beyond %stype is interpreted as the type definition. The specification should (therefore) not contain comment or other characters that are not part of the actual type definition. The man-page is adapted accordingly. Same holds true for the %ltype directive Added a check whether the grammar derives a sentence (itemsets/derivesentence.cc). If not, a fatal error is issued. This happens at the end of the program's actions, and at this point files etc. have already been generated. They are kept rather than removed for further reference. Grammars not deriving sentences should probably not be used. The original Bison documentation has been converted to a Bisonc++ user guide. Furthermore, a html-converted manual page is now available under /usr/share/doc/bisonc++/man The `calculator' example used in the man-page is now under /usr/share/doc/bisonc++/man/calculator Bisonc++ is distributed under the Academic Free License, see the file COPYING in /usr/share/doc/bisonc++ -- Frank B. Brokken Sun, 7 Aug 2005 13:49:07 +0200 bisonc++ (0.98.003) unstable; urgency=low * Incomplete default State constructor now explicitly defined, prevents the incidental erroneous rapporting of conflicts for some states. -- Frank B. Brokken Thu, 26 May 2005 07:21:20 +0200 bisonc++ (0.98.002) unstable; urgency=low * The Wrap1 configurable unary predicate template class replaces various other templates (WrapStatic, Wrap, Pred1Wrap). No further usage or implementation changes/modifications. -- Frank B. Brokken Sun, 22 May 2005 15:27:19 +0200 bisonc++ (0.98.001) unstable; urgency=low * This is a complete rewrite of the former bisonc++ (0.91) version. The program bisonc++ is now a C++ program, producing C++ sources, using the algorithm for creating LALR-1 grammars as outlined by Aho, Sethi and Ullman's (1986) `Dragon' book. The release number will remain 0.98 for a while, and 0.98.001 holds the initial package, new style. Also see the man-page, since some things have been changed (augmented) since the previous version. No dramatic changes in the grammar specification method: Bisonc++ still uses bison's way to specify grammars, but some features, already obsolete in bisonc++ 0.91 were removed. Also note my e-mail address: the U. of Groningen's official policy now is to remove department specific information, so it's `@rug.nl' rather than `@rc.rug.nl', as used before. -- Frank B. Brokken Mon, 16 May 2005 13:39:38 +0200 bisonc++ (0.91) unstable; urgency=low * Added several missing short options (like -B etc) to the getopt() function call. I forgot to add them to the previous version(s). 
Internally, all old C style allocations were changed to C++ style allocations, using operators new and delete. Where it was immediately obvious that a vector could be used, I now use vectors. The internally used types `core' `shifts' and 'reductions' (types.h) now use a vector data member rather than an int [1] member, which is then allocated to its proper (I may hope) size when the structs are allocated. -- Frank B. Brokken Sat, 19 Feb 2005 10:21:58 +0100 bisonc++ (0.90) unstable; urgency=low * Command-line options now override matching declarations specified in the grammar specification file. All %define declarations have been removed. Instead their first arguments are now used to specify declarations. E.g., %parser-skeleton instead of %define parser-skeleton. All declarations use lower-case letters, and use only separating hyphens, no underscores. E.g., %lsp-needed rather than %define LSP_NEEDED The declaration %class-name replaces the former %name declaration All yy and YY name prefixes of symbols defined by bisonc++ have been removed. The parser-state `yydefault' has been renamed to `defaultstate'. -- Frank B. Brokken Sun, 6 Feb 2005 12:50:40 +0100 bisonc++ (0.82) unstable; urgency=low * Added d_nError as protected data member to the base class. Missed it during the initial conversion. d_nErrors counts the number of parsing errors. Replaces yynerrs from bison(++) -- Frank B. Brokken Sat, 29 Jan 2005 18:58:24 +0100 bisonc++ (0.81) unstable; urgency=low * Added the option --show-files to display the names of the files that are used or generated by bisonc++. -- Frank B. Brokken Fri, 28 Jan 2005 14:50:48 +0100 bisonc++ (0.80) unstable; urgency=low * Completed the initial debian release. No changes in the software. -- Frank B. Brokken Fri, 28 Jan 2005 14:30:05 +0100 bisonc++ (0.70-1) unstable; urgency=low * Initial Release. -- Frank B. Brokken Thu, 27 Jan 2005 22:34:50 +0100 bisonc++-4.13.01/documentation/0000755000175000017500000000000012633316117015123 5ustar frankfrankbisonc++-4.13.01/documentation/regression/0000755000175000017500000000000012633316117017303 5ustar frankfrankbisonc++-4.13.01/documentation/regression/nosentence/0000755000175000017500000000000012633316117021444 5ustar frankfrankbisonc++-4.13.01/documentation/regression/nosentence/parser/0000755000175000017500000000000012633316117022740 5ustar frankfrankbisonc++-4.13.01/documentation/regression/nosentence/parser/bgram0000644000175000017500000000004512633316117023752 0ustar frankfrank%token NR %% start: NR start ; bisonc++-4.13.01/documentation/regression/nosentence/parser/grammar0000644000175000017500000000004512633316117024310 0ustar frankfrank%token NR %% start: NR start ; bisonc++-4.13.01/documentation/regression/nosentence/doc0000644000175000017500000000030512633316117022132 0ustar frankfrankThis grammar has a single left-recursive rule, so no rule reduces to `start'. Consequently, no sentences can be derived from this grammar. 
Here is its sole production rule: start: NR start ; bisonc++-4.13.01/documentation/regression/mandayam/0000755000175000017500000000000012633316117021072 5ustar frankfrankbisonc++-4.13.01/documentation/regression/mandayam/parser/0000755000175000017500000000000012633316117022366 5ustar frankfrankbisonc++-4.13.01/documentation/regression/mandayam/parser/bgram0000644000175000017500000000402512633316117023402 0ustar frankfrank%debug %union { int ival; char cval; char *strval; float fval; } /* Keywords */ %token DEFINEtkn DEFINE_GROUPtkn %token LPARAtkn RPARAtkn LBRtkn RBRtkn COLONtkn SEMItkn COMMAtkn /* Arithmetic operators */ %token EQtkn %left PLUStkn MINUStkn %left MULTtkn DIVtkn %right UNARY %token IDtkn %token INT_CONSTtkn %token FLOAT_CONSTtkn %token STRING_CONSTtkn %token TRUEtkn FALSEtkn %% file: collection ; collection: prefix LBRtkn statement_list RBRtkn | prefix LBRtkn RBRtkn ; prefix: IDtkn LPARAtkn param_list RPARAtkn | IDtkn LPARAtkn RPARAtkn ; param_list: param_list COMMAtkn parameter | parameter ; parameter: numeric_constant | string_or_named_constant | string_or_named_constant COLONtkn string_or_named_constant | boolean_constant ; statement_list: statement_list statement | statement ; statement: primitive_attribute | complex_attribute | definition | definition_group | collection ; definition: DEFINEtkn LPARAtkn enumerator COMMAtkn enumerator COMMAtkn enumerator RPARAtkn SEMItkn ; definition_group: DEFINE_GROUPtkn LPARAtkn enumerator COMMAtkn enumerator RPARAtkn SEMItkn ; primitive_attribute: IDtkn COLONtkn primitive_attribute_value | IDtkn COLONtkn primitive_attribute_value SEMItkn | IDtkn EQtkn primitive_attribute_value ; primitive_attribute_value: expr ; expr: expr PLUStkn term | expr MINUStkn term | term ; term: term MULTtkn primary | term DIVtkn primary | primary ; primary: LPARAtkn expr RPARAtkn | MINUStkn expr %prec UNARY | PLUStkn expr %prec UNARY | constant ; constant: boolean_constant | numeric_constant | string_or_named_constant ; enumerator: string_or_named_constant ; string_or_named_constant: STRING_CONSTtkn | IDtkn ; numeric_constant: INT_CONSTtkn | FLOAT_CONSTtkn ; boolean_constant: TRUEtkn | FALSEtkn ; complex_attribute: prefix SEMItkn | prefix ; bisonc++-4.13.01/documentation/regression/mandayam/parser/grammar0000644000175000017500000000402512633316117023740 0ustar frankfrank%debug %union { int ival; char cval; char *strval; float fval; } /* Keywords */ %token DEFINEtkn DEFINE_GROUPtkn %token LPARAtkn RPARAtkn LBRtkn RBRtkn COLONtkn SEMItkn COMMAtkn /* Arithmetic operators */ %token EQtkn %left PLUStkn MINUStkn %left MULTtkn DIVtkn %right UNARY %token IDtkn %token INT_CONSTtkn %token FLOAT_CONSTtkn %token STRING_CONSTtkn %token TRUEtkn FALSEtkn %% file: collection ; collection: prefix LBRtkn statement_list RBRtkn | prefix LBRtkn RBRtkn ; prefix: IDtkn LPARAtkn param_list RPARAtkn | IDtkn LPARAtkn RPARAtkn ; param_list: param_list COMMAtkn parameter | parameter ; parameter: numeric_constant | string_or_named_constant | string_or_named_constant COLONtkn string_or_named_constant | boolean_constant ; statement_list: statement_list statement | statement ; statement: primitive_attribute | complex_attribute | definition | definition_group | collection ; definition: DEFINEtkn LPARAtkn enumerator COMMAtkn enumerator COMMAtkn enumerator RPARAtkn SEMItkn ; definition_group: DEFINE_GROUPtkn LPARAtkn enumerator COMMAtkn enumerator RPARAtkn SEMItkn ; primitive_attribute: IDtkn COLONtkn primitive_attribute_value | IDtkn COLONtkn primitive_attribute_value SEMItkn | IDtkn EQtkn 
primitive_attribute_value ; primitive_attribute_value: expr ; expr: expr PLUStkn term | expr MINUStkn term | term ; term: term MULTtkn primary | term DIVtkn primary | primary ; primary: LPARAtkn expr RPARAtkn | MINUStkn expr %prec UNARY | PLUStkn expr %prec UNARY | constant ; constant: boolean_constant | numeric_constant | string_or_named_constant ; enumerator: string_or_named_constant ; string_or_named_constant: STRING_CONSTtkn | IDtkn ; numeric_constant: INT_CONSTtkn | FLOAT_CONSTtkn ; boolean_constant: TRUEtkn | FALSEtkn ; complex_attribute: prefix SEMItkn | prefix ; bisonc++-4.13.01/documentation/regression/mandayam/doc0000644000175000017500000000071712633316117021567 0ustar frankfrankThis grammar was submitted by Ramanand Mandayam on July 5, 2010. The grammar generates 2 shift/reduce conflicts resulting from forced removal of shiftable items (in State 2), favoring reductions. Also it uses (production rule) priorities (in state 11) to remove shiftable items in favor of reductions. The grammar is interesting in that it shows that a grammar may (by default) reduce rather than shift. See also section 'Rule Precedence' in Bisonc++'s manual. bisonc++-4.13.01/documentation/regression/icmake1/0000755000175000017500000000000012633316117020615 5ustar frankfrankbisonc++-4.13.01/documentation/regression/icmake1/parser/0000755000175000017500000000000012633316117022111 5ustar frankfrankbisonc++-4.13.01/documentation/regression/icmake1/parser/bgram0000644000175000017500000000010212633316117023115 0ustar frankfrank%token X %% run: one | two X ; one: ; two: one ; bisonc++-4.13.01/documentation/regression/icmake1/parser/grammar0000644000175000017500000000010212633316117023453 0ustar frankfrank%token X %% run: one | two X ; one: ; two: one ; bisonc++-4.13.01/documentation/regression/icmake1/doc0000644000175000017500000000020112633316117021276 0ustar frankfrankA highly reduced grammar derived from icmake V 7.00's grammar producing spurious RR conflicts with bisonc++ before version 1.5.0 bisonc++-4.13.01/documentation/regression/calculator/0000755000175000017500000000000012633316117021434 5ustar frankfrankbisonc++-4.13.01/documentation/regression/calculator/scanner/0000755000175000017500000000000012633316117023065 5ustar frankfrankbisonc++-4.13.01/documentation/regression/calculator/scanner/lexer0000644000175000017500000000040412633316117024125 0ustar frankfrank%interactive %filenames scanner %% [ \t]+ // skip white space \n return Parser::EOLN; [0-9]+ return Parser::NUMBER; . 
return matched()[0]; %% bisonc++-4.13.01/documentation/regression/calculator/scanner/scanner.ih0000644000175000017500000000007312633316117025040 0ustar frankfrank#include "scanner.h" #include "../parser/parserbase.h" bisonc++-4.13.01/documentation/regression/calculator/scanner/scanner.h0000644000175000017500000000203612633316117024670 0ustar frankfrank// Generated by Flexc++ V0.93.00 on Mon, 20 Feb 2012 12:42:32 +0100 #ifndef Scanner_H_INCLUDED_ #define Scanner_H_INCLUDED_ // $insert baseclass_h #include "scannerbase.h" // $insert classHead class Scanner: public ScannerBase { public: explicit Scanner(std::istream &in = std::cin, std::ostream &out = std::cout); // $insert lexFunctionDecl int lex(); private: int lex__(); int executeAction__(size_t ruleNr); void print(); void preCode(); // re-implement this function for code that must // be exec'ed before the patternmatching starts void postCode(PostEnum__); }; inline void Scanner::postCode(PostEnum__) {} // $insert scannerConstructors inline Scanner::Scanner(std::istream &in, std::ostream &out) : ScannerBase(in, out) {} inline void Scanner::preCode() { // optionally replace by your own code } inline void Scanner::print() { print__(); } #endif // Scanner_H_INCLUDED_ bisonc++-4.13.01/documentation/regression/calculator/parser/0000755000175000017500000000000012633316117022730 5ustar frankfrankbisonc++-4.13.01/documentation/regression/calculator/parser/bgram0000644000175000017500000000226112633316117023744 0ustar frankfrank // lowest precedence %token NUMBER // integral numbers EOLN // newline %left '+' '-' %left '*' '/' %right UNARY // highest precedence %% expressions: expressions evaluate | prompt ; evaluate: alternative prompt ; prompt: { prompt(); } ; alternative: expression EOLN { cout << $1 << endl; } | 'q' done | EOLN | error EOLN ; done: { cout << "Done.\n"; ACCEPT(); } ; expression: expression '+' expression { $$ = $1 + $3; } | expression '-' expression { $$ = $1 - $3; } | expression '*' expression { $$ = $1 * $3; } | expression '/' expression { $$ = $1 / $3; } | '-' expression %prec UNARY { $$ = -$2; } | '+' expression %prec UNARY { $$ = $2; } | '(' expression ')' { $$ = $2; } | NUMBER { $$ = atoi(d_scanner.YYText()); } ; bisonc++-4.13.01/documentation/regression/calculator/parser/parser.h0000644000175000017500000000174112633316117024400 0ustar frankfrank// Generated by Bisonc++ V4.10.00 on Mon, 27 Apr 2015 13:10:17 +0200 #ifndef Parser_h_included #define Parser_h_included // $insert baseclass #include "parserbase.h" // $insert scanner.h #include "../scanner/scanner.h" #undef Parser class Parser: public ParserBase { // $insert scannerobject Scanner d_scanner; public: int parse(); void prompt() { std::cout << "? "; } private: void error(char const *msg); // called on (syntax) errors int lex(); // returns the next token from the // lexical scanner. 
void print(); // use, e.g., d_token, d_loc // support functions for parse(): void executeAction(int ruleNr); void errorRecovery(); int lookup(bool recovery); void nextToken(); void print__(); void exceptionHandler__(std::exception const &exc); }; #endif bisonc++-4.13.01/documentation/regression/calculator/parser/grammar0000644000175000017500000000235212633316117024303 0ustar frankfrank%filenames parser %scanner ../scanner/scanner.h // lowest precedence %token NUMBER // integral numbers EOLN // newline %left '+' '-' %left '*' '/' %right UNARY // highest precedence %% expressions: expressions evaluate | prompt ; evaluate: alternative prompt ; prompt: { prompt(); } ; alternative: expression EOLN { cout << $1 << '\n'; } | 'q' done | EOLN | error EOLN ; done: { cout << "Done.\n"; ACCEPT(); } ; expression: expression '+' expression { $$ = $1 + $3; } | expression '-' expression { $$ = $1 - $3; } | expression '*' expression { $$ = $1 * $3; } | expression '/' expression { $$ = $1 / $3; } | '-' expression %prec UNARY { $$ = -$2; } | '+' expression %prec UNARY { $$ = $2; } | '(' expression ')' { $$ = $2; } | NUMBER { $$ = atoi(d_scanner.matched().c_str()); } ; bisonc++-4.13.01/documentation/regression/calculator/parser/parser.ih0000644000175000017500000000157712633316117024560 0ustar frankfrank// Generated by Bisonc++ V4.10.00 on Mon, 27 Apr 2015 13:12:41 +0200 // Include this file in the sources of the class Parser. // $insert class.h #include "parser.h" inline void Parser::error(char const *msg) { std::cerr << msg << '\n'; } // $insert lex inline int Parser::lex() { return d_scanner.lex(); } inline void Parser::print() { print__(); // displays tokens if --print was specified } inline void Parser::exceptionHandler__(std::exception const &exc) { throw; // re-implement to handle exceptions thrown by actions } // Add here includes that are only required for the compilation // of Parser's sources. // UN-comment the next using-declaration if you want to use // int Parser's sources symbols from the namespace std without // specifying std:: #include using namespace std; bisonc++-4.13.01/documentation/regression/calculator/doc0000644000175000017500000000054412633316117022127 0ustar frankfrank This is the regression installation of the calculator also found in documentation/man. The program implements a simple calculator, accepting unary and binary + and -, binary * and /, and nested expressions, using integral operands. To end the program, enter: q Errors are handled by skipping all information until the next end of line. bisonc++-4.13.01/documentation/regression/calculator/demo.cc0000644000175000017500000000103612633316117022667 0ustar frankfrank#include #include "parser/parser.h" using namespace std; int main(int argc, char **argv) { cout << "Enter (integral) expressions on separate lines, using " "+ - * / ()\n" "Other line content should result in a `syntax error'\n" "Blanks and tabs are ignored. 
Use ^c or ^d to end the program\n" "Use any program argument to view parser debug output\n"; Parser calculator; calculator.setDebug(argc > 1); return calculator.parse(); } bisonc++-4.13.01/documentation/regression/aho4.46/0000755000175000017500000000000012633316117020366 5ustar frankfrankbisonc++-4.13.01/documentation/regression/aho4.46/parser/0000755000175000017500000000000012633316117021662 5ustar frankfrankbisonc++-4.13.01/documentation/regression/aho4.46/parser/grammar0000644000175000017500000000014512633316117023233 0ustar frankfrank/* AHO et al, example 4.46 */ %token i %% S: L '=' R ; S: R ; L: '*' R ; L: i ; R: L ; bisonc++-4.13.01/documentation/regression/aho4.46/doc0000644000175000017500000000016512633316117021060 0ustar frankfrankThis grammar, given by AHO et al as grammar 4.46 (p. 241), has 10 states, no shift-reduce or reduce-reduce conflict. bisonc++-4.13.01/documentation/regression/polymorphic/0000755000175000017500000000000012633316117021650 5ustar frankfrankbisonc++-4.13.01/documentation/regression/polymorphic/dallas20000644000175000017500000001247512633316117023126 0ustar frankfrankFrom dallas.a.clement@gmail.com Tue Oct 9 00:29:53 2007 Received: from smtp1.rug.nl (smtp1.rug.nl [129.125.50.11]) by suffix.rc.rug.nl (8.14.1/8.14.1/Debian-9) with SMTP id l98MTrxe030265 for ; Tue, 9 Oct 2007 00:29:53 +0200 Received: from smtp1.rug.nl ([129.125.50.11]) by smtp1.rug.nl (SMSSMTP 4.1.0.19) with SMTP id M2007100900294623523 for ; Tue, 09 Oct 2007 00:29:46 +0200 Received: from mail3.rug.nl (mail3.rug.nl [129.125.50.14]) by smtp1.rug.nl (8.12.11.20060308/8.12.11) with ESMTP id l98MTkQa005320 for ; Tue, 9 Oct 2007 00:29:46 +0200 (MEST) Resent-Message-Id: <200710082229.l98MTkQa005320@smtp1.rug.nl> Received: from by mail3.rug.nl (CommuniGate Pro RULES 5.1.12) with RULES id 55899656; Tue, 09 Oct 2007 00:29:46 +0200 X-Autogenerated: Mirror Resent-From: Resent-Date: Tue, 09 Oct 2007 00:29:46 +0200 Received: from smtp2.rug.nl ([129.125.50.12] verified) by mail3.rug.nl (CommuniGate Pro SMTP 5.1.12) with SMTP id 55899655 for f.b.brokken@rug.nl; Tue, 09 Oct 2007 00:29:46 +0200 Received: from smtp2.rug.nl ([129.125.50.12]) by smtp2.rug.nl (SMSSMTP 4.1.11.41) with SMTP id M2007100900294526988 for ; Tue, 09 Oct 2007 00:29:45 +0200 Received: from py-out-1112.google.com (py-out-1112.google.com [64.233.166.182]) by smtp2.rug.nl (8.12.11.20060308/8.12.11) with ESMTP id l98MTiRH013056 for ; Tue, 9 Oct 2007 00:29:44 +0200 (MEST) Received: by py-out-1112.google.com with SMTP id f47so2710922pye for ; Mon, 08 Oct 2007 15:29:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=beta; h=domainkey-signature:received:received:subject:from:reply-to:to:in-reply-to:references:content-type:organization:date:message-id:mime-version:x-mailer:content-transfer-encoding; bh=h3Gfp8lzdKEqJY0jAw7jqzoOPcybb3kAoQ3AYA6Vq3U=; b=GVlqjHLC5sLillOXZljVFfhCLivTZyduPh/m94ehJfz7ECbSCEVye8UQvi+WhbSBei+PFXiCC5AHsu0SKKZbzbHSCAyDmPWrB7SCmMZDaSJQkS0oi7h5IOQOUVjSNEGeYEqejjfI5i1M8nvHMJspCJvIt80RryoQgk5p8lGqkss= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:subject:from:reply-to:to:in-reply-to:references:content-type:organization:date:message-id:mime-version:x-mailer:content-transfer-encoding; b=ZN59LY0X3K/K0dycwmL5rRC1GNCTbVycF03b6F2HaiuGSUDzViwlAHVjL9Fgv5KcXGCDQc1zhu7FRk1kqqmapIMhrP0Xl46/Hth6+mi2AwshAsqe9CTcQIOzUX2zdXf3qcjw6clQcddRiqNdjjJFoFgxiREL4XJBMKMv+8y6TtE= Received: by 10.35.33.15 with SMTP id l15mr12560856pyj.1191882583426; Mon, 08 Oct 2007 15:29:43 -0700 (PDT) Received: 
from debian.local ( [70.250.157.38]) by mx.google.com with ESMTPS id y64sm7923418pyg.2007.10.08.15.29.41 (version=SSLv3 cipher=RC4-MD5); Mon, 08 Oct 2007 15:29:41 -0700 (PDT) Subject: Re: Small bisonc++ question From: Dallas Clement Reply-To: dallas.a.clement@gmail.com To: f.b.brokken@rug.nl In-Reply-To: <20071008221446.GD27245@rc.rug.nl> References: <20070919073747.GC17408@rc.rug.nl> <1191222419.3468.43.camel@localhost> <20071001190610.GA6195@rc.rug.nl> <1191278771.3605.15.camel@localhost> <20071003072550.GC29648@rc.rug.nl> <1191450509.3474.48.camel@localhost> <20071004064310.GB23564@rc.rug.nl> <1191505247.3505.15.camel@localhost> <20071005101005.GA25725@rc.rug.nl> <1191853606.3413.47.camel@localhost> <20071008221446.GD27245@rc.rug.nl> Content-Type: text/plain Organization: Clements Date: Mon, 08 Oct 2007 12:29:39 -0500 Message-Id: <1191864579.3413.58.camel@localhost> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 Content-Transfer-Encoding: 7bit X-Spam-Flag: NO X-Scanned-By: milter-spamc/1.4.366 (smtp1.rug.nl [129.125.50.11]); Tue, 09 Oct 2007 00:29:46 +0200 X-Scanned-By: milter-spamc/1.4.366 (smtp2.rug.nl [129.125.50.12]); Tue, 09 Oct 2007 00:29:45 +0200 X-Spam-Status: NO, hits=-7.00 required=4.00 X-Spam-Report: Spam detection software on "smtp1.rug.nl". Questions: postmaster@rug.nl Content analysis details: (-7.0 points, 4.0 required) USER_IN_WHITELIST=-7 ____ Status: RO Content-Length: 1238 Lines: 54 Frank, I've narrowed things down to one particular rule. It seems that the following rule with all of its alternatives is causing problems. scoped_identifier: TOK_IDENTIFIER { } | TOK_SCOPE_OPERATOR TOK_IDENTIFIER { } | scoped_identifier TOK_SCOPE_OPERATOR TOK_IDENTIFIER { } ; If I simplify this rule to the following, I do not have any problems with semantic values not being saved on the stack correctly. scoped_identifier: TOK_IDENTIFIER { } ; It seems that these extra alternatives are confusing the parser. Do you have any idea why this could be? Thanks, Dallas On Tue, 2007-10-09 at 00:14 +0200, Frank B. Brokken wrote: > Dear Dallas Clement, you wrote: > > > > Hello Frank, > > > > I just wanted to give you another update. I took the example that you > > provided and started tweaking it to resemble my grammar which manifested > > the problem I reported earlier. > > > > The good news is that I am unable to reproduce the problem, using the > > Base and SemVal classes you defined in the example. > > Apparently this e-mail and my answer to your previous one crossed each other > :-) > > Thanks for this e-mail, and good luck. Don't hesitate to call again! > > Cheers, > bisonc++-4.13.01/documentation/regression/polymorphic/scanner/0000755000175000017500000000000012633316117023301 5ustar frankfrankbisonc++-4.13.01/documentation/regression/polymorphic/scanner/lexer0000644000175000017500000000065112633316117024345 0ustar frankfrank// %interactive %filenames scanner // %debug %% [ \t]+ // Often used: skip white space \n // same const return Parser::CONST; [a-zA-Z_][a-zA-Z0-9_]* { *d_semval = new Ident(matched()); return Parser::IDENTIFIER; } . 
return matched()[0]; %% bisonc++-4.13.01/documentation/regression/polymorphic/scanner/scanner.ih0000644000175000017500000000007212633316117025253 0ustar frankfrank#include "../parser/parserbase.h" #include "scanner.h" bisonc++-4.13.01/documentation/regression/polymorphic/scanner/scanner.h0000644000175000017500000000212312633316117025101 0ustar frankfrank// Generated by Flexc++ V0.93.00 on Mon, 20 Feb 2012 13:48:09 +0100 #ifndef Scanner_H_INCLUDED_ #define Scanner_H_INCLUDED_ #include "../semval/semval.h" // $insert baseclass_h #include "scannerbase.h" // $insert classHead class Scanner: public ScannerBase { SemVal *d_semval; public: Scanner(SemVal *semval); // $insert lexFunctionDecl int lex(); private: int lex__(); int executeAction__(size_t ruleNr); void print(); void preCode(); // re-implement this function for code that must // be exec'ed before the patternmatching starts void postCode(PostEnum__); }; inline void Scanner::postCode(PostEnum__) {} // $insert scannerConstructors inline Scanner::Scanner(SemVal *semval) : ScannerBase(std::cin, std::cout), d_semval(semval) {} // $insert inlineLexFunction inline int Scanner::lex() { return lex__(); } inline void Scanner::preCode() { // optionally replace by your own code } inline void Scanner::print() { print__(); } #endif // Scanner_H_INCLUDED_ bisonc++-4.13.01/documentation/regression/polymorphic/ident/0000755000175000017500000000000012633316117022753 5ustar frankfrankbisonc++-4.13.01/documentation/regression/polymorphic/ident/ident.ih0000644000175000017500000000005212633316117024375 0ustar frankfrank#include "ident.h" using namespace std; bisonc++-4.13.01/documentation/regression/polymorphic/ident/ident.h0000644000175000017500000000126212633316117024230 0ustar frankfrank#ifndef _INCLUDED_IDENT_ #define _INCLUDED_IDENT_ #include #include "../base/base.h" class Ident: public Base { std::string d_ident; public: Ident(std::string const &id); virtual Base *clone() const; string const &id() const; // directly access the name. virtual ostream &insert(ostream &os) const; }; inline Ident::Ident(std::string const &id) : d_ident(id) {} inline Base *Ident::clone() const { return new Ident(*this); // default CopyCons is ok. 
} inline string const &Ident::id() const { return d_ident; } inline ostream &Ident::insert(ostream &out) const { return out << d_ident; } #endif bisonc++-4.13.01/documentation/regression/polymorphic/dallas0000644000175000017500000001463412633316117023043 0ustar frankfrankFrom dallas.a.clement@gmail.com Tue Oct 9 01:20:26 2007 Received: from smtp1.rug.nl (smtp1.rug.nl [129.125.50.11]) by suffix.rc.rug.nl (8.14.1/8.14.1/Debian-9) with SMTP id l98NKQcR032365 for ; Tue, 9 Oct 2007 01:20:26 +0200 Received: from smtp1.rug.nl ([129.125.50.11]) by smtp1.rug.nl (SMSSMTP 4.1.0.19) with SMTP id M2007100901201923923 for ; Tue, 09 Oct 2007 01:20:19 +0200 Received: from mail3.rug.nl (mail3.rug.nl [129.125.50.14]) by smtp1.rug.nl (8.12.11.20060308/8.12.11) with ESMTP id l98NKI5a008704 for ; Tue, 9 Oct 2007 01:20:18 +0200 (MEST) Resent-Message-Id: <200710082320.l98NKI5a008704@smtp1.rug.nl> Received: from by mail3.rug.nl (CommuniGate Pro RULES 5.1.12) with RULES id 55901028; Tue, 09 Oct 2007 01:20:18 +0200 X-Autogenerated: Mirror Resent-From: Resent-Date: Tue, 09 Oct 2007 01:20:18 +0200 Received: from smtp2.rug.nl ([129.125.50.12] verified) by mail3.rug.nl (CommuniGate Pro SMTP 5.1.12) with SMTP id 55901027 for f.b.brokken@rug.nl; Tue, 09 Oct 2007 01:20:18 +0200 Received: from smtp2.rug.nl ([129.125.50.12]) by smtp2.rug.nl (SMSSMTP 4.1.11.41) with SMTP id M2007100901201828757 for ; Tue, 09 Oct 2007 01:20:18 +0200 Received: from py-out-1112.google.com (py-out-1112.google.com [64.233.166.178]) by smtp2.rug.nl (8.12.11.20060308/8.12.11) with ESMTP id l98NKG2I028040 for ; Tue, 9 Oct 2007 01:20:16 +0200 (MEST) Received: by py-out-1112.google.com with SMTP id f47so2729326pye for ; Mon, 08 Oct 2007 16:20:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=beta; h=domainkey-signature:received:received:subject:from:reply-to:to:in-reply-to:references:content-type:organization:date:message-id:mime-version:x-mailer; bh=LD3uTVyoVRXS18cMbdAEH/t5RfGGUuSFs6pDmj1RxTc=; b=g0vH8DPCx3xG8SSKvAsd4Fw+E5FDBMFsf5KJSXGmIlR64OzH7Q/BxeRjntgyy/FVfcXhCId/6fvCEqOOOx9SLcZ+s1gWWMQr2R0Xsd8CpNP9vsCpfGTwkFw0UsHxWQG8+QTaooMqcdEB9C0qnmO9OJblnTStirnqtWNMxfz0gW0= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:subject:from:reply-to:to:in-reply-to:references:content-type:organization:date:message-id:mime-version:x-mailer; b=Tkh422JxTs39Ghh5NhfwxU0szS4qz06ycc6wi7XKd6WfdyZHVC0ohGdo46UsTQ3qFdF4mk9naD3h9O1A3V0+oaHFT7a3AJag6owDD+lnc4310BTPSMj+DO6oZ/vksn2hDx4hJAERTyde35adMlqHiM0WdNIH6OJs79os9sCGS/0= Received: by 10.35.87.10 with SMTP id p10mr2102442pyl.1191885615791; Mon, 08 Oct 2007 16:20:15 -0700 (PDT) Received: from debian.local ( [70.250.157.38]) by mx.google.com with ESMTPS id w29sm8012125pyg.2007.10.08.16.20.12 (version=SSLv3 cipher=RC4-MD5); Mon, 08 Oct 2007 16:20:13 -0700 (PDT) Subject: Re: Small bisonc++ question From: Dallas Clement Reply-To: dallas.a.clement@gmail.com To: f.b.brokken@rug.nl In-Reply-To: <20071008221446.GD27245@rc.rug.nl> References: <20070919073747.GC17408@rc.rug.nl> <1191222419.3468.43.camel@localhost> <20071001190610.GA6195@rc.rug.nl> <1191278771.3605.15.camel@localhost> <20071003072550.GC29648@rc.rug.nl> <1191450509.3474.48.camel@localhost> <20071004064310.GB23564@rc.rug.nl> <1191505247.3505.15.camel@localhost> <20071005101005.GA25725@rc.rug.nl> <1191853606.3413.47.camel@localhost> <20071008221446.GD27245@rc.rug.nl> Content-Type: multipart/mixed; boundary="=-WkuJNQQGw+AXQ14kxZP8" Organization: Clements Date: Mon, 08 Oct 2007 13:20:10 -0500 Message-Id: 
<1191867610.3413.66.camel@localhost> Mime-Version: 1.0 X-Mailer: Evolution 2.6.3 X-Spam-Flag: NO X-Scanned-By: milter-spamc/1.4.366 (smtp1.rug.nl [129.125.50.11]); Tue, 09 Oct 2007 01:20:19 +0200 X-Scanned-By: milter-spamc/1.4.366 (smtp2.rug.nl [129.125.50.12]); Tue, 09 Oct 2007 01:20:18 +0200 X-Spam-Status: NO, hits=-7.00 required=4.00 X-Spam-Report: Spam detection software on "smtp1.rug.nl". Questions: postmaster@rug.nl Content analysis details: (-7.0 points, 4.0 required) USER_IN_WHITELIST=-7 ____ Status: RO Content-Length: 2401 Lines: 129 --=-WkuJNQQGw+AXQ14kxZP8 Content-Type: text/plain Content-Transfer-Encoding: 7bit Hello Frank, I have been able to reproduce the mysterious problem I have been bothering you about this past many days. I can do it with a slight modification of the example code you provided. I have attached the modified grammar file. This grammar should be able to parse the following expression: const abc j = xyz; The output of the parsing should produce: type_specifier j scoped_identifier This is exactly what I would expect. However, if you were to uncomment the last two alternatives in the 'scoped_identifier' rule, you will observe a completely different output. Please uncomment as follows: scoped_identifier: IDENTIFIER { $$ = new Ident("scoped_identifier"); } | ':' IDENTIFIER { $$ = new Ident("scoped_identifier"); } | scoped_identifier ':' IDENTIFIER { $$ = new Ident("scoped_identifier"); } ; Now you will observe the following output when executed: type_specifier type_specifier scoped_identifier This is definitely not what I would expect. The identifier 'j' is definitely not a 'type_specifier'. This example reproduces the exact problem I have been struggling to explain to you this past several days. Now I may be doing something really stupid, but I'm not able to recognize what it is. What do you think? 
Thanks, Dallas --=-WkuJNQQGw+AXQ14kxZP8 Content-Disposition: attachment; filename=grammar Content-Type: text/plain; name=grammar; charset=utf-8 Content-Transfer-Encoding: 7bit %class-name Parser %filenames parser %parsefun-source parse.cc %scanner ../scanner/scanner.h %baseclass-preinclude preinclude.h %stype SemVal %token BOOL INT ENUM CONST IDENTIFIER %% constant_definition: CONST type_specifier IDENTIFIER '=' scoped_identifier ';' { cout << $2 << " " << $3 << " " << $5 << endl; } ; type_specifier: BOOL { $$ = new Ident("type_specifier"); } | INT { $$ = new Ident("type_specifier"); } | scoped_identifier { $$ = new Ident("type_specifier"); } ; scoped_identifier: IDENTIFIER { $$ = new Ident("scoped_identifier"); } /* | ':' IDENTIFIER { $$ = new Ident("scoped_identifier"); } | scoped_identifier ':' IDENTIFIER { $$ = new Ident("scoped_identifier"); } */ ; --=-WkuJNQQGw+AXQ14kxZP8-- bisonc++-4.13.01/documentation/regression/polymorphic/input0000644000175000017500000000002412633316117022726 0ustar frankfrankconst abc j = xyz; bisonc++-4.13.01/documentation/regression/polymorphic/parser/0000755000175000017500000000000012633316117023144 5ustar frankfrankbisonc++-4.13.01/documentation/regression/polymorphic/parser/parser.h0000644000175000017500000000173612633316117024620 0ustar frankfrank// Generated by Bisonc++ V4.10.00 on Mon, 27 Apr 2015 13:34:25 +0200 #ifndef Parser_h_included #define Parser_h_included // $insert baseclass #include "parserbase.h" // $insert scanner.h #include "../scanner/scanner.h" #undef Parser class Parser: public ParserBase { // $insert scannerobject Scanner d_scanner; public: Parser(); int parse(); private: void error(char const *msg); // called on (syntax) errors int lex(); // returns the next token from the // lexical scanner. void print(); // use, e.g., d_token, d_loc // support functions for parse(): void executeAction(int ruleNr); void errorRecovery(); int lookup(bool recovery); void nextToken(); void print__(); void exceptionHandler__(std::exception const &exc); }; inline Parser::Parser() : d_scanner(&d_val__) {} #endif bisonc++-4.13.01/documentation/regression/polymorphic/parser/grammar0000644000175000017500000000160712633316117024521 0ustar frankfrank%filenames parser %scanner ../scanner/scanner.h %debug %baseclass-preinclude preinclude.h %stype SemVal %token BOOL INT CONST IDENTIFIER %% // at: const abc j = xyz; // output should be: type_specifier j scoped_identifier constant_definition: CONST type_specifier IDENTIFIER { cout << "Mid-rule: " << $2 << ", " << $3 << endl; } '=' scoped_identifier ';' { cout << $2 << " " << $3 << " " << $6 << endl; } ; type_specifier: BOOL { $$ = new Ident("type_specifier"); } | INT { $$ = new Ident("type_specifier"); } | scoped_identifier { $$ = new Ident("s-type"); } ; scoped_identifier: IDENTIFIER { $$ = new Ident("scoped"); } | ':' IDENTIFIER { $$ = new Ident("scoped"); } | scoped_identifier ':' IDENTIFIER { $$ = new Ident("scoped"); } ; bisonc++-4.13.01/documentation/regression/polymorphic/parser/parser.ih0000644000175000017500000000155212633316117024765 0ustar frankfrank// Generated by Bisonc++ V4.10.00 on Mon, 27 Apr 2015 13:34:25 +0200 // Include this file in the sources of the class Parser. 
// $insert class.h #include "parser.h" inline void Parser::error(char const *msg) { std::cerr << msg << '\n'; } // $insert lex inline int Parser::lex() { return d_scanner.lex(); } inline void Parser::print() { print__(); // displays tokens if --print was specified } inline void Parser::exceptionHandler__(std::exception const &exc) { throw; // re-implement to handle exceptions thrown by actions } // Add here includes that are only required for the compilation // of Parser's sources. // UN-comment the next using-declaration if you want to use // int Parser's sources symbols from the namespace std without // specifying std:: //using namespace std; bisonc++-4.13.01/documentation/regression/polymorphic/parser/preinclude.h0000644000175000017500000000012412633316117025444 0ustar frankfrank#include "../enum/enum.h" #include "../ident/ident.h" #include "../semval/semval.h" bisonc++-4.13.01/documentation/regression/polymorphic/doc0000644000175000017500000000056012633316117022341 0ustar frankfrankThis is the original example resulting from a discussion I had with Dallas A. Clement about polymorphic semantic values. The program creates a simple parser that can be run as follows: demo < input When run as demo x < input debug output is shown, including selected semantic values on the semantic value stack. Also: see the files `dallas' and `dallas2' bisonc++-4.13.01/documentation/regression/polymorphic/semval/0000755000175000017500000000000012633316117023137 5ustar frankfrankbisonc++-4.13.01/documentation/regression/polymorphic/semval/semval.h0000644000175000017500000000267312633316117024607 0ustar frankfrank#ifndef _INCLUDED_SEMVAL_ #define _INCLUDED_SEMVAL_ #include "../base/base.h" class SemVal { Base *d_bp; public: SemVal(); // Never used by me, but needed to // enlarge the semantic value stack SemVal(Base *bp); // Semval will own bp SemVal(SemVal const &other); ~SemVal(); SemVal &operator=(SemVal const &other); Base const &base() const; template Class const &downcast(); private: void copy(SemVal const &other); void destroy(); }; inline Base const &SemVal::base() const { return *d_bp; } inline SemVal::SemVal() : d_bp(0) {} inline SemVal::SemVal(Base *bp) : d_bp(bp) {} inline SemVal::~SemVal() { destroy(); } inline SemVal::SemVal(SemVal const &other) { copy(other); } inline SemVal &SemVal::operator=(SemVal const &other) { if (this != &other) { destroy(); copy(other); } return *this; } inline void SemVal::copy(SemVal const &other) { d_bp = other.d_bp ? 
other.d_bp->clone() : 0; } inline void SemVal::destroy() { delete d_bp; } template inline Class const &SemVal::downcast() { return dynamic_cast(*d_bp); } inline ostream &operator<<(ostream &out, SemVal const &obj) { if (&obj.base()) return out << obj.base(); return out << "<0>"; } #endif bisonc++-4.13.01/documentation/regression/polymorphic/main.ih0000644000175000017500000000026412633316117023120 0ustar frankfrank#include #include #include #include #include "semval/semval.h" #include "ident/ident.h" #include "enum/enum.h" #include "parser/parser.h" bisonc++-4.13.01/documentation/regression/polymorphic/enum/0000755000175000017500000000000012633316117022614 5ustar frankfrankbisonc++-4.13.01/documentation/regression/polymorphic/enum/enum.ih0000644000175000017500000000005112633316117024076 0ustar frankfrank#include "enum.h" using namespace std; bisonc++-4.13.01/documentation/regression/polymorphic/enum/enum.h0000644000175000017500000000131112633316117023725 0ustar frankfrank#ifndef _INCLUDED_ENUM_ #define _INCLUDED_ENUM_ #include "../base/base.h" class Enum: public Base { public: enum Value { ZERO, ONE, TWO }; private: Value d_value; public: Enum(Value v); virtual Base *clone() const; Value value() const; // directly access the value virtual ostream &insert(ostream &os) const; }; inline Enum::Enum(Value v) : d_value(v) {} inline Base *Enum::clone() const { return new Enum(*this); } inline Enum::Value Enum::value() const { return d_value; } inline ostream &Enum::insert(ostream &out) const { return out << d_value; } #endif bisonc++-4.13.01/documentation/regression/polymorphic/base/0000755000175000017500000000000012633316117022562 5ustar frankfrankbisonc++-4.13.01/documentation/regression/polymorphic/base/base.ih0000644000175000017500000000005112633316117024012 0ustar frankfrank#include "base.h" using namespace std; bisonc++-4.13.01/documentation/regression/polymorphic/base/base.h0000644000175000017500000000060412633316117023645 0ustar frankfrank#ifndef _INCLUDED_BASE_ #define _INCLUDED_BASE_ // DON'T do this in real life: using namespace std; class Base { public: virtual ~Base(); virtual Base *clone() const = 0; virtual ostream &insert(ostream &os) const = 0; }; inline Base::~Base() {} inline ostream &operator<<(ostream &out, Base const &obj) { return obj.insert(out); } #endif bisonc++-4.13.01/documentation/regression/polymorphic/demo.cc0000644000175000017500000000125112633316117023102 0ustar frankfrank/* demo.cc */ #include "main.ih" int main(int argc, char **argv) { if (isatty(STDIN_FILENO)) { cout << "Run the program as `demo < input'\n" "Use any program argument to view parser's debug output\n"; } Parser parser; parser.setDebug(argc > 1); cout << "When input-redirecting `input' (e.g., `demo < input') the output" " should be:\n" " Mid-rule: s-type, j\n" " s-type j scoped\n" "\n"; int ret = parser.parse(); cout << "\n" "Parser returns " << ret << endl; return 0; } bisonc++-4.13.01/documentation/regression/danglingelse/0000755000175000017500000000000012633316117021737 5ustar frankfrankbisonc++-4.13.01/documentation/regression/danglingelse/parser/0000755000175000017500000000000012633316117023233 5ustar frankfrankbisonc++-4.13.01/documentation/regression/danglingelse/parser/bgram0000644000175000017500000000013212633316117024242 0ustar frankfrank%token EXPR ELSE %% stmnt: EXPR stmnt | EXPR stmnt ELSE stmnt | EXPR ';' ; bisonc++-4.13.01/documentation/regression/danglingelse/parser/grammar0000644000175000017500000000013212633316117024600 0ustar frankfrank%token EXPR ELSE %% stmnt: EXPR stmnt | EXPR 
stmnt ELSE stmnt | EXPR ';' ; bisonc++-4.13.01/documentation/regression/danglingelse/doc0000644000175000017500000000031412633316117022425 0ustar frankfrankThe well-known dangling-else problem. This grammar shows one S/R conflict, which is solved as SHIFT, using the default S/R conflict resolution method. By using %expect 1 the S/R warning can be prevented bisonc++-4.13.01/documentation/regression/error/0000755000175000017500000000000012633316117020434 5ustar frankfrankbisonc++-4.13.01/documentation/regression/error/scanner/0000755000175000017500000000000012633316117022065 5ustar frankfrankbisonc++-4.13.01/documentation/regression/error/scanner/lexer0000644000175000017500000000031712633316117023130 0ustar frankfrank%filenames scanner %interactive %% [ \t]+ // Often used: skip white space [0-9]+ return Parser::NR; .|\n return matched()[0]; bisonc++-4.13.01/documentation/regression/error/scanner/scanner.ih0000644000175000017500000000007012633316117024035 0ustar frankfrank#include "scanner.h" #include "../parser/Parserbase.h" bisonc++-4.13.01/documentation/regression/error/scanner/scanner.h0000644000175000017500000000172712633316117023676 0ustar frankfrank#ifndef Scanner_H_INCLUDED_ #define Scanner_H_INCLUDED_ // $insert baseclass_h #include "scannerbase.h" // $insert classHead class Scanner: public ScannerBase { public: explicit Scanner(std::istream &in = std::cin, std::ostream &out = std::cout); // $insert lexFunctionDecl int lex(); private: int lex__(); int executeAction__(size_t ruleNr); void print(); void preCode(); // re-implement this function for code that must // be exec'ed before the patternmatching starts void postCode(PostEnum__); }; // $insert scannerConstructors inline Scanner::Scanner(std::istream &in, std::ostream &out) : ScannerBase(in, out) {} inline void Scanner::preCode() { // optionally replace by your own code } inline void Scanner::postCode(PostEnum__) {} inline void Scanner::print() { print__(); } #endif // Scanner_H_INCLUDED_ bisonc++-4.13.01/documentation/regression/error/parser/0000755000175000017500000000000012633316117021730 5ustar frankfrankbisonc++-4.13.01/documentation/regression/error/parser/bgram0000644000175000017500000000013112633316117022736 0ustar frankfrank %token NR %% lines: lines line | line ; line: NR '\n' | error '\n' ; bisonc++-4.13.01/documentation/regression/error/parser/grammar0000644000175000017500000000033612633316117023303 0ustar frankfrank%scanner ../scanner/scanner.h %token NR %print-tokens %% lines: lines line | line ; line: NR '\n' { std::cout << " OK\n"; } | error '\n' { std::cout << " ERROR\n"; } ; bisonc++-4.13.01/documentation/regression/error/doc0000644000175000017500000000043612633316117021127 0ustar frankfrankThis example offers a demonstration of error-handling. Enter lines, each containing one integral value. Use ^C to end the program other line content should result in a `syntax error' blanks and tabs are ignored (and thus OK) An empty line, however, *will* result in a `syntax error' bisonc++-4.13.01/documentation/regression/error/demo.cc0000644000175000017500000000112212633316117021663 0ustar frankfrank/* error.cc */ #include #include "parser/Parser.h" using namespace std; int main(int argc, char **argv) { cout << "Enter lines, each containing one integral value.\n" "Other lines (also empty lines) should result in a " "`syntax error'\n" "Blanks and tabs are ignored. 
Use ^c or ^d to end the program\n" "Use any program argument to view parser debug output\n"; Parser parser; parser.setDebug(argc > 1); parser.parse(); return 0; } bisonc++-4.13.01/documentation/regression/fun/0000755000175000017500000000000012633316117020073 5ustar frankfrankbisonc++-4.13.01/documentation/regression/fun/scanner/0000755000175000017500000000000012633316117021524 5ustar frankfrankbisonc++-4.13.01/documentation/regression/fun/scanner/lexer0000644000175000017500000000412112633316117022564 0ustar frankfrank%interactive %filenames scanner digits [0-9]+ e [eE] sign [-+] %% [ \t]+ {digits} return Parser::INT; {digits}"." | {digits}"."{digits} | {digits}{e}{sign}{digits} | {digits}"."{e}{sign}{digits} | {digits}"."{digits}{e}{sign}{digits} | "."{digits} | "."{e}{sign}{digits} | "."{digits}{e}{sign}{digits} return Parser::DOUBLE; "'"."'" return Parser::CHAR; "+=" return Parser::ADDA; "-=" return Parser::SUBA; "*=" return Parser::MULA; "/=" return Parser::DIVA; "%=" return Parser::MODA; "&=" return Parser::ANDA; "^=" return Parser::XORA; "|=" return Parser::ORA; "<<=" return Parser::LSHIFTA; ">>=" return Parser::RSHIFTA; "<<" return Parser::LEFTSHIFT; ">>" return Parser::RIGHTSHIFT; char | int | double return Parser::DATATYPE; help return Parser::HELP; rad | deg | grad return Parser::ANGLETYPE; list return Parser::LIST; stop | quit | exit return Parser::QUIT; PI | E return Parser::MATHCONST; [a-zA-Z_][a-zA-Z_0-9]* return Parser::IDENT; \n|. return matched()[0]; %% bisonc++-4.13.01/documentation/regression/fun/scanner/scanner.ih0000644000175000017500000000006712633316117023502 0ustar frankfrank#include "scanner.h" #include "../parser/parserbase.h" bisonc++-4.13.01/documentation/regression/fun/scanner/scanner.h0000644000175000017500000000203512633316117023326 0ustar frankfrank// Generated by Flexc++ V0.93.00 on Mon, 20 Feb 2012 13:06:48 +0100 #ifndef Scanner_H_INCLUDED_ #define Scanner_H_INCLUDED_ // $insert baseclass_h #include "scannerbase.h" // $insert classHead class Scanner: public ScannerBase { public: explicit Scanner(std::istream &in = std::cin, std::ostream &out = std::cout); // $insert lexFunctionDecl int lex(); private: int lex__(); int executeAction__(size_t ruleNr); void print(); void preCode(); // re-implement this function for code that must // be exec'ed before the patternmatching starts void postCode(PostEnum__); }; inline void Scanner::postCode(PostEnum__) {} // $insert scannerConstructors inline Scanner::Scanner(std::istream &in, std::ostream &out) : ScannerBase(in, out) {} inline void Scanner::preCode() { // optionally replace by your own code } inline void Scanner::print() { print__(); } #endif // Scanner_H_INCLUDED_ bisonc++-4.13.01/documentation/regression/fun/parser/0000755000175000017500000000000012633316117021367 5ustar frankfrankbisonc++-4.13.01/documentation/regression/fun/parser/_storeident.cc0000644000175000017500000000013312633316117024212 0ustar frankfrank#include "parser.ih" void Parser::storeIdent() { d_lastIdent = d_scanner.matched(); } bisonc++-4.13.01/documentation/regression/fun/parser/_lvalue.cc0000644000175000017500000000054412633316117023330 0ustar frankfrank#include "parser.ih" RuleValue &Parser::lvalue(RuleValue &e) { return e.tag() == RuleValue::VARIABLE ? d_value[e.varIdx()] : e; } RuleValue const &Parser::rvalue(RuleValue const &e) const { return e.tag() == RuleValue::VARIABLE ? 
d_value[e.varIdx()] : e; } bisonc++-4.13.01/documentation/regression/fun/parser/_unary.cc0000644000175000017500000000143212633316117023173 0ustar frankfrank#include "parser.ih" RuleValue Parser::unary(int operation, RuleValue const &e) { if (d_error) return e; error(e.tag() == RuleValue::FUNCTION, "Function names have no values. Forgot argument(s)?"); RuleValue value = rvalue(e); switch (operation) { case RuleValue::CHAR: return RuleValue(value.asChar()); case RuleValue::INT: return RuleValue(value.asInt()); case RuleValue::DOUBLE: return RuleValue(value.asDouble()); case '-': return RuleValue(-value); case '~': integral(value); return RuleValue(~value); default: error(true, "Illegal operand for unary operator"); break; } return value; // not reached } bisonc++-4.13.01/documentation/regression/fun/parser/bgram0000644000175000017500000000620012633316117022400 0ustar frankfrank%token IDENT QUIT LIST INT CHAR DOUBLE DATATYPE ANGLETYPE MATHCONST HELP // A of 'assignment'. Priorities: see C book, p. 518 %right '=' ADDA SUBA MULA DIVA MODA ANDA XORA ORA LSHIFTA RSHIFTA %left '|' %left '^' %left '&' %left LEFTSHIFT RIGHTSHIFT %left '+' '-' %left '*' '/' '%' %right uMinus %left '(' %% start: prompt lines ; lines: lines line | line ; line: action prompt ; action: cmd '\n' | error ; prompt: { prompt(); } ; cmd: QUIT { ACCEPT(); } | LIST { list(); } | help { help(); } | angletype | expr { display($1); } | // empty ; angletype: ANGLETYPE { setAngleType(); } ; expr: MATHCONST { $$ = mathConst(); } | CHAR { $$ = newValue(CHAR); } | INT { $$ = newValue(INT); } | DOUBLE { $$ = newValue(DOUBLE); } | IDENT { $$ = newValue(IDENT); } | expr '(' arglist ')' { $$ = call($1, $3); } | '-' expr %prec uMinus { $$ = unary('-', $2); } | '~' expr %prec uMinus { $$ = unary('~', $2); } | '(' type ')' expr %prec uMinus { $$ = unary($2.value().getInt(), $4); } | expr '+' expr { $$ = binary($1, '+', $3); } | expr '-' expr { $$ = binary($1, '-', $3); } | expr '*' expr { $$ = binary($1, '*', $3); } | expr '/' expr { $$ = binary($1, '/', $3); } | expr '%' expr { $$ = binary($1, '%', $3); } | expr '&' expr { $$ = binary($1, '&', $3); } | expr '^' expr { $$ = binary($1, '^', $3); } | expr '|' expr { $$ = binary($1, '|', $3); } | expr RIGHTSHIFT expr { $$ = binary($1, RIGHTSHIFT, $3); } | expr LEFTSHIFT expr { $$ = binary($1, LEFTSHIFT, $3); } | '(' expr ')' { $$ = $2; } | expr '=' expr { $$ = assign($1, '=', $3); } | expr ADDA expr { $$ = assign($1, ADDA, $3); } | expr SUBA expr { $$ = assign($1, SUBA, $3); } | expr MULA expr { $$ = assign($1, MULA, $3); } | expr DIVA expr { $$ = assign($1, DIVA, $3); } | expr MODA expr { $$ = assign($1, MODA, $3); } | expr ANDA expr { $$ = assign($1, ANDA, $3); } | expr ORA expr { $$ = assign($1, ORA, $3); } | expr XORA expr { $$ = assign($1, XORA, $3); } | expr LSHIFTA expr { $$ = assign($1, LSHIFTA, $3); } | expr RSHIFTA expr { $$ = assign($1, RSHIFTA, $3); } ; arglist: arglist ',' expr { $$ = addArg($1, $3); } | expr { $$ = firstArg($1); } ; type: DATATYPE { $$ = setDataType(); } ; help: '?' | HELP ; bisonc++-4.13.01/documentation/regression/fun/parser/_angle.cc0000644000175000017500000000044612633316117023127 0ustar frankfrank#include "parser.ih" double Parser::angle(double radians) // convert radians to angle type { return d_angleType == RADIANS ? radians : d_angleType == DEG360 ? 
180 / M_PI * radians : 200 / M_PI * radians; } bisonc++-4.13.01/documentation/regression/fun/parser/parser.h0000644000175000017500000000705012633316117023036 0ustar frankfrank// Generated by Bisonc++ V4.10.00 on Mon, 27 Apr 2015 13:24:32 +0200 #ifndef Parser_h_included #define Parser_h_included #include #include #include #include #include "../rulevalue/_rulevalue.h" // $insert baseclass #include "parserbase.h" // $insert scanner.h #include "../scanner/scanner.h" #undef Parser class Parser: public ParserBase { public: typedef std::map FunctionMap; typedef std::map DoubleMap; private: enum AngleType { RADIANS, DEG360, DEG400 }; typedef std::map SymbolMap; struct ShowVar { std::vector &d_value; ShowVar(std::vector &value); void operator()(SymbolMap::value_type &v); }; // $insert scannerobject Scanner d_scanner; std::string d_lastIdent; AngleType d_angleType; bool d_error; SymbolMap d_symtab; std::vector d_value; static FunctionMap s_functions; static DoubleMap s_doubles; public: Parser(); int parse(); private: RuleValue call(RuleValue const &funName, RuleValue &argv); void error(char const *msg); // called on (syntax) errors // called on semantic errors void error(bool ifTrue, char const *msg); int lex(); // returns the next token from the // lexical scanner. void print(); // use, e.g., d_token, d_loc RuleValue &addArg(RuleValue &argv, RuleValue &arg); double angle(double radians); RuleValue &assign(RuleValue &lvalue, int operation, RuleValue const &rvalue); RuleValue binary(RuleValue lvalue, int operation, RuleValue const &rvalue); void display(RuleValue const &e); void div0(RuleValue const &vl, RuleValue const &vr); RuleValue firstArg(RuleValue &rv); RuleValue function(RuleValue const &fun, RuleValue const &e); RuleValue function(RuleValue const &fun, RuleValue const &arg1, RuleValue const &arg2); void integral(RuleValue const &v1); void help(); void integral(RuleValue const &v1, RuleValue const &v2); void list(); RuleValue &lvalue(RuleValue &e); RuleValue mathConst(); void prompt(); double radians(double angle); RuleValue const &rvalue(RuleValue const &e) const; void setAngleType(); RuleValue setDataType(); RuleValue setFunction(); void storeIdent(); RuleValue unary(int operation, RuleValue const &e); RuleValue newValue(int typeToken); RuleValue variable(); RuleValue identValue(); // support functions for parse(): void executeAction(int ruleNr); void errorRecovery(); int lookup(bool recovery); void nextToken(); void print__(); void exceptionHandler__(std::exception const &exc); }; inline Parser::Parser() : d_value(1) {} #endif bisonc++-4.13.01/documentation/regression/fun/parser/_assign.cc0000644000175000017500000000256012633316117023324 0ustar frankfrank#include "parser.ih" RuleValue &Parser::assign(RuleValue &v1, int operation, RuleValue const &v2) { if (d_error) return v1; error(v1.tag() != RuleValue::VARIABLE, "Non-lvalue in assignment"); error( v2.tag() == RuleValue::FUNCTION, "Function names have no values. Forgot arguments?" 
); RuleValue &value = lvalue(v1); RuleValue const &rv2 = rvalue(v2); switch (operation) { case '=': value = rv2; break; case ADDA: value += rv2; break; case SUBA: value -= rv2; break; case MULA: value *= rv2; break; case DIVA: div0(value, rv2); value /= rv2; break; case MODA: div0(value, rv2); integral(value, rv2); value %= rv2; break; case ANDA: integral(value, rv2); value &= rv2; break; case XORA: integral(value, rv2); value ^= rv2; break; case ORA: integral(value, rv2); value ^= rv2; break; case RSHIFTA: integral(value, rv2); value >>= rv2; break; case LSHIFTA: integral(value, rv2); value >>= rv2; break; } return v1; } bisonc++-4.13.01/documentation/regression/fun/parser/_newvalue.cc0000644000175000017500000000066112633316117023666 0ustar frankfrank#include "parser.ih" RuleValue Parser::newValue(int typeToken) { switch (typeToken) { case CHAR: return RuleValue( static_cast(A2x(d_scanner.matched().substr(1)))); case INT: return RuleValue(static_cast(A2x(d_scanner.matched()))); case IDENT: return identValue(); } return RuleValue(static_cast(A2x(d_scanner.matched()))); } bisonc++-4.13.01/documentation/regression/fun/parser/_list.cc0000644000175000017500000000027112633316117023010 0ustar frankfrank#include "parser.ih" void Parser::list() { if (!d_symtab.size()) cout << "No variables\n"; else for_each(d_symtab.begin(), d_symtab.end(), ShowVar(d_value)); } bisonc++-4.13.01/documentation/regression/fun/parser/_radians.cc0000644000175000017500000000042612633316117023460 0ustar frankfrank#include "parser.ih" double Parser::radians(double angle) // convert angles to radians { return d_angleType == RADIANS ? angle : d_angleType == DEG360 ? M_PI / 180 * angle : M_PI / 200 * angle; } bisonc++-4.13.01/documentation/regression/fun/parser/_call.cc0000644000175000017500000000231112633316117022745 0ustar frankfrank#include "parser.ih" RuleValue Parser::call(RuleValue const &function, RuleValue &argv) { RuleValue ret; if (!d_error) { error(function.tag() != RuleValue::FUNCTION, "No such function"); error(function.fun().arity() != argv.size(), "Incorrect number of arguments"); switch(argv.size()) { case 1: { double value = rvalue(argv[0]).asDouble(); if ( function.fun().type() == RuleValue::Function::RAD_IN_DOUBLE_OUT ) value = radians(value); value = (function.fun().unary())(value); if ( function.fun().type() == RuleValue::Function::DOUBLE_IN_RAD_OUT ) value = angle(value); ret = value; } break; case 2: ret = (function.fun().binary())(rvalue(argv[0]).asDouble(), rvalue(argv[1]).asDouble()); break; } } return ret; } bisonc++-4.13.01/documentation/regression/fun/parser/_prompt.cc0000644000175000017500000000014012633316117023351 0ustar frankfrank#include "parser.ih" void Parser::prompt() { cout << "? 
" << flush; d_error = false; } bisonc++-4.13.01/documentation/regression/fun/parser/_setangletype.cc0000644000175000017500000000067712633316117024553 0ustar frankfrank#include "parser.ih" void Parser::setAngleType() { string type = d_scanner.matched(); if (type == "rad") { cout << "Angles in radians\n"; d_angleType = RADIANS; } else if (type == "deg") { cout << "Angles in 360-degrees circles\n"; d_angleType = DEG360; } else { cout << "Angles in 400-degrees circles\n"; d_angleType = DEG400; } } bisonc++-4.13.01/documentation/regression/fun/parser/grammar0000644000175000017500000000636312633316117022750 0ustar frankfrank%scanner ../scanner/scanner.h %baseclass-preinclude ../rulevalue/_rulevalue.h %filenames parser //%debug %stype RuleValue %token IDENT QUIT LIST INT CHAR DOUBLE DATATYPE ANGLETYPE MATHCONST HELP // A of 'assignment'. Priorities: see C book, p. 518 %right '=' ADDA SUBA MULA DIVA MODA ANDA XORA ORA LSHIFTA RSHIFTA %left '|' %left '^' %left '&' %left LEFTSHIFT RIGHTSHIFT %left '+' '-' %left '*' '/' '%' %right uMinus %left '(' %% start: prompt lines ; lines: lines line | line ; line: action prompt ; action: cmd '\n' | error ; prompt: { prompt(); } ; cmd: QUIT { ACCEPT(); } | LIST { list(); } | help { help(); } | angletype | expr { display($1); } | // empty ; angletype: ANGLETYPE { setAngleType(); } ; expr: MATHCONST { $$ = mathConst(); } | CHAR { $$ = newValue(CHAR); } | INT { $$ = newValue(INT); } | DOUBLE { $$ = newValue(DOUBLE); } | IDENT { $$ = newValue(IDENT); } | expr '(' arglist ')' { $$ = call($1, $3); } | '-' expr %prec uMinus { $$ = unary('-', $2); } | '~' expr %prec uMinus { $$ = unary('~', $2); } | '(' type ')' expr %prec uMinus { $$ = unary($2.asInt(), $4); } | expr '+' expr { $$ = binary($1, '+', $3); } | expr '-' expr { $$ = binary($1, '-', $3); } | expr '*' expr { $$ = binary($1, '*', $3); } | expr '/' expr { $$ = binary($1, '/', $3); } | expr '%' expr { $$ = binary($1, '%', $3); } | expr '&' expr { $$ = binary($1, '&', $3); } | expr '^' expr { $$ = binary($1, '^', $3); } | expr '|' expr { $$ = binary($1, '|', $3); } | expr RIGHTSHIFT expr { $$ = binary($1, RIGHTSHIFT, $3); } | expr LEFTSHIFT expr { $$ = binary($1, LEFTSHIFT, $3); } | '(' expr ')' { $$ = $2; } | expr '=' expr { $$ = assign($1, '=', $3); } | expr ADDA expr { $$ = assign($1, ADDA, $3); } | expr SUBA expr { $$ = assign($1, SUBA, $3); } | expr MULA expr { $$ = assign($1, MULA, $3); } | expr DIVA expr { $$ = assign($1, DIVA, $3); } | expr MODA expr { $$ = assign($1, MODA, $3); } | expr ANDA expr { $$ = assign($1, ANDA, $3); } | expr ORA expr { $$ = assign($1, ORA, $3); } | expr XORA expr { $$ = assign($1, XORA, $3); } | expr LSHIFTA expr { $$ = assign($1, LSHIFTA, $3); } | expr RSHIFTA expr { $$ = assign($1, RSHIFTA, $3); } ; arglist: arglist ',' expr { $$ = addArg($1, $3); } | expr { $$ = firstArg($1); } ; type: DATATYPE { $$ = setDataType(); } ; help: '?' 
| HELP ; bisonc++-4.13.01/documentation/regression/fun/parser/_error.cc0000644000175000017500000000036612633316117023173 0ustar frankfrank#include "parser.ih" void Parser::error(char const *) { if (!d_error) cout << "At " << d_scanner.matched() << ": error in expression.\n" "(" << static_cast(d_scanner.matched()[0]) << ")\n"; d_error = true; } bisonc++-4.13.01/documentation/regression/fun/parser/_identvalue.cc0000644000175000017500000000116212633316117024175 0ustar frankfrank#include "parser.ih" RuleValue Parser::identValue() { DoubleMap::iterator doubleIt = s_doubles.find(d_scanner.matched()); if (doubleIt != s_doubles.end()) return RuleValue(doubleIt->second); FunctionMap::iterator funIt = s_functions.find(d_scanner.matched()); if (funIt != s_functions.end()) return RuleValue(funIt->second); unsigned symbolIdx = d_symtab[d_scanner.matched()]; if (symbolIdx == 0) // new identifier { d_symtab[d_scanner.matched()] = symbolIdx = d_value.size(); d_value.push_back(0); } return RuleValue(symbolIdx); } bisonc++-4.13.01/documentation/regression/fun/parser/parser.ih0000644000175000017500000000206412633316117023207 0ustar frankfrank// Generated by Bisonc++ V4.10.00 on Mon, 27 Apr 2015 13:24:32 +0200 // Include this file in the sources of the class Parser. #include #include #include #include "_a2x.h" // $insert class.h #include "parser.h" inline Parser::ShowVar::ShowVar(std::vector &value) : d_value(value) {} inline RuleValue &Parser::addArg(RuleValue &argv, RuleValue &arg) { return argv.push_arg(arg); } // $insert lex inline int Parser::lex() { return d_scanner.lex(); } inline void Parser::print() { print__(); // displays tokens if --print was specified } inline void Parser::exceptionHandler__(std::exception const &exc) { throw; // re-implement to handle exceptions thrown by actions } // Add here includes that are only required for the compilation // of Parser's sources. #include // UN-comment the next using-declaration if you want to use // int Parser's sources symbols from the namespace std without // specifying std:: using namespace std; bisonc++-4.13.01/documentation/regression/fun/parser/_setfunction.cc0000644000175000017500000000102112633316117024370 0ustar frankfrank//#include "parser.hh" // //RuleValue Parser::setFunction() //{ // stdFun1Map::iterator it; // see if it's a unary function // // it = s_stdfun1.find(d_lastIdent); // // if (it != s_stdfun1.end()) // return RuleValue(&*it); // // // stdFun2Map::iterator it2; // see if it's a binary function // // it2 = s_stdfun2.find(d_lastIdent); // // if (it2 != s_stdfun2.end()) // return RuleValue(&*it2); // // return variable(); // no, so it must be a variable //} bisonc++-4.13.01/documentation/regression/fun/parser/_variable.cc0000644000175000017500000000124012633316117023617 0ustar frankfrank// #include "parser.hh" // // RuleValue Parser::variable() // { // // symtab stores the indices of the values of the variables in the // // d_value vector. A new variable will not have an index, so then // // 0 is returned. In that case, the variable's value will be stored // // at the end of the vector, and the variable's index in d_symtab // // is updated accordingly. 
// // unsigned idx = d_symtab[d_lastIdent]; // // if (idx == 0) // new identifier // { // d_symtab[d_lastIdent] = idx = d_value.size(); // d_value.push_back(0); // } // // return RuleValue(idx, RuleValue::A_VARIABLE); // } bisonc++-4.13.01/documentation/regression/fun/parser/_firstarg.cc0000644000175000017500000000017012633316117023654 0ustar frankfrank#include "parser.ih" RuleValue Parser::firstArg(RuleValue &rv) { return RuleValue(new vector(1, rv)); } bisonc++-4.13.01/documentation/regression/fun/parser/_showvarfun.cc0000644000175000017500000000030612633316117024236 0ustar frankfrank#include "parser.ih" void Parser::ShowVar::operator()(SymbolMap::value_type &v) { cout << v.first << ": " << d_value[v.second] << " (" << d_value[v.second].tagName() << ")" << endl; } bisonc++-4.13.01/documentation/regression/fun/parser/_mathconst.cc0000644000175000017500000000030612633316117024034 0ustar frankfrank#include "parser.ih" RuleValue Parser::mathConst() { string c = d_scanner.matched(); return RuleValue( c == "E" ? M_E : M_PI ); } bisonc++-4.13.01/documentation/regression/fun/parser/_a2x.h0000644000175000017500000000251212633316117022371 0ustar frankfrank#ifndef _INCLUDED_BOBCAT_A2X_ #define _INCLUDED_BOBCAT_A2X_ #include #include class A2x: public std::istringstream { static bool s_lastFail; public: A2x(); A2x(char const *txt); // initialize from text A2x(std::string const &str); A2x(A2x const &other); template operator Type(); template Type to(); A2x &operator=(char const *txt); A2x &operator=(std::string const &str); A2x &operator=(A2x const &other); static bool lastFail(); }; inline A2x::A2x() {} inline A2x::A2x(char const *txt) // initialize from text : std::istringstream(txt) {} inline A2x::A2x(std::string const &str) : std::istringstream(str.c_str()) {} inline A2x::A2x(A2x const &other) : std::istringstream(other.str()) {} template inline Type A2x::to() { Type t; return (s_lastFail = (*this >> t).fail()) ? Type() : t; } template inline A2x::operator Type() { return to(); } inline A2x &A2x::operator=(std::string const &str) { return operator=(str.c_str()); } inline A2x &A2x::operator=(A2x const &other) { return operator=(other.str()); } inline bool A2x::lastFail() { return s_lastFail; } #endif bisonc++-4.13.01/documentation/regression/fun/parser/_integral.cc0000644000175000017500000000050112633316117023636 0ustar frankfrank#include "parser.ih" void Parser::integral(RuleValue const &v1) { error ( v1.tag() != RuleValue::CHAR && v1.tag() != RuleValue::INT, "Non-integral operand on integral operator" ); } void Parser::integral(RuleValue const &v1, RuleValue const &v2) { integral(v1); integral(v2); } bisonc++-4.13.01/documentation/regression/fun/parser/_binary.cc0000644000175000017500000000174512633316117023330 0ustar frankfrank#include "parser.ih" RuleValue Parser::binary(RuleValue v1, int operation, RuleValue const &v2) { error( v1.tag() == RuleValue::FUNCTION ||v2.tag() == RuleValue::FUNCTION, "Function names have no values. Forgot arguments?" 
); switch (operation) { case '+': v1 += v2; break; case '-': v1 -= v2; break; case '*': v1 *= v2; break; case '/': div0(v1, v2); v1 /= v2; break; case '%': div0(v1, v2); integral(v1, v2); v1 %= v2; break; case '&': integral(v1, v2); v1 &= v2; break; case '^': v1 ^= v2; break; case '|': v1 |= v2; break; case RIGHTSHIFT: v1 >>= v2; break; case LEFTSHIFT: v1 <<= v2; break; } return v1; } bisonc++-4.13.01/documentation/regression/fun/parser/_display.cc0000644000175000017500000000017012633316117023500 0ustar frankfrank#include "parser.ih" void Parser::display(RuleValue const &e) { if (!d_error) cout << rvalue(e) << '\n'; } bisonc++-4.13.01/documentation/regression/fun/parser/_error2.cc0000644000175000017500000000026512633316117023253 0ustar frankfrank#include "parser.ih" void Parser::error(bool ifTrue, char const *msg) { if (d_error || not ifTrue) return; d_error = true; cout << msg << '\n'; ERROR(); } bisonc++-4.13.01/documentation/regression/fun/parser/_help.cc0000644000175000017500000000125412633316117022767 0ustar frankfrank#include "parser.ih" void Parser::help() { cout << "Operators:\n" " = += -= *= /= %= &= ^= |= <<= >>=\n" " |\n" " ^\n" " &\n" " << >>\n" " + -\n" " * / %\n" " (typecast) - (unary)\n" "Angle specification selectors: deg (= 360), grad (=400), rad\n" "Unary functions: abs, sqrt, exp, log, ln, log10,\n" " sin, cos, tan asin, acos, atan\n" "Binary function: pow\n" "Data types: char, int, double\n" "Variables are automatically defined when used, initialized to 0\n" "General usage: exit, quit, help, ?, list\n"; } bisonc++-4.13.01/documentation/regression/fun/parser/_setdatatype.cc0000644000175000017500000000043112633316117024362 0ustar frankfrank#include "parser.ih" RuleValue Parser::setDataType() { string type = d_scanner.matched(); return type == "char" ? RuleValue(RuleValue::CHAR) : type == "int" ? 
RuleValue(RuleValue::INT) : /* double */ RuleValue(RuleValue::DOUBLE); } bisonc++-4.13.01/documentation/regression/fun/parser/_data.cc0000644000175000017500000000332412633316117022750 0ustar frankfrank#include "parser.ih" namespace{ typedef Parser::FunctionMap FunMap; typedef RuleValue::Function Function; FunMap::value_type funArray[] = { FunMap::value_type("abs", Function(static_cast(abs))), FunMap::value_type("sqrt", Function(&sqrt)), FunMap::value_type("exp", Function(&exp)), FunMap::value_type("log", Function(&log)), FunMap::value_type("ln", Function(&log)), FunMap::value_type("log10", Function(&log10)), FunMap::value_type("sin", Function(&sin, Function::RAD_IN_DOUBLE_OUT)), FunMap::value_type("cos", Function(&cos, Function::RAD_IN_DOUBLE_OUT)), FunMap::value_type("tan", Function(&tan, Function::RAD_IN_DOUBLE_OUT)), FunMap::value_type("asin", Function(&asin, Function::DOUBLE_IN_RAD_OUT)), FunMap::value_type("acos", Function(&acos, Function::DOUBLE_IN_RAD_OUT)), FunMap::value_type("atan", Function(&atan, Function::DOUBLE_IN_RAD_OUT)), FunMap::value_type("pow", Function(&pow)), }; unsigned const sizeofFunctionArray = sizeof(funArray) / sizeof(FunMap::value_type); typedef Parser::DoubleMap::value_type DoubleValue; DoubleValue doubleArray[] = { DoubleValue("e", M_El), DoubleValue("E", M_El), DoubleValue("pi", M_PI), DoubleValue("PI", M_PI), }; unsigned const sizeofDoubleArray = sizeof(doubleArray) / sizeof(DoubleValue); }; Parser::FunctionMap Parser::s_functions( funArray, funArray + sizeofFunctionArray ); Parser::DoubleMap Parser::s_doubles( doubleArray, doubleArray + sizeofDoubleArray ); bool A2x::s_lastFail = false; bisonc++-4.13.01/documentation/regression/fun/parser/_div0.cc0000644000175000017500000000027412633316117022702 0ustar frankfrank#include "parser.ih" void Parser::div0(RuleValue const &vl, RuleValue const &vr) { error( abs(vl.asDouble()) > 1e100 * abs(vr.asDouble()), "Number too large" ); } bisonc++-4.13.01/documentation/regression/fun/doc0000644000175000017500000000051312633316117020562 0ustar frankfrankA more complex calculator offering types (char, int, double) and functions. Expressions and commands must be entered on lines. Compiling this program may take a little while. To end the program, enter: quit When the program has started the command help will provide an overview of its possibilities. bisonc++-4.13.01/documentation/regression/fun/rulevalue/0000755000175000017500000000000012633316117022077 5ustar frankfrankbisonc++-4.13.01/documentation/regression/fun/rulevalue/_rulevalue.h0000644000175000017500000001042312633316117024413 0ustar frankfrank#ifndef _INCLUDED_RULEVALUE_ #define _INCLUDED_RULEVALUE_ #include #include class RuleValue { public: typedef std::vector Args; struct Function { enum Type { DOUBLE_IN_DOUBLE_OUT, DOUBLE_IN_RAD_OUT, // fun expects double, returns radians RAD_IN_DOUBLE_OUT, // fun expects radians, returns double }; union Ptr { double (*unary)(double); double (*binary)(double, double); }; Type d_type; Ptr d_ptr; size_t d_arity; Function(double (*)(double), Type t = DOUBLE_IN_DOUBLE_OUT); Function(double (*)(double, double)); size_t arity() const; double (*unary() const)(double); double (*binary() const)(double, double); double value() const; Type type() const; }; private: union ValueUnion { char c; int i; double d; size_t s; Function const *f; Args *args; }; static char const *s_tagName[]; public: enum ValueTag // modify data.cc if this changes { ERROR, // something failed. 
CHAR, INT, DOUBLE, VARIABLE, FUNCTION, ARG_VECTOR, }; private: ValueTag d_tag; ValueUnion d_value; public: RuleValue(); RuleValue(char c); RuleValue(int i); RuleValue(double d); RuleValue(unsigned idx); RuleValue(Function const &funRef); RuleValue(Args *args); RuleValue(char const *ident); RuleValue(RuleValue const &other); ~RuleValue(); ValueTag tag() const; char const *tagName() const; char asChar() const; int asInt() const; double asDouble() const; Function const &fun() const; size_t varIdx() const; RuleValue operator-() const; RuleValue operator~() const; RuleValue &operator=(RuleValue const &other); RuleValue &operator+=(RuleValue const &other); RuleValue &operator-=(RuleValue const &other); RuleValue &operator*=(RuleValue const &other); RuleValue &operator/=(RuleValue const &other); RuleValue &operator%=(RuleValue const &other); RuleValue &operator&=(RuleValue const &other); RuleValue &operator^=(RuleValue const &other); RuleValue &operator|=(RuleValue const &other); RuleValue &operator<<=(RuleValue const &other); RuleValue &operator>>=(RuleValue const &other); RuleValue const &operator[](size_t idx) const; size_t size() const; RuleValue &push_arg(RuleValue const &other); private: void destroy(); void copy(RuleValue const &other); }; inline RuleValue::RuleValue(RuleValue const &other) { copy(other); } inline RuleValue::~RuleValue() { destroy(); } inline void RuleValue::destroy() { if (d_tag == ARG_VECTOR) delete d_value.args; } inline RuleValue const &RuleValue::operator[](size_t idx) const { return (*d_value.args)[idx]; } inline RuleValue RuleValue::operator~() const { return RuleValue(~asInt()); } inline RuleValue::ValueTag RuleValue::tag() const { return d_tag; } inline char const *RuleValue::tagName() const { return s_tagName[d_tag]; } inline size_t RuleValue::size() const { return d_value.args->size(); } inline size_t RuleValue::Function::arity() const { return d_arity; } inline RuleValue::Function::Type RuleValue::Function::type() const { return d_type; } inline double (*RuleValue::Function::unary() const)(double) { return d_ptr.unary; } inline double (*RuleValue::Function::binary() const)(double, double) { return d_ptr.binary; } inline RuleValue &RuleValue::push_arg(RuleValue const &value) { d_value.args->push_back(value); return *this; } inline size_t RuleValue::varIdx() const { return d_value.s; } namespace std { ostream &operator<<(ostream &out, RuleValue const &t); } #endif bisonc++-4.13.01/documentation/regression/fun/rulevalue/_operatoradda.cc0000644000175000017500000000051112633316117025207 0ustar frankfrank#include "_rulevalue.ih" RuleValue &RuleValue::operator+=(RuleValue const &other) { if (d_tag == DOUBLE || other.d_tag == DOUBLE) { d_value.d = asDouble() + other.asDouble(); d_tag = DOUBLE; } else { d_value.i = asInt() + other.asInt(); d_tag = INT; } return *this; } bisonc++-4.13.01/documentation/regression/fun/rulevalue/_as.cc0000644000175000017500000000137212633316117023153 0ustar frankfrank#include "_rulevalue.ih" char RuleValue::asChar() const { switch(d_tag) { case CHAR: return d_value.c; case INT: return static_cast(d_value.i); default: return static_cast(d_value.d); } } int RuleValue::asInt() const { switch(d_tag) { case CHAR: return d_value.c; case INT: return d_value.i; default: return static_cast(d_value.d); } } double RuleValue::asDouble() const { switch(d_tag) { case CHAR: return d_value.c; case INT: return d_value.i; default: return d_value.d; } } RuleValue::Function const &RuleValue::fun() const { return *d_value.f; } 
bisonc++-4.13.01/documentation/regression/fun/rulevalue/_operatorinsert.cc0000644000175000017500000000056412633316117025632 0ustar frankfrank#include "_rulevalue.ih" namespace std { ostream &operator<<(ostream &out, RuleValue const &t) { switch (t.tag()) { case RuleValue::CHAR: return out << t.asChar(); case RuleValue::INT: return out << t.asInt(); default: return out << t.asDouble(); } } } bisonc++-4.13.01/documentation/regression/fun/rulevalue/_function.cc0000644000175000017500000000044012633316117024370 0ustar frankfrank#include "_rulevalue.ih" RuleValue::Function::Function(double (*ptr)(double), Type t) : d_type(t), d_arity(1) { d_ptr.unary = ptr; } RuleValue::Function::Function(double (*ptr)(double, double)) : d_type(DOUBLE_IN_DOUBLE_OUT), d_arity(2) { d_ptr.binary = ptr; } bisonc++-4.13.01/documentation/regression/fun/rulevalue/_operatoramod.cc0000644000175000017500000000036012633316117025240 0ustar frankfrank#include "_rulevalue.ih" // Here, the parser makes sure that we're already using integral values RuleValue &RuleValue::operator%=(RuleValue const &other) { d_value.i = asInt() % other.asInt(); d_tag = INT; return *this; } bisonc++-4.13.01/documentation/regression/fun/rulevalue/_operatoraor.cc0000644000175000017500000000036012633316117025101 0ustar frankfrank#include "_rulevalue.ih" // Here, the parser makes sure that we're already using integral values RuleValue &RuleValue::operator|=(RuleValue const &other) { d_value.i = asInt() | other.asInt(); d_tag = INT; return *this; } bisonc++-4.13.01/documentation/regression/fun/rulevalue/_rulevalue.ih0000644000175000017500000000010412633316117024557 0ustar frankfrank#include "_rulevalue.h" #include using namespace std; bisonc++-4.13.01/documentation/regression/fun/rulevalue/_operatorarshift.cc0000644000175000017500000000036212633316117025762 0ustar frankfrank#include "_rulevalue.ih" // Here, the parser makes sure that we're already using integral values RuleValue &RuleValue::operator>>=(RuleValue const &other) { d_value.i = asInt() >> other.asInt(); d_tag = INT; return *this; } bisonc++-4.13.01/documentation/regression/fun/rulevalue/_operatorasub.cc0000644000175000017500000000051112633316117025250 0ustar frankfrank#include "_rulevalue.ih" RuleValue &RuleValue::operator-=(RuleValue const &other) { if (d_tag == DOUBLE || other.d_tag == DOUBLE) { d_value.d = asDouble() - other.asDouble(); d_tag = DOUBLE; } else { d_value.i = asInt() - other.asInt(); d_tag = INT; } return *this; } bisonc++-4.13.01/documentation/regression/fun/rulevalue/_operatornegate.cc0000644000175000017500000000041512633316117025564 0ustar frankfrank#include "_rulevalue.ih" RuleValue RuleValue::operator-() const { switch (d_tag) { case CHAR: return RuleValue(-asChar()); case INT: return RuleValue(-asInt()); default: return RuleValue(-asDouble()); } } bisonc++-4.13.01/documentation/regression/fun/rulevalue/_operatoramul.cc0000644000175000017500000000051112633316117025254 0ustar frankfrank#include "_rulevalue.ih" RuleValue &RuleValue::operator*=(RuleValue const &other) { if (d_tag == DOUBLE || other.d_tag == DOUBLE) { d_value.d = asDouble() * other.asDouble(); d_tag = DOUBLE; } else { d_value.i = asInt() * other.asInt(); d_tag = INT; } return *this; } bisonc++-4.13.01/documentation/regression/fun/rulevalue/_operatoraand.cc0000644000175000017500000000036012633316117025223 0ustar frankfrank#include "_rulevalue.ih" // Here, the parser makes sure that we're already using integral values RuleValue &RuleValue::operator&=(RuleValue const &other) { d_value.i = asInt() & 
other.asInt(); d_tag = INT; return *this; } bisonc++-4.13.01/documentation/regression/fun/rulevalue/_operatoralshift.cc0000644000175000017500000000036212633316117025754 0ustar frankfrank#include "_rulevalue.ih" // Here, the parser makes sure that we're already using integral values RuleValue &RuleValue::operator<<=(RuleValue const &other) { d_value.i = asInt() << other.asInt(); d_tag = INT; return *this; } bisonc++-4.13.01/documentation/regression/fun/rulevalue/_rulevalue1.cc0000644000175000017500000000104412633316117024631 0ustar frankfrank#include "_rulevalue.ih" RuleValue::RuleValue() : d_tag(ERROR) {} RuleValue::RuleValue(char c) : d_tag(CHAR) { d_value.c = c; } RuleValue::RuleValue(int i) : d_tag(INT) { d_value.i = i; } RuleValue::RuleValue(size_t s) : d_tag(VARIABLE) { d_value.s = s; } RuleValue::RuleValue(double d) : d_tag(DOUBLE) { d_value.d = d; } RuleValue::RuleValue(Function const &funRef) : d_tag(FUNCTION) { d_value.f = &funRef; } RuleValue::RuleValue(Args *args) : d_tag(ARG_VECTOR) { d_value.args = args; } bisonc++-4.13.01/documentation/regression/fun/rulevalue/_operatorassign.cc0000644000175000017500000000026512633316117025610 0ustar frankfrank#include "_rulevalue.ih" RuleValue &RuleValue::operator=(RuleValue const &other) { if (this != &other) { destroy(); copy(other); } return *this; } bisonc++-4.13.01/documentation/regression/fun/rulevalue/_operatoraxor.cc0000644000175000017500000000036012633316117025271 0ustar frankfrank#include "_rulevalue.ih" // Here, the parser makes sure that we're already using integral values RuleValue &RuleValue::operator^=(RuleValue const &other) { d_value.i = asInt() ^ other.asInt(); d_tag = INT; return *this; } bisonc++-4.13.01/documentation/regression/fun/rulevalue/_operatoradiv.cc0000644000175000017500000000051112633316117025241 0ustar frankfrank#include "_rulevalue.ih" RuleValue &RuleValue::operator/=(RuleValue const &other) { if (d_tag == DOUBLE || other.d_tag == DOUBLE) { d_value.d = asDouble() / other.asDouble(); d_tag = DOUBLE; } else { d_value.i = asInt() / other.asInt(); d_tag = INT; } return *this; } bisonc++-4.13.01/documentation/regression/fun/rulevalue/_data.cc0000644000175000017500000000032112633316117023452 0ustar frankfrank#include "_rulevalue.ih" char const *RuleValue::s_tagName[] = { "error", // something failed. 
"char", "int", "double", "variable", "function", "argsector", }; bisonc++-4.13.01/documentation/regression/fun/rulevalue/_copy.cc0000644000175000017500000000036412633316117023522 0ustar frankfrank#include "_rulevalue.ih" void RuleValue::copy(RuleValue const &other) { d_tag = other.d_tag; if (d_tag != ARG_VECTOR) d_value = other.d_value; else d_value.args = new std::vector(*other.d_value.args); } bisonc++-4.13.01/documentation/regression/fun/demo.cc0000644000175000017500000000101012633316117021316 0ustar frankfrank#include #include #include "parser/parser.h" using namespace std; int main(int argc, char **argv) { Parser parser; cout << "Enter expressions on separate lines\n" "Enter the command help for an overview of the program's " "features\n" "Use quit, or ^c to end the program\n" "Use any program argument to view parser debug output\n"; parser.setDebug(argc > 1); parser.parse(); return 0; } bisonc++-4.13.01/documentation/regression/aho4.42/0000755000175000017500000000000012633316117020362 5ustar frankfrankbisonc++-4.13.01/documentation/regression/aho4.42/parser/0000755000175000017500000000000012633316117021656 5ustar frankfrankbisonc++-4.13.01/documentation/regression/aho4.42/parser/bgram0000644000175000017500000000013012633316117022663 0ustar frankfrank/* AHO et al, example 4.42 */ %token c d %% S: C C ; C: c C | d ; bisonc++-4.13.01/documentation/regression/aho4.42/parser/grammar0000644000175000017500000000013012633316117023221 0ustar frankfrank/* AHO et al, example 4.42 */ %token c d %% S: C C ; C: c C | d ; bisonc++-4.13.01/documentation/regression/aho4.42/doc0000644000175000017500000000016412633316117021053 0ustar frankfrankThis grammar, given by AHO et al as grammar 4.42 (p. 231), has 7 states, no shift-reduce or reduce-reduce conflict. bisonc++-4.13.01/documentation/regression/naive/0000755000175000017500000000000012633316117020405 5ustar frankfrankbisonc++-4.13.01/documentation/regression/naive/scanner/0000755000175000017500000000000012633316117022036 5ustar frankfrankbisonc++-4.13.01/documentation/regression/naive/scanner/lexer0000644000175000017500000000036712633316117023106 0ustar frankfrank%interactive %filenames scanner %% [ \t]+ // Often used: skip white space \n // same [0-9]+ return Parser::NR; . 
return matched()[0]; bisonc++-4.13.01/documentation/regression/naive/scanner/scanner.ih0000644000175000017500000000006712633316117024014 0ustar frankfrank#include "scanner.h" #include "../parser/Parserbase.h" bisonc++-4.13.01/documentation/regression/naive/scanner/scanner.h0000644000175000017500000000203512633316117023640 0ustar frankfrank// Generated by Flexc++ V0.93.00 on Mon, 20 Feb 2012 10:57:08 +0100 #ifndef Scanner_H_INCLUDED_ #define Scanner_H_INCLUDED_ // $insert baseclass_h #include "scannerbase.h" // $insert classHead class Scanner: public ScannerBase { public: explicit Scanner(std::istream &in = std::cin, std::ostream &out = std::cout); // $insert lexFunctionDecl int lex(); private: int lex__(); int executeAction__(size_t ruleNr); void print(); void preCode(); // re-implement this function for code that must // be exec'ed before the patternmatching starts void postCode(PostEnum__); }; // $insert scannerConstructors inline Scanner::Scanner(std::istream &in, std::ostream &out) : ScannerBase(in, out) {} inline void Scanner::preCode() { // optionally replace by your own code } inline void Scanner::postCode(PostEnum__) {} inline void Scanner::print() { print__(); } #endif // Scanner_H_INCLUDED_ bisonc++-4.13.01/documentation/regression/naive/parser/0000755000175000017500000000000012633316117021701 5ustar frankfrankbisonc++-4.13.01/documentation/regression/naive/parser/bgram0000644000175000017500000000013212633316117022710 0ustar frankfrank %token NR %% lines: lines expr ';' | /* empty */ ; expr: NR | error ; bisonc++-4.13.01/documentation/regression/naive/parser/grammar0000644000175000017500000000034112633316117023250 0ustar frankfrank%scanner ../scanner/scanner.h %token NR %% lines: lines expr ';' | // empty ; expr: NR { std::cout << d_scanner.matched() << '\n'; } | error { std::cout << "oops...\n"; } ; bisonc++-4.13.01/documentation/regression/naive/doc0000644000175000017500000000045412633316117021100 0ustar frankfrank This is a parser for a simple series of ;-separated integral numbers. Any error input is skipped until the next ;-token, after which another number is expected. To end the program, press ctrl-d To view the parser's debugging output, start the program with an argument, e.g., demo x bisonc++-4.13.01/documentation/regression/naive/demo.cc0000644000175000017500000000071412633316117021642 0ustar frankfrank#include #include #include "parser/Parser.h" using namespace std; int main(int argc, char **argv) { cout << "Enter numbers separated by semicolons. 
Blanks and tabs are " "ignored.\n" "^c or ^d ends the program.\n" "Use any program argument to view parser debug output\n"; Parser parser; parser.setDebug(argc > 1); return parser.parse(); } bisonc++-4.13.01/documentation/regression/rr2/0000755000175000017500000000000012633316117020010 5ustar frankfrankbisonc++-4.13.01/documentation/regression/rr2/parser/0000755000175000017500000000000012633316117021304 5ustar frankfrankbisonc++-4.13.01/documentation/regression/rr2/parser/bgram0000644000175000017500000000036612633316117022324 0ustar frankfrank%token DELIM TEXT %% line: line mstring | /* empty */ ; mstring: mstring atom | atom ; atom: opt_delimiter TEXT ; opt_delimiter: opt_delimiter DELIM | /* empty */ ; bisonc++-4.13.01/documentation/regression/rr2/parser/grammar0000644000175000017500000000036612633316117022662 0ustar frankfrank%token DELIM TEXT %% line: line mstring | /* empty */ ; mstring: mstring atom | atom ; atom: opt_delimiter TEXT ; opt_delimiter: opt_delimiter DELIM | /* empty */ ; bisonc++-4.13.01/documentation/regression/rr2/doc0000644000175000017500000000022512633316117020477 0ustar frankfrankThis grammar was produced by several of my students during my 2005-2006 C++ course. Bisonc++ V 0.98 did not properly recognize its two RR conflicts. bisonc++-4.13.01/documentation/regression/conflicts/0000755000175000017500000000000012633316117021267 5ustar frankfrankbisonc++-4.13.01/documentation/regression/conflicts/parser/0000755000175000017500000000000012633316117022563 5ustar frankfrankbisonc++-4.13.01/documentation/regression/conflicts/parser/bgram0000644000175000017500000000007712633316117023602 0ustar frankfrank%token i %% E: i | i | E '+' E | E '*' E ; bisonc++-4.13.01/documentation/regression/conflicts/parser/grammar0000644000175000017500000000007712633316117024140 0ustar frankfrank%token i %% E: i | i | E '+' E | E '*' E ; bisonc++-4.13.01/documentation/regression/conflicts/doc0000644000175000017500000000132312633316117021756 0ustar frankfrankThis exampe features a grammar showing both S/R and R/R conflicts. The grammar produces 4 shift-reduce conflicts and 3 reduce-reduce conflicts. By default, the S/R conflicts are solved using SHIFT, while the R/R conflicts are solved using a reduction to the rule first mentioned in the grammar. Notes on the output generated by bison. Bison states: grammar:7.5: warning: rule never reduced because of conflicts: E: i However, this is probably incorrect considering its own documentation which states the `first rule' preference in case of R/R conflicts. 
Here is the grammar used in this example: %token i %% E: i | i | E '+' E | E '*' E ; bisonc++-4.13.01/documentation/regression/icmake2/0000755000175000017500000000000012633316117020616 5ustar frankfrankbisonc++-4.13.01/documentation/regression/icmake2/parser/0000755000175000017500000000000012633316117022112 5ustar frankfrankbisonc++-4.13.01/documentation/regression/icmake2/parser/bgram0000644000175000017500000005130112633316117023125 0ustar frankfrank%token ARG_HEAD ARG_TAIL ASCII BREAK CHDIR CMD_HEAD CMD_TAIL C_BASE C_EXT C_PATH G_BASE G_EXT G_PATH ELEMENT ELSE EXEC EXECUTE EXISTS EXIT FGETS FIELDS FOR FPRINTF GETENV GETCH GETPID GETS IDENTIFIER IF INT LIST MAKELIST M_ECHO NUMBER PRINTF PUTENV RETURN SIZEOFLIST STAT STRING STRINGTYPE STRLEN STRLWR STRUPR STRFIND SUBSTR SYSTEM VOID WHILE %right '=' AND_IS /* binary-assignment */ OR_IS XOR_IS SHL_IS SHR_IS DIV_IS /* arithmetic assignment */ MINUS_IS MUL_IS MOD_IS PLUS_IS %left OR %left AND %left '|' %left '^' %left '&' %left EQUAL NOT_EQUAL %left '<' '>' SMALLER_EQUAL GREATER_EQUAL OLDER YOUNGER %left SHL SHR %left '+' '-' %left '*' '/' '%' %right '!' INC DEC '~' %left '[' %expect 1 /* Grammar Rules */ %% input: input def_var_or_fun | def_var_or_fun ; /* A */ args: args comma err_expression { $$ = *multargs(&$1, &$3); } | err_expression { $$ = *firstarg(&$1); } ; /* B */ break_ok: { break_ok++; } ; break_stat: BREAK { $$ = *break_stmnt(); } ; /* C */ casttype: INT | LIST | STRINGTYPE ; backtick: {parse_error = err_backtick_expected; } '`' ; closebrace: {parse_error = err_closebrace_expected; } '}' ; closepar: {parse_error = err_closepar_expected; } ')' ; comma: {parse_error = err_comma_expected; } ',' ; comma_arglist: ',' args { $$ = $2; } | zeroframe ; comma_expr: ',' err_expression { $$ = $2; } | zeroframe ; compound: '{' /* } (for matching) */ statements closebrace { $$ = $2; } ; /* D */ def_var_or_fun: opt_vartype var_or_fun | voidtype funcdef ; /* E */ else_tail: ELSE statement { $$ = $2; } | zeroframe ; enterid: IDENTIFIER { entervar(); } ; entervarid: enterid { $$ = fetchvar(); } ; err_expression: { parse_error = err_in_expression; } expression { $$ = $2; } ; expression: expression '=' expression { $$ = *assign(&$1, &$3); } | expression '[' expression ']' { $$ = *indexOp(&$1, &$3); } | expression MUL_IS expression { $$ = *math_ass(&$1, &$3, multiply, "*="); } | expression DIV_IS expression { $$ = *math_ass(&$1, &$3, divide, "/="); } | expression MOD_IS expression { $$ = *math_ass(&$1, &$3, modulo, "%="); } | expression PLUS_IS expression { $$ = *math_ass(&$1, &$3, addition, "+="); } | expression MINUS_IS expression { $$ = *math_ass(&$1, &$3, subtract, "-="); } | expression AND_IS expression { $$ = *math_ass(&$1, &$3, band, "&="); } | expression OR_IS expression { $$ = *math_ass(&$1, &$3, bor, "|="); } | expression XOR_IS expression { $$ = *math_ass(&$1, &$3, xor, "^="); } | expression SHL_IS expression { $$ = *math_ass(&$1, &$3, shl, "<<="); } | expression SHR_IS expression { $$ = *math_ass(&$1, &$3, shr, ">>="); } | expression OR expression { $$ = *or_boolean(&$1, &$3); } | expression AND expression { $$ = *and_boolean(&$1, &$3); } | expression EQUAL expression { $$ = *equal(&$1, &$3); } | expression NOT_EQUAL expression { $$ = *unequal(&$1, &$3); } | expression '<' expression { $$ = *smaller(&$1, &$3); } | expression '>' expression { $$ = *greater(&$1, &$3); } | expression SMALLER_EQUAL expression { $$ = *sm_equal(&$1, &$3); } | expression GREATER_EQUAL expression { $$ = *gr_equal(&$1, &$3); } | expression '+' expression { $$ = 
*addition(&$1, &$3); } | expression '&' expression { $$ = *band(&$1, &$3); } | expression '|' expression { $$ = *bor(&$1, &$3); } | expression '^' expression { $$ = *xor(&$1, &$3); } | expression SHL expression { $$ = *shl(&$1, &$3); } | expression SHR expression { $$ = *shr(&$1, &$3); } | expression '-' expression { $$ = *subtract(&$1, &$3); } | expression '*' expression { $$ = *multiply(&$1, &$3); } | expression YOUNGER expression { $$ = *young(&$1, &$3); } | expression OLDER expression { $$ = *old(&$1, &$3); } | expression '/' expression { $$ = *divide(&$1, &$3); } | expression '%' expression { $$ = *modulo(&$1, &$3); } | '-' expression %prec '!' { $$ = *negate(&$2); } | INC expression { $$ = *incdec(pre_op, op_inc, &$2); } | expression INC { $$ = *incdec(post_op, op_inc, &$1); } | DEC expression { $$ = *incdec(pre_op, op_dec, &$2); } | expression DEC { $$ = *incdec(post_op, op_dec, &$1); } | '+' expression %prec '!' { $$ = $2; } | '~' expression %prec '!' { $$ = *bnot(&$2); } | '!' expression { $$ = *not_boolean(&$2); } | '(' casttype ')' expression %prec '!' { $$ = *cast($2.type, &$4); } | string { $$ = stackframe(e_str | e_const); } | NUMBER { $$ = stackframe(e_int | e_const); } | '(' expression closepar { $$ = $2; } | func_or_var | '`' expression backtick { $$ = *onearg(f_backtick, &$2); } ; expr_code: err_expression { $$ = *expr_stmnt(&$1); } ; expr_list: expr_list ',' expr_code { $$ = *catcode(&$1, &$3); } | expr_code ; /* F */ for: FOR nesting ; for_stat: for openpar opt_expr_list semicol opt_expression semicol opt_expr_list closepar break_ok statement popdead { $$ = *for_stmnt(&$3, &$5, &$7, &$10); } ; funcdef: funid funvars /* returns init code */ statements closebrace { close_fun(&$3); } ; func_or_var: function closepar | IDENTIFIER { $$ = fetchvar(); } ; function: zero_arg_funs /* getch() or gets() */ openpar { $$ = *zeroargs($1.type); } | one_arg_funs openpar err_expression { $$ = *onearg($1.type, &$3); } | two_arg_funs openpar err_expression comma err_expression { $$ = *twoargs($1.type, &$3, &$5); } | three_arg_funs openpar err_expression comma err_expression comma err_expression { $$ = *threeargs($1.type, &$3, &$5, &$7); } | optint_string /* CHDIR, SYSTEM, STAT */ openpar err_expression /* int inserted if string */ comma_expr /* may be string if first == int */ { $$ = *optint_string($1.type, &$3, &$4); } | optint_special /* EXEC, EXECUTE */ openpar /* alternatives: */ err_expression /* fun(int, string, ...) */ comma_arglist /* fun(string, ...) 
*/ { $$ = *optint_special($1.type, &$3, &$4); } | PRINTF openpar args /* first may be anything */ { $$ = *specials(f_printf, &$3); } | FPRINTF openpar args /* argcount >= 2 required */ { $$ = *exec_fprintf($1.type, &$3); } | funname openpar opt_arglist { $$ = *callfun($1.evalue, &$3); } | makelist ; funid: IDENTIFIER { open_fun(); } ; funname: IDENTIFIER { $$.evalue = fetchfun(); } ; funvars: openpar opt_parlist ')' openbrace opt_locals { make_frame(); outbin($5.code, $5.codelen); } ; /* G */ /* H */ /* I */ idexpr: enterid zeroframe /* no explicit initialization */ | entervarid initassign expression { initialization = 0; $$ = *expr_stmnt(assign(&$1, &$3)); /* explicit initialization */ } ; if: IF nesting ; if_stat: if openpar err_expression closepar statement popdead pushdead else_tail popdead { $$ = *if_stmnt(&$3, &$5, &$8); } ; initassign: '=' { initialization = 1; } /* J */ /* K */ /* L */ leave_key: RETURN | EXIT ; local_list: type_of_var vardefs /* + semicol, initialization code */ { $$ = $2; } ; locals: locals local_list /* type + variables */ { $$ = *catcode(&$1, &$2); /* cat initialization code */ } | local_list /* initialization code of 1st var */ ; /* M */ makelist: /* makelist(expr) */ makelist_expr makelist_normal /* returns O_FILE expression */ { $$ = *makelist ( multargs ( firstarg(&$2), /* O_FILE is passed */ &$1 /* expression is passed */ ), op_hlt /* not op_younger or op_older */ ); } | /* makelist(expr, expr) */ makelist_expr comma err_expression { $$ = *makelist ( multargs ( firstarg(&$1), /* fileattribute is passed */ &$3 /* expression is passed */ ), op_hlt /* not op_younger or op_older */ ); } | makelist_expr /* makelist(expr, older, expr) */ comma older_younger comma err_expression makelist_normal { $$ = *makelist ( multargs ( multargs ( firstarg(&$6), /* O_FILE is passed */ &$1 /* 1st expression is passed */ ), &$5 /* 2nd expression is passed */ ), $3.type /* older/younger */ ); } | makelist_expr /* makelist(expr, expr, older, expr) */ comma err_expression comma older_younger comma err_expression { $$ = *makelist ( multargs ( multargs ( firstarg(&$1), /* attribute is passed */ &$3 /* 2nd expression is passed */ ), &$7 /* 3rd expression is passed */ ), $5.type /* older/younger */ ); } ; makelist_expr: MAKELIST openpar err_expression { $$ = $3; } ; makelist_normal: { $$ = stackframe(e_int | e_const); $$.evalue = O_FILE; } ; /* N */ nesting: pushdead { nestlevel++; } ; /* O */ ok: ';' { yyerrok; } ; older_younger: {parse_error = err_older_younger; } old_young { $$ = $2; } ; old_young: OLDER | YOUNGER ; one_arg_funs: ASCII | SIZEOFLIST | EXISTS | M_ECHO | CMD_TAIL | CMD_HEAD | ARG_HEAD | ARG_TAIL | G_BASE | G_PATH | G_EXT | PUTENV | GETENV | STRLEN | STRUPR | STRLWR ; openpar: {parse_error = err_openpar_expected; } '(' ; openbrace: {parse_error = err_openbrace_expected; } '{' ; /* } for matching */ opt_arglist: args | zeroframe ; opt_expression: err_expression | { $$ = stackframe(e_int | e_const); $$.evalue = 1; } ; opt_expr_list: expr_list | zeroframe ; optint_special: EXEC /* optional int allowed */ | EXECUTE ; optint_string: STAT | CHDIR | SYSTEM ; opt_locals: locals /* initialization code */ | zeroframe /* empty init. 
code */ ; opt_parlist: pars | ; opt_vartype: type_of_var | { vartype = e_int; } ; /* P */ pars: pars comma partype | partype ; partype: type_of_var enterid { n_params++; } ; popdead: { pop_dead(); } ; pushdead: { push_dead(); /* set new dead-level */ } ; /* Q */ /* R */ return_stat: leave_key return_tail { $$ = *return_stmnt($1.type, &$2); } ; return_tail: err_expression | zeroframe ; /* S */ semicol: {parse_error = err_semicol_expected; } ';' ; statement: stm { sem_err = 0; } ; statements: statements statement { $$ = *cat_stmnt(&$1, &$2); } | zeroframe ; stm: compound | ';' zeroframe { $$ = $1; } | expr_code semicol | while_stat | if_stat | for_stat | return_stat semicol | break_stat semicol | error ok ; string: string STRING { stringbuf = xstrcat(stringbuf, string);/* catenate the new string */ } | STRING { free(stringbuf); /* free former string */ stringbuf = xstrdup(string); /* duplicate initial string */ } ; /* T */ two_arg_funs: C_EXT /* string, string */ | C_BASE | C_PATH | ELEMENT /* int, list | int, string */ | FGETS /* list fgets(string, int) */ | FIELDS /* string, string */ | STRFIND /* string, string */ ; three_arg_funs: SUBSTR ; type_of_var: vartype { parse_error = err_identifier_expected; vartype = $1.type; } ; /* U */ /* V */ vardefs: varnames semicol { $$ = $1; /* initialization code */ } ; varnames: varnames comma idexpr { $$ = *catcode(&$1, &$3); /* catenate variable */ /* initialization code */ } | idexpr | error ok zeroframe /* Empty stmnt */ { $$ = $3; } ; var_or_fun: vardefs { global_init = *catcode(&global_init, &$1); } | funcdef ; vartype: INT | STRINGTYPE | LIST ; voidtype: VOID { vartype = 0; } ; /* W */ while: WHILE nesting ; while_stat: while openpar err_expression closepar break_ok statement popdead { $$ = *while_stmnt(&$3, &$6); } ; /* X */ /* Y */ /* Z */ zero_arg_funs: GETCH | GETPID | GETS ; zeroframe: { $$ = stackframe(0); } ; %% int yywrap() { return (1); } bisonc++-4.13.01/documentation/regression/icmake2/parser/grammar0000644000175000017500000005132612633316117023472 0ustar frankfrank%token ARG_HEAD ARG_TAIL ASCII BREAK CHDIR CMD_HEAD CMD_TAIL C_BASE C_EXT C_PATH G_BASE G_EXT G_PATH ELEMENT ELSE EXEC EXECUTE EXISTS EXIT FGETS FIELDS FOR FPRINTF GETENV GETCH GETPID GETS IDENTIFIER IF INT LIST MAKELIST M_ECHO NUMBER PRINTF PUTENV RETURN SIZEOFLIST STAT STRING STRINGTYPE STRLEN STRLWR STRUPR STRFIND SUBSTR SYSTEM VOID WHILE %right '=' AND_IS /* binary-assignment */ OR_IS XOR_IS SHL_IS SHR_IS DIV_IS /* arithmetic assignment */ MINUS_IS MUL_IS MOD_IS PLUS_IS %left OR %left AND %left '|' %left '^' %left '&' %left EQUAL NOT_EQUAL %left '<' '>' SMALLER_EQUAL GREATER_EQUAL OLDER YOUNGER %left SHL SHR %left '+' '-' %left '*' '/' '%' %right '!' 
INC DEC '~' %left '[' %expect 1 /* Grammar Rules */ %% input: input def_var_or_fun | def_var_or_fun ; /* A */ args: args comma err_expression { $$ = *multargs(&$1, &$3); } | err_expression { $$ = *firstarg(&$1); } ; /* B */ break_ok: { break_ok++; } ; break_stat: BREAK { $$ = *break_stmnt(); } ; /* C */ casttype: INT | LIST | STRINGTYPE ; backtick: {parse_error = err_backtick_expected; } '`' ; closebrace: {parse_error = err_closebrace_expected; } '}' ; closepar: {parse_error = err_closepar_expected; } ')' ; comma: {parse_error = err_comma_expected; } ',' ; comma_arglist: ',' args { $$ = $2; } | zeroframe ; comma_expr: ',' err_expression { $$ = $2; } | zeroframe ; compound: '{' /* } (for matching) */ statements closebrace { $$ = $2; } ; /* D */ def_var_or_fun: opt_vartype var_or_fun | voidtype funcdef ; /* E */ else_tail: ELSE statement { $$ = $2; } | zeroframe ; enterid: IDENTIFIER { entervar(); } ; entervarid: enterid { $$ = fetchvar(); } ; err_expression: { parse_error = err_in_expression; } expression { $$ = $2; } ; expression: expression '=' expression { $$ = *assign(&$1, &$3); } | expression '[' expression ']' { $$ = *indexOp(&$1, &$3); } | expression MUL_IS expression { $$ = *math_ass(&$1, &$3, multiply, "*="); } | expression DIV_IS expression { $$ = *math_ass(&$1, &$3, divide, "/="); } | expression MOD_IS expression { $$ = *math_ass(&$1, &$3, modulo, "%="); } | expression PLUS_IS expression { $$ = *math_ass(&$1, &$3, addition, "+="); } | expression MINUS_IS expression { $$ = *math_ass(&$1, &$3, subtract, "-="); } | expression AND_IS expression { $$ = *math_ass(&$1, &$3, band, "&="); } | expression OR_IS expression { $$ = *math_ass(&$1, &$3, bor, "|="); } | expression XOR_IS expression { $$ = *math_ass(&$1, &$3, xor, "^="); } | expression SHL_IS expression { $$ = *math_ass(&$1, &$3, shl, "<<="); } | expression SHR_IS expression { $$ = *math_ass(&$1, &$3, shr, ">>="); } | expression OR expression { $$ = *or_boolean(&$1, &$3); } | expression AND expression { $$ = *and_boolean(&$1, &$3); } | expression EQUAL expression { $$ = *equal(&$1, &$3); } | expression NOT_EQUAL expression { $$ = *unequal(&$1, &$3); } | expression '<' expression { $$ = *smaller(&$1, &$3); } | expression '>' expression { $$ = *greater(&$1, &$3); } | expression SMALLER_EQUAL expression { $$ = *sm_equal(&$1, &$3); } | expression GREATER_EQUAL expression { $$ = *gr_equal(&$1, &$3); } | expression '+' expression { $$ = *addition(&$1, &$3); } | expression '&' expression { $$ = *band(&$1, &$3); } | expression '|' expression { $$ = *bor(&$1, &$3); } | expression '^' expression { $$ = *xor(&$1, &$3); } | expression SHL expression { $$ = *shl(&$1, &$3); } | expression SHR expression { $$ = *shr(&$1, &$3); } | expression '-' expression { $$ = *subtract(&$1, &$3); } | expression '*' expression { $$ = *multiply(&$1, &$3); } | expression YOUNGER expression { $$ = *young(&$1, &$3); } | expression OLDER expression { $$ = *old(&$1, &$3); } | expression '/' expression { $$ = *divide(&$1, &$3); } | expression '%' expression { $$ = *modulo(&$1, &$3); } | '-' expression %prec '!' { $$ = *negate(&$2); } | INC expression { $$ = *incdec(pre_op, op_inc, &$2); } | expression INC { $$ = *incdec(post_op, op_inc, &$1); } | DEC expression { $$ = *incdec(pre_op, op_dec, &$2); } | expression DEC { $$ = *incdec(post_op, op_dec, &$1); } | '+' expression %prec '!' { $$ = $2; } | '~' expression %prec '!' { $$ = *bnot(&$2); } | '!' expression { $$ = *not_boolean(&$2); } | '(' casttype ')' expression %prec '!' 
{ $$ = *cast($2.type, &$4); } | string { $$ = stackframe(e_str | e_const); } | NUMBER { $$ = stackframe(e_int | e_const); } | '(' expression closepar { $$ = $2; } | func_or_var | '`' expression backtick { $$ = *onearg(f_backtick, &$2); } ; expr_code: err_expression { $$ = *expr_stmnt(&$1); } ; expr_list: expr_list ',' expr_code { $$ = *catcode(&$1, &$3); } | expr_code ; /* F */ for: FOR nesting ; for_stat: for openpar opt_expr_list semicol opt_expression semicol opt_expr_list closepar break_ok statement popdead { $$ = *for_stmnt(&$3, &$5, &$7, &$10); } ; funcdef: funid funvars /* returns init code */ statements closebrace { close_fun(&$3); } ; func_or_var: function closepar | IDENTIFIER { $$ = fetchvar(); } ; function: zero_arg_funs /* getch() or gets() */ openpar { $$ = *zeroargs($1.type); } | one_arg_funs openpar err_expression { $$ = *onearg($1.type, &$3); } | two_arg_funs openpar err_expression comma err_expression { $$ = *twoargs($1.type, &$3, &$5); } | three_arg_funs openpar err_expression comma err_expression comma err_expression { $$ = *threeargs($1.type, &$3, &$5, &$7); } | optint_string /* CHDIR, SYSTEM, STAT */ openpar err_expression /* int inserted if string */ comma_expr /* may be string if first == int */ { $$ = *optint_string($1.type, &$3, &$4); } | optint_special /* EXEC, EXECUTE */ openpar /* alternatives: */ err_expression /* fun(int, string, ...) */ comma_arglist /* fun(string, ...) */ { $$ = *optint_special($1.type, &$3, &$4); } | PRINTF openpar args /* first may be anything */ { $$ = *specials(f_printf, &$3); } | FPRINTF openpar args /* argcount >= 2 required */ { $$ = *exec_fprintf($1.type, &$3); } | funname openpar opt_arglist { $$ = *callfun($1.evalue, &$3); } | makelist ; funid: IDENTIFIER { open_fun(); } ; funname: IDENTIFIER { $$.evalue = fetchfun(); } ; funvars: openpar opt_parlist ')' openbrace opt_locals { make_frame(); outbin($5.code, $5.codelen); } ; /* G */ /* H */ /* I */ idexpr: enterid zeroframe /* no explicit initialization */ | entervarid initassign expression { initialization = 0; $$ = *expr_stmnt(assign(&$1, &$3)); /* explicit initialization */ } ; if: IF nesting ; if_stat: if openpar err_expression closepar statement popdead pushdead else_tail popdead { $$ = *if_stmnt(&$3, &$5, &$8); } ; initassign: '=' { initialization = 1; } ; /* J */ /* K */ /* L */ leave_key: RETURN | EXIT ; local_list: type_of_var vardefs /* + semicol, initialization code */ { $$ = $2; } ; locals: locals local_list /* type + variables */ { $$ = *catcode(&$1, &$2); /* cat initialization code */ } | local_list /* initialization code of 1st var */ ; /* M */ makelist: /* makelist(expr) */ makelist_expr makelist_normal /* returns O_FILE expression */ { $$ = *makelist ( multargs ( firstarg(&$2), /* O_FILE is passed */ &$1 /* expression is passed */ ), op_hlt /* not op_younger or op_older */ ); } | /* makelist(expr, expr) */ makelist_expr comma err_expression { $$ = *makelist ( multargs ( firstarg(&$1), /* fileattribute is passed */ &$3 /* expression is passed */ ), op_hlt /* not op_younger or op_older */ ); } | makelist_expr /* makelist(expr, older, expr) */ comma older_younger comma err_expression makelist_normal { $$ = *makelist ( multargs ( multargs ( firstarg(&$6), /* O_FILE is passed */ &$1 /* 1st expression is passed */ ), &$5 /* 2nd expression is passed */ ), $3.type /* older/younger */ ); } | makelist_expr /* makelist(expr, expr, older, expr) */ comma err_expression comma older_younger comma err_expression { $$ = *makelist ( multargs ( multargs ( firstarg(&$1), /* attribute 
is passed */ &$3 /* 2nd expression is passed */ ), &$7 /* 3rd expression is passed */ ), $5.type /* older/younger */ ); } ; makelist_expr: MAKELIST openpar err_expression { $$ = $3; } ; makelist_normal: { $$ = stackframe(e_int | e_const); $$.evalue = O_FILE; } ; /* N */ nesting: pushdead { nestlevel++; } ; /* O */ ok: ';' { yyerrok; } ; older_younger: {parse_error = err_older_younger; } old_young { $$ = $2; } ; old_young: OLDER | YOUNGER ; one_arg_funs: ASCII | SIZEOFLIST | EXISTS | M_ECHO | CMD_TAIL | CMD_HEAD | ARG_HEAD | ARG_TAIL | G_BASE | G_PATH | G_EXT | PUTENV | GETENV | STRLEN | STRUPR | STRLWR ; openpar: {parse_error = err_openpar_expected; } '(' ; openbrace: {parse_error = err_openbrace_expected; } '{' ; /* } for matching */ opt_arglist: args | zeroframe ; opt_expression: err_expression | { $$ = stackframe(e_int | e_const); $$.evalue = 1; } ; opt_expr_list: expr_list | zeroframe ; optint_special: EXEC /* optional int allowed */ | EXECUTE ; optint_string: STAT | CHDIR | SYSTEM ; opt_locals: locals /* initialization code */ | zeroframe /* empty init. code */ ; opt_parlist: pars | ; opt_vartype: type_of_var | { vartype = e_int; } ; /* P */ pars: pars comma partype | partype ; partype: type_of_var enterid { n_params++; } ; popdead: { pop_dead(); } ; pushdead: { push_dead(); /* set new dead-level */ } ; /* Q */ /* R */ return_stat: leave_key return_tail { $$ = *return_stmnt($1.type, &$2); } ; return_tail: err_expression | zeroframe ; /* S */ semicol: {parse_error = err_semicol_expected; } ';' ; statement: stm { sem_err = 0; } ; statements: statements statement { $$ = *cat_stmnt(&$1, &$2); } | zeroframe ; stm: compound | ';' zeroframe { $$ = $1; } | expr_code semicol | while_stat | if_stat | for_stat | return_stat semicol | break_stat semicol | error ok ; string: string STRING { stringbuf = xstrcat(stringbuf, string);/* catenate the new string */ } | STRING { free(stringbuf); /* free former string */ stringbuf = xstrdup(string); /* duplicate initial string */ } ; /* T */ two_arg_funs: C_EXT /* string, string */ | C_BASE | C_PATH | ELEMENT /* int, list | int, string */ | FGETS /* list fgets(string, int) */ | FIELDS /* string, string */ | STRFIND /* string, string */ ; three_arg_funs: SUBSTR ; type_of_var: vartype { parse_error = err_identifier_expected; vartype = $1.type; } ; /* U */ /* V */ vardefs: varnames semicol { $$ = $1; /* initialization code */ } ; varnames: varnames comma idexpr { $$ = *catcode(&$1, &$3); /* catenate variable */ /* initialization code */ } | idexpr | error ok zeroframe /* Empty stmnt */ { $$ = $3; } ; var_or_fun: vardefs { global_init = *catcode(&global_init, &$1); } | funcdef ; vartype: INT | STRINGTYPE | LIST ; voidtype: VOID { vartype = 0; } ; /* W */ while: WHILE nesting ; while_stat: while openpar err_expression closepar break_ok statement popdead { $$ = *while_stmnt(&$3, &$6); } ; /* X */ /* Y */ /* Z */ zero_arg_funs: GETCH | GETPID | GETS ; zeroframe: { $$ = stackframe(0); } ; %% int yywrap() { return (1); } bisonc++-4.13.01/documentation/regression/icmake2/doc0000644000175000017500000000011112633316117021277 0ustar frankfrankThe full icmake V 7.00 grammar, on which bisonc++ before V 1.5.0 choked. 
bisonc++-4.13.01/documentation/regression/simplecalc/0000755000175000017500000000000012633316117021417 5ustar frankfrankbisonc++-4.13.01/documentation/regression/simplecalc/scanner/0000755000175000017500000000000012633316117023050 5ustar frankfrankbisonc++-4.13.01/documentation/regression/simplecalc/scanner/lexer0000644000175000017500000000030712633316117024112 0ustar frankfrank%interactive %filenames scanner %% [[:space:]]+ // skip white space [0-9]+ return Parser::NUMBER; . return matched()[0]; bisonc++-4.13.01/documentation/regression/simplecalc/scanner/scanner.ih0000644000175000017500000000007012633316117025020 0ustar frankfrank#include "scanner.h" #include "../parser/parserbase.h" bisonc++-4.13.01/documentation/regression/simplecalc/scanner/scanner.h0000644000175000017500000000203612633316117024653 0ustar frankfrank// Generated by Flexc++ V0.93.00 on Mon, 20 Feb 2012 11:05:47 +0100 #ifndef Scanner_H_INCLUDED_ #define Scanner_H_INCLUDED_ // $insert baseclass_h #include "scannerbase.h" // $insert classHead class Scanner: public ScannerBase { public: explicit Scanner(std::istream &in = std::cin, std::ostream &out = std::cout); // $insert lexFunctionDecl int lex(); private: int lex__(); int executeAction__(size_t ruleNr); void print(); void preCode(); // re-implement this function for code that must // be exec'ed before the patternmatching starts void postCode(PostEnum__); }; // $insert scannerConstructors inline Scanner::Scanner(std::istream &in, std::ostream &out) : ScannerBase(in, out) {} inline void Scanner::preCode() { // optionally replace by your own code } inline void Scanner::postCode(PostEnum__) {} inline void Scanner::print() { print__(); } #endif // Scanner_H_INCLUDED_ bisonc++-4.13.01/documentation/regression/simplecalc/parser/0000755000175000017500000000000012633316117022713 5ustar frankfrankbisonc++-4.13.01/documentation/regression/simplecalc/parser/bgram0000644000175000017500000000071412633316117023730 0ustar frankfrank%token NUMBER %% lines: lines lineprompt | prompt ; lineprompt: line prompt ; line: expression | 'q' { std::cout << "Done\n"; ACCEPT(); } | error ; expression: number '+' number '=' { std::cout << $1 << " + " << $3 << " = " << $1 + $3 << "\n"; } ; number: NUMBER { $$ = atoi(d_scanner.YYText()); } ; prompt: { std::cout << "? "; } ; bisonc++-4.13.01/documentation/regression/simplecalc/parser/parser.h0000644000175000017500000000216612633316117024365 0ustar frankfrank// Generated by Bisonc++ V2.4.1 on Thu Dec 20 13:17:46 2007 +0100 #ifndef Parser_h_included #define Parser_h_included // $insert baseclass #include "parserbase.h" // $insert scanner.h #include "../scanner/scanner.h" #undef Parser class Parser: public ParserBase { // $insert scannerobject Scanner d_scanner; public: int parse(); private: void error(char const *msg); // called on (syntax) errors int lex(); // returns the next token from the // lexical scanner. 
void print(); // use, e.g., d_token, d_loc // support functions for parse(): void executeAction(int ruleNr); void errorRecovery(); int lookup(bool recovery); void nextToken(); void print__(); void exceptionHandler__(std::exception const &exc); }; inline void Parser::error(char const *msg) { std::cerr << msg << '\n'; } // $insert lex inline int Parser::lex() { return d_scanner.lex(); } inline void Parser::print() // use d_token, d_loc { print__(); } #endif bisonc++-4.13.01/documentation/regression/simplecalc/parser/grammar0000644000175000017500000000105112633316117024261 0ustar frankfrank%filenames parser %scanner ../scanner/scanner.h %token NUMBER %% lines: lines lineprompt | prompt ; lineprompt: line prompt ; line: expression | 'q' { std::cout << "Done\n"; ACCEPT(); } | error ; expression: number '+' number '=' { std::cout << " " << $1 << " + " << $3 << " = " << $1 + $3 << "\n"; } ; number: NUMBER { $$ = atoi(d_scanner.matched().c_str()); } ; prompt: { std::cout << "? "; } ; bisonc++-4.13.01/documentation/regression/simplecalc/parser/parser.ih0000644000175000017500000000102612633316117024530 0ustar frankfrank// Generated by Bisonc++ V2.4.1 on Thu Dec 20 13:17:46 2007 +0100 // Include this file in the sources of the class Parser. // $insert class.h #include "parser.h" // Add below here any includes etc. that are only // required for the compilation of Parser's sources. #include // UN-comment the next using-declaration if you want to use // symbols from the namespace std without specifying std:: //using namespace std; inline void Parser::exceptionHandler__(std::exception const &exc) { throw; } bisonc++-4.13.01/documentation/regression/simplecalc/doc0000644000175000017500000000037312633316117022112 0ustar frankfrankThis simple calculator adds two integral values. Each expression must be terminated by =, for example: 3 + 5 = To end the program, enter q Error recovery is available. When making a syntax error the parser allows you to enter another expressionbisonc++-4.13.01/documentation/regression/simplecalc/demo.cc0000644000175000017500000000112012633316117022644 0ustar frankfrank/* demo.cc */ #include #include "parser/parser.h" using namespace std; int main(int argc, char **argv) { cout << "Enter lines containing NR + NR =\n" "The numbers should be integral numbers.\n" "Other non-empty lines should result in a `syntax error'\n" "Blanks and tabs are ignored. Use q, ^c or ^d to end the program\n" "Use any program argument to view parser debug output\n"; Parser parser; parser.setDebug(argc > 1); cout << "Parser returns " << parser.parse() << endl; return 0; } bisonc++-4.13.01/documentation/regression/run0000755000175000017500000001246112633316117020041 0ustar frankfrank#!/bin/bash # Modifiy the COMMAND variable to your taste, if your shell isn't # mentioned. By specifying your COMMAND-shell last it will be used; COMMAND="/usr/bin/tcsh -f" #COMMAND=/bin/bash # Assuming bisonc++ is in your computer's search-path. If not, define # BISONCPP as the full path to bisconc++: #BISONCPP=/home/frank/git/bisonc++/src/bisonc++/tmp/bin/binary BISONCPP=bisonc++ # UNCOMMENT the following variables if you want to run the examples from # the source distribution's documentation/regression directory rather than # from bisonc++ documentation's `regression' subdirectory. 
SKEL=../../../../skeletons BISONCPP="../../../../tmp/bin/binary -S ${SKEL}" example() { let EXAMPLE=${EXAMPLE}+1 orgdir=`pwd` echo case $2 in ("go") ;; (*) return 0 ;; esac cd $1 cwd=`pwd` echo -------------------------------- echo cat doc echo echo '(waiting for the compilation to complete ...)' echo -------------------------------- cd parser echo $BISONCPP --construction $3 grammar $BISONCPP --construction --debug $3 grammar [ -e /usr/bin/bison -a -e bgram ] && bison -v bgram echo if [ -s ../demo.cc ] then cd ../scanner flexc++ lexer cd .. g++ --std=c++0x -Wall -o demo *.cc */*.cc echo "ENTERING A SHELL: \`demo' runs the program, use \`exit' to return" echo " (the grammar analysis is in the \`parser' subdirectory)" else echo "ENTERING A SHELL: Inspect the results, use \`exit' to return" fi echo "bison's output is in \`bgram.output', bisonc++'s output in \`grammar.output'" echo $COMMAND cd $cwd # the doc-test is a safequard agains accidentally removing files [ -s doc ] && \ find ./ -type f \ \( \ -name Parser*h -or \ -name parserbase.h -or \ -name scannerbase.h \ \) \ -exec rm '{}' ';' && \ find ./ -type f \ -not -regex '.*/_.*' \ -not -name doc \ -not -name demo.cc \ -not -name bgram \ -not -name grammar \ -not -regex '.*/*h' \ -not -regex '.*/dallas.*' \ -not -name input \ -not -name lexer \ -exec rm '{}' ';' cd $orgdir tput clear } readRUN() { read RUN if [ "$RUN" == "" ] then RUN=go else RUN=skip tput clear fi } tput clear echo " This script feeds several grammars to bisonc++. Some grammars allow you to execute a little demo-program. Some examples do not have demo programs. All grammars are also fed to bison \(if existing\), producing their output on a file \`bgram.output' Bisonc++'s output is provided in the file \`grammar.output' From the various test/parser directories, bisonc++ should be accessible as $BISONCPP If that's not true for you, consider changing the BISONCPP variable in this script. With each example, hitting a plain Enter creates the parser and optionally builds the demo-program Note that bison always defines one additional state compared with bisonc++. Bison accepts its input in a separate state, whereas bisonc++ accepts when is seen in combination with the reduction of the the augmented grammar rule G* -> G . Bisonc++ will not execute an action here, but that should be ok, since the grammar specification does not make G* -> G visible, so no action can be associated with its reduction anyway. [press Enter to continue] " read RUN tput clear echo EXAMPLE=1 PRE="Enter x to skip; a plain Enter to run; ^c to end this script" echo $EXAMPLE: AHO Example 4.42, p. 
231 echo $PRE readRUN example aho4.42 $RUN echo $EXAMPLE: two R/R conflicts echo $PRE readRUN example rr2 $RUN echo $EXAMPLE: the dangling-else conflict echo $PRE readRUN example danglingelse $RUN echo $EXAMPLE: S/R and R/R conflicts echo $PRE readRUN example conflicts $RUN echo $EXAMPLE: not derivable sentence echo $PRE readRUN example nosentence $RUN echo $EXAMPLE: a reduced icmake V 7.00 grammar echo $PRE readRUN example icmake1 $RUN echo $EXAMPLE: the full icmake V 7.00 grammar echo $PRE readRUN example icmake2 $RUN echo $EXAMPLE: using an error-production echo $PRE readRUN example error $RUN echo "$EXAMPLE: Simple ;-separated list of numbers and error recovery" echo $PRE readRUN example naive $RUN echo $EXAMPLE: adding two integral values echo $PRE readRUN example simplecalc $RUN echo $EXAMPLE: using the location stack echo $PRE readRUN example location $RUN echo $EXAMPLE: the man-page calculator echo $PRE readRUN example calculator $RUN echo $EXAMPLE: a calculator from the C++ Annotations echo $PRE readRUN example annotations $RUN echo $EXAMPLE: an extensive calculator supporting functions echo $PRE readRUN example fun $RUN echo $EXAMPLE: an example of polymorphic semantic values echo $PRE readRUN example polymorphic $RUN --insert-stype echo $EXAMPLE: a grammar in which reduces precede shifts echo $PRE readRUN example mandayam $RUN tput clear echo " END OF SCRIPT " bisonc++-4.13.01/documentation/regression/notused/0000755000175000017500000000000012633316117020764 5ustar frankfrankbisonc++-4.13.01/documentation/regression/notused/parser/0000755000175000017500000000000012633316117022260 5ustar frankfrankbisonc++-4.13.01/documentation/regression/notused/parser/grammar0000644000175000017500000000021512633316117023627 0ustar frankfrank%token TOKEN_ONE TOKEN_TWO %% start: one | '+' ; one: '-' {} ; unused_one: TOKEN_ONE ; unused_two: TOKEN_TWO ; bisonc++-4.13.01/documentation/regression/notused/doc0000644000175000017500000000012112633316117021446 0ustar frankfrank This grammar contains 2 unused non-terminals and 2 unused terminal symbols. bisonc++-4.13.01/documentation/regression/duplicate/0000755000175000017500000000000012633316117021255 5ustar frankfrankbisonc++-4.13.01/documentation/regression/duplicate/parser/0000755000175000017500000000000012633316117022551 5ustar frankfrankbisonc++-4.13.01/documentation/regression/duplicate/parser/grammar0000644000175000017500000000005012633316117024115 0ustar frankfrank%token NR %% start: NR | NR ; bisonc++-4.13.01/documentation/regression/annotations/0000755000175000017500000000000012633316117021640 5ustar frankfrankbisonc++-4.13.01/documentation/regression/annotations/scanner/0000755000175000017500000000000012633316117023271 5ustar frankfrankbisonc++-4.13.01/documentation/regression/annotations/scanner/lexer0000644000175000017500000000036712633316117024341 0ustar frankfrank%filenames scanner %interactive %% [ \t] ; [0-9]+ return Parser::INT; "."[0-9]* | [0-9]+("."[0-9]*)? 
return Parser::DOUBLE; .|\n return matched()[0]; bisonc++-4.13.01/documentation/regression/annotations/scanner/scanner.ih0000644000175000017500000000007012633316117025241 0ustar frankfrank#include "scanner.h" #include "../parser/parserbase.h" bisonc++-4.13.01/documentation/regression/annotations/scanner/scanner.h0000644000175000017500000000203512633316117025073 0ustar frankfrank// Generated by Flexc++ V0.93.00 on Mon, 20 Feb 2012 12:50:45 +0100 #ifndef Scanner_H_INCLUDED_ #define Scanner_H_INCLUDED_ // $insert baseclass_h #include "scannerbase.h" // $insert classHead class Scanner: public ScannerBase { public: explicit Scanner(std::istream &in = std::cin, std::ostream &out = std::cout); // $insert lexFunctionDecl int lex(); private: int lex__(); int executeAction__(size_t ruleNr); void print(); void preCode(); // re-implement this function for code that must // be exec'ed before the patternmatching starts void postCode(PostEnum__); }; inline void Scanner::postCode(PostEnum__) {} // $insert scannerConstructors inline Scanner::Scanner(std::istream &in, std::ostream &out) : ScannerBase(in, out) {} inline void Scanner::preCode() { // optionally replace by your own code } inline void Scanner::print() { print__(); } #endif // Scanner_H_INCLUDED_ bisonc++-4.13.01/documentation/regression/annotations/parser/0000755000175000017500000000000012633316117023134 5ustar frankfrankbisonc++-4.13.01/documentation/regression/annotations/parser/_display2.cc0000644000175000017500000000024312633316117025330 0ustar frankfrank#include "parser.ih" void Parser::display(double x) { cerr << "RPN: " << d_rpn.str() << endl; cerr << "double: " << x << endl; d_rpn.str(string()); } bisonc++-4.13.01/documentation/regression/annotations/parser/bgram0000644000175000017500000000343312633316117024152 0ustar frankfrank %union { int i; double d; }; %token INT DOUBLE %type intExpr %type doubleExpr %left '+' %left '*' %right UnaryMinus //= %% //RULES lines: lines line | line ; line: intExpr '\n' { display($1); } | doubleExpr '\n' { display($1); } | '\n' { done(); } | error '\n' { reset(); } ; intExpr: intExpr '*' intExpr { $$ = exec('*', $1, $3); } | intExpr '+' intExpr { $$ = exec('+', $1, $3); } | '(' intExpr ')' { $$ = $2; } | '-' intExpr %prec UnaryMinus { $$ = neg($2); } | INT { $$ = convert(); } ; doubleExpr: doubleExpr '*' doubleExpr { $$ = exec('*', $1, $3); } | doubleExpr '*' intExpr { $$ = exec('*', $1, d($3)); } | intExpr '*' doubleExpr { $$ = exec('*', d($1), $3); } | doubleExpr '+' doubleExpr { $$ = exec('+', $1, $3); } | doubleExpr '+' intExpr { $$ = exec('+', $1, d($3)); } | intExpr '+' doubleExpr { $$ = exec('+', d($1), $3); } | '(' doubleExpr ')' { $$ = $2; } | '-' doubleExpr %prec UnaryMinus { $$ = neg($2); } | DOUBLE { $$ = convert(); } ; //= bisonc++-4.13.01/documentation/regression/annotations/parser/parser.h0000644000175000017500000000322012633316117024576 0ustar frankfrank// Generated by Bisonc++ V4.10.00 on Mon, 27 Apr 2015 13:18:19 +0200 #ifndef Parser_h_included #define Parser_h_included // $insert baseclass #include "parserbase.h" // $insert scanner.h #include "../scanner/scanner.h" #include "../_a2x.h" #undef Parser class Parser: public ParserBase { std::ostringstream d_rpn; // $insert scannerobject Scanner d_scanner; public: int parse(); private: void display(int x); void display(double x); void done() const; static double d(int i) { return i; } template Type exec(char c, Type left, Type right) { d_rpn << " " << c << " "; return c == '*' ? 
left * right : left + right; } template Type neg(Type op) { d_rpn << " n "; return -op; } template Type convert() { Type ret = A2x(d_scanner.matched()); d_rpn << " " << ret << " "; return ret; } void reset(); void error(char const *msg); // called on (syntax) errors int lex(); // returns the next token from the // lexical scanner. void print(); // use, e.g., d_token, d_loc // support functions for parse(): void executeAction(int ruleNr); void errorRecovery(); int lookup(bool recovery); void nextToken(); void print__(); void exceptionHandler__(std::exception const &exc); }; #endif bisonc++-4.13.01/documentation/regression/annotations/parser/grammar0000644000175000017500000000353112633316117024507 0ustar frankfrank//DECLARATION %filenames parser %scanner ../scanner/scanner.h %union { int i; double d; }; %token INT DOUBLE %type intExpr %type doubleExpr %left '+' %left '*' %right UnaryMinus //= %% //RULES lines: lines line | line ; line: intExpr '\n' { display($1); } | doubleExpr '\n' { display($1); } | '\n' { done(); } | error '\n' { reset(); } ; intExpr: intExpr '*' intExpr { $$ = exec('*', $1, $3); } | intExpr '+' intExpr { $$ = exec('+', $1, $3); } | '(' intExpr ')' { $$ = $2; } | '-' intExpr %prec UnaryMinus { $$ = neg($2); } | INT { $$ = convert(); } ; doubleExpr: doubleExpr '*' doubleExpr { $$ = exec('*', $1, $3); } | doubleExpr '*' intExpr { $$ = exec('*', $1, d($3)); } | intExpr '*' doubleExpr { $$ = exec('*', d($1), $3); } | doubleExpr '+' doubleExpr { $$ = exec('+', $1, $3); } | doubleExpr '+' intExpr { $$ = exec('+', $1, d($3)); } | intExpr '+' doubleExpr { $$ = exec('+', d($1), $3); } | '(' doubleExpr ')' { $$ = $2; } | '-' doubleExpr %prec UnaryMinus { $$ = neg($2); } | DOUBLE { $$ = convert(); } ; //= bisonc++-4.13.01/documentation/regression/annotations/parser/parser.ih0000644000175000017500000000164312633316117024756 0ustar frankfrank// Generated by Bisonc++ V4.10.00 on Mon, 27 Apr 2015 13:18:19 +0200 // Include this file in the sources of the class Parser. // $insert class.h #include "parser.h" inline void Parser::error(char const *msg) { std::cerr << msg << '\n'; } // $insert lex inline int Parser::lex() { return d_scanner.lex(); } inline void Parser::print() { print__(); // displays tokens if --print was specified } inline void Parser::exceptionHandler__(std::exception const &exc) { throw; // re-implement to handle exceptions thrown by actions } // Add here includes that are only required for the compilation // of Parser's sources. 
// UN-comment the next using-declaration if you want to use // int Parser's sources symbols from the namespace std without // specifying std:: #include #include #include using namespace std; bisonc++-4.13.01/documentation/regression/annotations/parser/_reset.cc0000644000175000017500000000013412633316117024722 0ustar frankfrank#include "parser.ih" void Parser::reset() { d_rpn.clear(); d_rpn.str(string()); } bisonc++-4.13.01/documentation/regression/annotations/parser/_done.cc0000644000175000017500000000013412633316117024525 0ustar frankfrank#include "parser.ih" void Parser::done() const { cout << "Good bye\n"; ACCEPT(); } bisonc++-4.13.01/documentation/regression/annotations/parser/_display1.cc0000644000175000017500000000023512633316117025330 0ustar frankfrank#include "parser.ih" void Parser::display(int x) { cerr << "RPN: " << d_rpn.str() << endl; cerr << "int: " << x << endl; d_rpn.str(string()); } bisonc++-4.13.01/documentation/regression/annotations/doc0000644000175000017500000000126412633316117022333 0ustar frankfrankThis example is found in the C++ Annotations, available from http://www.icce.rug.nl/documents/ The example defines a calculator accepting mixed-expressions (int and double operands). It focuses on the mixed-type operands, and converts the expressions to Reversed Polish Notation (HP-calculator type) expressions. Only the +, the *, the unary - and nested expressions are implemented. Expression values are printed as cut-off integral values. Internally, double-arithmetic is used. Enter one expression per line. Error recovery is provided by skipping all information on one line when a syntax error is encountered. Enter an empty line to terminate the program. bisonc++-4.13.01/documentation/regression/annotations/_a2x.h0000644000175000017500000000251212633316117022642 0ustar frankfrank#ifndef _INCLUDED_BOBCAT_A2X_ #define _INCLUDED_BOBCAT_A2X_ #include #include class A2x: public std::istringstream { static bool s_lastFail; public: A2x(); A2x(char const *txt); // initialize from text A2x(std::string const &str); A2x(A2x const &other); template operator Type(); template Type to(); A2x &operator=(char const *txt); A2x &operator=(std::string const &str); A2x &operator=(A2x const &other); static bool lastFail(); }; inline A2x::A2x() {} inline A2x::A2x(char const *txt) // initialize from text : std::istringstream(txt) {} inline A2x::A2x(std::string const &str) : std::istringstream(str.c_str()) {} inline A2x::A2x(A2x const &other) : std::istringstream(other.str()) {} template inline Type A2x::to() { Type t; return (s_lastFail = (*this >> t).fail()) ? Type() : t; } template inline A2x::operator Type() { return to(); } inline A2x &A2x::operator=(std::string const &str) { return operator=(str.c_str()); } inline A2x &A2x::operator=(A2x const &other) { return operator=(other.str()); } inline bool A2x::lastFail() { return s_lastFail; } #endif bisonc++-4.13.01/documentation/regression/annotations/_data.cc0000644000175000017500000000000112633316117023206 0ustar frankfrank bisonc++-4.13.01/documentation/regression/annotations/demo.cc0000644000175000017500000000126312633316117023075 0ustar frankfrank#include "parser/parser.h" bool A2x::s_lastFail = false; using namespace std; int main(int argc, char **argv) { cout << "Enter (nested) expressions ONLY consisting of *, + and " "unary - operators.\n" "The expressions are evaluated and converted to RPN notation\n" "Use int or double operands. 
Blanks and tabs are ignored.\n" "Types are propagated over expressions, but all calculations are\n" "based on int values. An empty line ends the program.\n" "Use any program argument to view parser debug output\n"; Parser parser; parser.setDebug(argc > 1); return parser.parse(); } bisonc++-4.13.01/documentation/regression/location/0000755000175000017500000000000012633316117021113 5ustar frankfrankbisonc++-4.13.01/documentation/regression/location/scanner/0000755000175000017500000000000012633316117022544 5ustar frankfrankbisonc++-4.13.01/documentation/regression/location/scanner/lexer0000644000175000017500000000075312633316117023613 0ustar frankfrank%interactive %filenames scanner %% [ \t]+ // Often used: skip white space [0-9]+ { *d_val = atoi(matched().c_str()); return Parser::NR; } .|\n { d_loc->first_line = lineNr() - 1; return matched()[0]; } bisonc++-4.13.01/documentation/regression/location/scanner/scanner.ih0000644000175000017500000000007112633316117024515 0ustar frankfrank#include "../parser/Parserbase.h" #include "scanner.h" bisonc++-4.13.01/documentation/regression/location/scanner/scanner.h0000644000175000017500000000212312633316117024344 0ustar frankfrank// Generated by Flexc++ V0.93.00 on Mon, 20 Feb 2012 11:31:55 +0100 #ifndef Scanner_H_INCLUDED_ #define Scanner_H_INCLUDED_ // $insert baseclass_h #include "scannerbase.h" #include "../parser/Parserbase.h" // $insert classHead class Scanner: public ScannerBase { Parser::LTYPE__ *d_loc; Parser::STYPE__ *d_val; public: Scanner(Parser::LTYPE__ *loc, Parser::STYPE__ *val); // $insert lexFunctionDecl int lex(); private: int lex__(); int executeAction__(size_t ruleNr); void print(); void preCode(); // re-implement this function for code that must // be exec'ed before the patternmatching starts void postCode(PostEnum__); }; inline void Scanner::postCode(PostEnum__) {} inline Scanner::Scanner(Parser::LTYPE__ *loc, Parser::STYPE__ *val) : ScannerBase(std::cin, std::cout), d_loc(loc), d_val(val) {} inline void Scanner::preCode() { // optionally replace by your own code } inline void Scanner::print() { print__(); } #endif // Scanner_H_INCLUDED_ bisonc++-4.13.01/documentation/regression/location/parser/0000755000175000017500000000000012633316117022407 5ustar frankfrankbisonc++-4.13.01/documentation/regression/location/parser/bgram0000644000175000017500000000055612633316117023430 0ustar frankfrank %token NR %% lines: lines line // 1 | // 2 ; line: content // 3 '\n' { std::cout << "Line token newline at " << @2.first_line << "\n"; } ; content: expr // 4 | error // 5 | // empty ; expr: NR // 7 { std::cout << "NR returns value " << $1 << "\n"; } ; bisonc++-4.13.01/documentation/regression/location/parser/parser.h0000644000175000017500000000272012633316117024055 0ustar frankfrank#ifndef Parser_h_included #define Parser_h_included // this file was originally generated by bisonc++, then modified and renamed // to parser.h so that it was kept when the `run' script was executed. 
When // executing `run', compare the contents of this file and those of Parser.h, // to see what local modifications were made: only the Parser constructor was // added (and this comment, of course) // $insert scanner.h #include "../scanner/scanner.h" // $insert baseclass #include "Parserbase.h" #undef Parser class Parser: public ParserBase { // $insert scannerobject Scanner d_scanner; public: Parser() : d_scanner(&d_loc__, &d_val__) {} int parse(); private: void error(char const *msg); // called on (syntax) errors int lex(); // returns the next token from the // lexical scanner. void print(); // use, e.g., d_token, d_loc // support functions for parse(): void executeAction(int ruleNr); void errorRecovery(); int lookup(bool recovery); void nextToken(); void print__(); }; inline void Parser::error(char const *msg) { std::cerr << msg << '\n'; } // $insert lex inline int Parser::lex() { return d_scanner.lex(); } inline void Parser::print() { print__(); // displays tokens if --print was specified } #endif bisonc++-4.13.01/documentation/regression/location/parser/grammar0000644000175000017500000000064412633316117023764 0ustar frankfrank%scanner ../scanner/scanner.h %lsp-needed %token NR %% lines: lines line // 1 | line // 2 ; line: content // 3 '\n' { std::cout << " Line token newline at " << @2.first_line << "\n"; } ; content: expr // 4 | error // 5 | // empty ; expr: NR // 7 { std::cout << " NR returns value " << $1 << "\n"; } ; bisonc++-4.13.01/documentation/regression/location/doc0000644000175000017500000000127012633316117021603 0ustar frankfrankThis example shows the use of the standard location stack as well as passing d_val to the scanner. The use of the standard location stack is indicated so by the %lsp-needed directive. In this case, the %scanner directive can be used too, but the parser probably needs a constructor passing d_loc and d_val to the scanner. In the example this is accomplished using the following in-line Parser constructor: Parser() : d_scanner(&d_loc, &d_val) {} The grammar itself is trivial. It only accepts empty lines or lines containing one integral numerical value. Any other input results in an error (which is handled). To terminate the program, ^C must be used. bisonc++-4.13.01/documentation/regression/location/demo.cc0000644000175000017500000000075012633316117022350 0ustar frankfrank/* example.cc */ #include #include "parser/parser.h" using namespace std; int main(int argc, char **argv) { cout << "Enter integral numbers or empty lines.\n" "Other lines should result in a `syntax error'\n" "Blanks and tabs are ignored. Use ^c or ^d to end the program\n" "Use any program argument to view parser debug output\n"; Parser parser; parser.setDebug(argc > 1); return parser.parse(); } bisonc++-4.13.01/documentation/man/0000755000175000017500000000000012634617626015710 5ustar frankfrankbisonc++-4.13.01/documentation/man/options0000644000175000017500000005103712633316117017322 0ustar frankfrank Where available, single letter options are listed between parentheses beyond their associated long-option variants. Single letter options require arguments if their associated long options require arguments. Options affecting the class header or implementation header file are ignored if these files already exist. Options accepting a `filename' do not accept path names, i.e., they cannot contain directory separators (tt(/)); options accepting a 'pathname' may contain directory separators. Some options may generate errors. 
This happens when an option conflicts with the contents of a file which bic() cannot modify (e.g., a parser class header file exists, but doesn't define a name space, but a tt(--namespace) option was provided). To solve the error the offending option could be omitted, the existing file could be removed, or the existing file could be hand-edited according to the option's specification. Note that bic() currently does not handle the opposite error condition: if a previously used option is omitted, then bic() does not detect the inconsistency. In those cases compilation errors may be generated. COMMENT( class header: warn for class-name mismatch warn for not including baseclass-header warn for namespace mismatch if the 'scanner' option was provided: warn for missing "scanner" include-spec warn for missing Scanner d_scanner spec. implementation header: warn for class-name mismatch (in inline defined members) warn for not including the class header warn for namespace mismatch warn for a mismatch in the scanner-token-function name END) itemization( it() loption(analyze-only) (soption(A))nl() Only analyze the grammar. No files are (re)written. This option can be used to test the grammatic correctness of modification `in situ', without overwriting previously generated files. If the grammar contains syntactic errors only syntax analysis is performed. it() lsoption(baseclass-header)(b)(filename)nl() tt(Filename) defines the name of the file to contain the parser's base class. This class defines, e.g., the parser's symbolic tokens. Defaults to the name of the parser class plus the suffix tt(base.h). It is generated, unless otherwise indicated (see tt(--no-baseclass-header) and tt(--dont-rewrite-baseclass-header) below). It is an error if this option is used and an already existing parser class header file does not contain tt(#include "filename"). it() label(PREINCLUDE) lsoption(baseclass-preinclude)(H)(pathname)nl() tt(Pathname) defines the path to the file preincluded in the parser's base-class header. This option is needed in situations where the base class header file refers to types which might not yet be known. E.g., with polymorphic semantic values a tt(std::string) value type might be used. Since the tt(string) header file is not by default included in tt(parserbase.h) we somehow need to inform the compiler about this and possibly other headers. The suggested procedure is to use a pre-include header file declaring the required types. By default `tt(header)' is surrounded by double quotes: tt(#include "header") is used when the option tt(-H header) is specified. When the argument is surrounded by pointed brackets tt(#include
<header>) is included. In the latter case, quotes might be required to escape interpretation by the shell (e.g., using tt(-H '<header>
')).
it() lsoption(baseclass-skeleton)(B)(pathname)nl() tt(Pathname) defines the path name to the file containing the skeleton of the parser's base class. It defaults to the installation-defined default path name (e.g., tt(/usr/share/bisonc++/) plus tt(bisonc++base.h)).
it() lsoption(class-header)(c)(filename)nl() tt(Filename) defines the name of the file to contain the parser class. Defaults to the name of the parser class plus the suffix tt(.h). It is an error if this option is used and an already existing implementation header file does not contain tt(#include "filename").
it() loption(class-name) tt(className) nl() Defines the name of the bf(C++) class that is generated. If neither this option, nor the tt(%class-name) directive is specified, then the default class name (tt(Parser)) is used. It is an error if this option is used and an already existing parser-class header file does not define tt(class `className') and/or if an already existing implementation header file does not define members of the class tt(`className').
it() lsoption(class-skeleton)(C)(pathname)nl() tt(Pathname) defines the path name to the file containing the skeleton of the parser class. It defaults to the installation-defined default path name (e.g., tt(/usr/share/bisonc++/) plus tt(bisonc++.h)).
it() loption(construction)nl() Details about the construction of the parsing tables are written to the same file as written by the tt(--verbose) option (i.e., tt(<grammar>.output), where tt(<grammar>) is the input file read by bic()). This information is primarily useful for developers. It augments the information written to the verbose grammar output file, generated by the tt(--verbose) option.
it() loption(debug)nl() Provide tt(parse) and its support functions with debugging code, showing the actual parsing process on the standard output stream. When included, the debugging output is active by default, but its activity may be controlled using the tt(setDebug(bool on-off)) member. An tt(#ifdef DEBUG) macro is not supported by bic(). Rerun bic() without the tt(--debug) option to remove the debugging code.
it() label(ERRORVERBOSE)loption(error-verbose)nl() When a syntactic error is reported, the generated parse function dumps the parser's state stack to the standard output stream. The stack dump shows on separate lines a stack index followed by the state stored at the indicated stack element. The first stack element is the stack's top element.
it() lsoption(filenames)(f)(filename)nl() tt(Filename) is a generic file name that is used for all header files generated by bic(). Options defining specific file names are also available (which then, in turn, overrule the name specified by this option).
it() loption(flex)nl() Bic() generates code calling tt(d_scanner.yylex()) to obtain the next lexical token, and calling tt(d_scanner.YYText()) for the matched text, unless overruled by options or directives explicitly defining these functions. By default, the interface defined by bf(flexc++)(1) is used. This option is only interpreted if the tt(--scanner) option or tt(%scanner) directive is also used.
it() loption(help) (soption(h))nl() Write basic usage information to the standard output stream and terminate.
it() lsoption(implementation-header)(i)(filename)nl() tt(Filename) defines the name of the file to contain the implementation header. It defaults to the name of the generated parser class plus the suffix tt(.hh).
The implementation header should contain all directives and declarations em(only) used by the implementations of the parser's member functions. It is the only header file that is included by the source file containing tt(parse)'s implementation. User defined implementations of other class members may use the same convention, thus concentrating all directives and declarations that are required for the compilation of other source files belonging to the parser class in one header file.
it() lsoption(implementation-skeleton)(I)(pathname)nl() tt(Pathname) defines the path name to the file containing the skeleton of the implementation header. It defaults to the installation-defined default path name (e.g., tt(/usr/share/bisonc++/) plus tt(bisonc++.hh)).
it() loption(insert-stype)nl() This option is only effective if the tt(debug) option (or tt(%debug) directive) has also been specified. When tt(insert-stype) has been specified the parsing function's debug output also shows selected semantic values. It should only be used if objects or variables of the semantic value type tt(STYPE__) can be inserted into tt(ostreams).
it() label(MAXDEPTH) laoption(max-inclusion-depth)(value)nl() Set the maximum number of nested grammar files. Defaults to 10.
it() loption(namespace) tt(identifier) nl() Define all of the code generated by bic() in the name space tt(identifier). By default no name space is defined. If this option is used the implementation header is provided with a commented out tt(using namespace) declaration for the specified name space. In addition, the parser and parser base class header files also use the specified namespace to define their include guard directives. It is an error if this option is used and an already existing parser-class header file and/or implementation header file does not define tt(namespace identifier).
it() loption(no-baseclass-header)nl() Do not write the file containing the parser class' base class, even if that file doesn't yet exist. By default the file containing the parser's base class is (re)written each time bic() is called. Note that this option should normally be avoided, as the base class defines the symbolic terminal tokens that are returned by the lexical scanner. When the construction of this file is suppressed, modifications of these terminal tokens are not communicated to the lexical scanner.
it() loption(no-decoration) (soption(D))nl() Do not include the user-defined actions when generating the parser's tt(parse) member. This effectively generates a parser which merely performs syntactic checks, without performing the actions which are normally executed when rules have been matched. This may be useful in situations where a (partially or completely) decorated grammar is reorganized, and the syntactic correctness of the modified grammar must be verified, or in situations where the grammar has already been decorated, but functions which are called from the rules' actions have not yet been implemented.
it() loption(no-lines)nl() Do not put tt(#line) preprocessor directives in the file containing the parser's tt(parse) function. By default the file containing the parser's tt(parse) function contains tt(#line) preprocessor directives; these directives allow compilers and debuggers to associate errors with lines in your grammar specification file, rather than with the source file containing the tt(parse) function itself.
it() loption(no-parse-member)nl() Do not write the file containing the parser's predefined parser member functions, even if that file doesn't yet exist. By default the file containing the parser's tt(parse) member function is (re)written each time bic() is called. Note that this option should normally be avoided, as this file contains parsing tables which are altered whenever the grammar definition is modified.
it() loption(own-debug)nl() Extensively displays the actions performed by bic()'s parser when it processes the grammar specification file(s). This implies the tt(--verbose) option.
it() loption(own-tokens) (soption(T))nl() The tokens returned as well as the text matched when bic() reads its input file(s) are shown when this option is used. This option does em(not) result in the generated parsing function displaying returned tokens and matched text. If that is what you want, use the tt(--print-tokens) option.
it() lsoption(parsefun-skeleton)(P)(pathname)nl() tt(Pathname) defines the path name of the file containing the parsing member function's skeleton. It defaults to the installation-defined default path name (e.g., tt(/usr/share/bisonc++/) plus tt(bisonc++.cc)).
it() lsoption(parsefun-source)(p)(filename)nl() tt(Filename) defines the name of the source file to contain the parser member function tt(parse). Defaults to tt(parse.cc).
it() lsoption(polymorphic-skeleton)(M)(pathname)nl() tt(Pathname) defines the path name of the file containing the skeleton of the polymorphic template classes. It defaults to the installation-defined default path name (e.g., tt(/usr/share/bisonc++/) plus tt(bisonc++polymorphic)).
it() lsoption(polymorphic-inline-skeleton)(m)(pathname)nl() tt(Pathname) defines the path name of the file containing the skeleton of the inline implementations of the members of the polymorphic template classes. It defaults to the installation-defined default path name (e.g., tt(/usr/share/bisonc++/) plus tt(bisonc++polymorphic.inline)).
it() loption(print-tokens) (soption(t))nl() The generated parsing function implements a function tt(print__) displaying (on the standard output stream) the tokens returned by the parser's scanner as well as the corresponding matched text. This implementation is suppressed when the parsing function is generated without using this option. The member tt(print__) is called from tt(Parser::print), which is defined in-line in the parser's class header. Calling tt(Parser::print__) can thus easily be controlled from tt(print), using, e.g., a variable that is set by the program using the parser generated by bic(). This option does em(not) show the tokens returned and text matched by bic() itself when it is reading its input file(s). If that is what you want, use the tt(--own-tokens) option.
it() label(REQUIRED) laoption(required-tokens)(number)nl() Following a syntactic error, require at least tt(number) successfully processed tokens before another syntactic error can be reported. By default tt(number) is zero.
it() label(SCANOPT) lsoption(scanner)(s)(pathname)nl() tt(Pathname) defines the path name to the file defining the scanner's class interface (e.g., tt("../scanner/scanner.h")). When this option is used the parser's member tt(int lex()) is predefined as verb( int Parser::lex() { return d_scanner.lex(); } ) and an object tt(Scanner d_scanner) is composed into the parser (but see also option tt(scanner-class-name)). The example shows the function that's called by default.
When the tt(--flex) option (or tt(%flex) directive) is specified the function tt(d_scanner.yylex()) is called. Any other function to call can be specified using the tt(--scanner-token-function) option (or tt(%scanner-token-function) directive). By default bic() surrounds tt(pathname) by double quotes (using, e.g., tt(#include "pathname")). When tt(pathname) is surrounded by pointed brackets tt(#include <pathname>) is included. It is an error if this option is used and an already existing parser class header file does not include tt(`pathname').
it() loption(scanner-class-name) tt(scannerClassName) nl() Defines the name of the scanner class, declared by the tt(pathname) header file that is specified with the tt(scanner) option or directive. By default the class name tt(Scanner) is used. It is an error if this option is used and either the tt(scanner) option was not provided, or the parser class interface in an already existing parser class header file does not declare a scanner class tt(d_scanner) object.
it() loption(scanner-debug)nl() Show the scanner's matched rules and returned tokens. This offers an extensive display of the rules and tokens matched and returned by bic()'s scanner, not just of the tokens and matched text received by bic(). If that is what you want, use the tt(--own-tokens) option.
it() laoption(scanner-matched-text-function)(function-call)nl() The scanner function returning the text that was matched at the last call of the scanner's token function. A complete function call expression should be provided (including a scanner object, if used). This option overrules the tt(d_scanner.matched()) call used by default when the tt(%scanner) directive is specified, and it overrules the tt(d_scanner.YYText()) call used when the tt(%flex) directive is provided. Example: verb( --scanner-matched-text-function "myScanner.matchedText()" )
it() laoption(scanner-token-function)(function-call)nl() The scanner function returning the next token, called from the parser's tt(lex) function. A complete function call expression should be provided (including a scanner object, if used). This option overrules the tt(d_scanner.lex()) call used by default when the tt(%scanner) directive is specified, and it overrules the tt(d_scanner.yylex()) call used when the tt(%flex) directive is provided. Example: verb( --scanner-token-function "myScanner.nextToken()" ) It is an error if this option is used and the scanner token function is not called from the code in an already existing implementation header.
it() loption(show-filenames)nl() Writes the names of the generated files to the standard error stream.
it() lsoption(skeleton-directory)(S)(directory)nl() Specifies the directory containing the skeleton files. This option can be overridden by the specific skeleton-specifying options (tt(-B, -C, -H, -I, -M) and tt(-m)).
it() laoption(target-directory)(pathname) nl() tt(Pathname) defines the directory where generated files should be written. By default this is the directory where bic() is called.
it() loption(thread-safe)nl() No static data are modified, making the generated parser thread-safe.
it() loption(usage)nl() Write basic usage information to the standard output stream and terminate.
it() loption(verbose) (soption(V))nl() Write a file containing verbose descriptions of the parser states and what is done for each type of look-ahead token in that state. This file also describes all conflicts detected in the grammar, both those resolved by operator precedence and those that remain unresolved.
It is not created by default, but if requested the information is written on tt(.output), where tt() is the grammar specification file passed to bic(). it() loption(version) (soption(v))nl() Display bic()'s version number and terminate. ) bisonc++-4.13.01/documentation/man/calculator/0000755000175000017500000000000012633316117020027 5ustar frankfrankbisonc++-4.13.01/documentation/man/calculator/scanner/0000755000175000017500000000000012633316117021460 5ustar frankfrankbisonc++-4.13.01/documentation/man/calculator/scanner/lexer0000644000175000017500000000040412633316117022520 0ustar frankfrank%interactive %filenames scanner %% [ \t]+ // skip white space \n return Parser::EOLN; [0-9]+ return Parser::NUMBER; . return matched()[0]; %% bisonc++-4.13.01/documentation/man/calculator/scanner/scanner.ih0000644000175000017500000000012112633316117023425 0ustar frankfrank#include "scanner.h" #include "../parser/parserbase.h" // end of scanner.ih bisonc++-4.13.01/documentation/man/calculator/scanner/scanner.h0000644000175000017500000000171412633316117023265 0ustar frankfrank// Generated by Flexc++ V0.98.00 on Wed, 06 Mar 2013 15:04:03 +0100 #ifndef Scanner_H_INCLUDED_ #define Scanner_H_INCLUDED_ // $insert baseclass_h #include "scannerbase.h" // $insert classHead class Scanner: public ScannerBase { public: explicit Scanner(std::istream &in = std::cin, std::ostream &out = std::cout); // $insert lexFunctionDecl int lex(); private: int lex__(); int executeAction__(size_t ruleNr); void print(); void preCode(); // re-implement this function for code that must // be exec'ed before the patternmatching starts }; // $insert scannerConstructors inline Scanner::Scanner(std::istream &in, std::ostream &out) : ScannerBase(in, out) {} inline void Scanner::preCode() { // optionally replace by your own code } inline void Scanner::print() { print__(); } #endif // Scanner_H_INCLUDED_ bisonc++-4.13.01/documentation/man/calculator/build0000755000175000017500000000057612633316117021064 0ustar frankfrank#!/bin/bash case $1 in (clean) rm -rf scanner/lex.cc scanner/scannerbase.h calculator parser/parse.* ;; (demo) cd parser bisonc++ grammar cd ../scanner flexc++ lexer cd .. g++ --std=c++11 -Wall -o calculator *.cc */*.cc ;; (*) echo "$0 [clean|demo] to clean or build the demo program" ;; esac bisonc++-4.13.01/documentation/man/calculator/parser/0000755000175000017500000000000012633316117021323 5ustar frankfrankbisonc++-4.13.01/documentation/man/calculator/parser/parser.h0000644000175000017500000000152212633316117022770 0ustar frankfrank#ifndef Parser_h_included #define Parser_h_included // $insert baseclass #include "parserbase.h" // $insert scanner.h #include "../scanner/scanner.h" #undef Parser class Parser: public ParserBase { // $insert scannerobject Scanner d_scanner; public: int parse(); private: void error(char const *msg); int lex(); void print(); void prompt(); void done(); // support functions for parse(): void executeAction(int ruleNr); void errorRecovery(); int lookup(bool recovery); void nextToken(); void print__(); }; inline void Parser::prompt() { std::cout << "? 
" << std::flush; } inline void Parser::done() { std::cout << "Done\n"; ACCEPT(); } #endif bisonc++-4.13.01/documentation/man/calculator/parser/grammar0000644000175000017500000000220212633316117022670 0ustar frankfrank%filenames parser %scanner ../scanner/scanner.h // lowest precedence %token NUMBER // integral numbers EOLN // newline %left '+' '-' %left '*' '/' %right UNARY // highest precedence %% expressions: expressions evaluate | prompt ; evaluate: alternative prompt ; prompt: { prompt(); } ; alternative: expression EOLN { cout << $1 << endl; } | 'q' done | EOLN | error EOLN ; done: { cout << "Done.\n"; ACCEPT(); } ; expression: expression '+' expression { $$ = $1 + $3; } | expression '-' expression { $$ = $1 - $3; } | expression '*' expression { $$ = $1 * $3; } | expression '/' expression { $$ = $1 / $3; } | '-' expression %prec UNARY { $$ = -$2; } | '+' expression %prec UNARY { $$ = $2; } | '(' expression ')' { $$ = $2; } | NUMBER { $$ = stoul(d_scanner.matched()); } ; bisonc++-4.13.01/documentation/man/calculator/parser/parserbase.h0000644000175000017500000000412012633316117023620 0ustar frankfrank// Generated by Bisonc++ V4.01.02 on Wed, 06 Mar 2013 15:26:21 +0100 #ifndef ParserBase_h_included #define ParserBase_h_included #include #include namespace // anonymous { struct PI__; } class ParserBase { public: // $insert tokens // Symbolic tokens: enum Tokens__ { NUMBER = 257, EOLN, UNARY, }; // $insert STYPE typedef int STYPE__; private: int d_stackIdx__; std::vector d_stateStack__; std::vector d_valueStack__; protected: enum Return__ { PARSE_ACCEPT__ = 0, // values used as parse()'s return values PARSE_ABORT__ = 1 }; enum ErrorRecovery__ { DEFAULT_RECOVERY_MODE__, UNEXPECTED_TOKEN__, }; bool d_debug__; size_t d_nErrors__; size_t d_requiredTokens__; size_t d_acceptedTokens__; int d_token__; int d_nextToken__; size_t d_state__; STYPE__ *d_vsp__; STYPE__ d_val__; STYPE__ d_nextVal__; ParserBase(); void ABORT() const; void ACCEPT() const; void ERROR() const; void clearin(); bool debug() const; void pop__(size_t count = 1); void push__(size_t nextState); void popToken__(); void pushToken__(int token); void reduce__(PI__ const &productionInfo); void errorVerbose__(); size_t top__() const; public: void setDebug(bool mode); }; inline bool ParserBase::debug() const { return d_debug__; } inline void ParserBase::setDebug(bool mode) { d_debug__ = mode; } inline void ParserBase::ABORT() const { throw PARSE_ABORT__; } inline void ParserBase::ACCEPT() const { throw PARSE_ACCEPT__; } inline void ParserBase::ERROR() const { throw UNEXPECTED_TOKEN__; } // As a convenience, when including ParserBase.h its symbols are available as // symbols in the class Parser, too. #define Parser ParserBase #endif bisonc++-4.13.01/documentation/man/calculator/parser/parser.ih0000644000175000017500000000131112633316117023135 0ustar frankfrank// Generated by Bisonc++ V4.01.02 on Wed, 06 Mar 2013 15:10:36 +0100 // Include this file in the sources of the class Parser. // $insert class.h #include "parser.h" inline void Parser::error(char const *msg) { std::cerr << msg << '\n'; } // $insert lex inline int Parser::lex() { return d_scanner.lex(); } inline void Parser::print() { print__(); // displays tokens if --print was specified } // Add here includes that are only required for the compilation // of Parser's sources. 
// UN-comment the next using-declaration if you want to use // int Parser's sources symbols from the namespace std without // specifying std:: using namespace std; bisonc++-4.13.01/documentation/man/calculator/README0000644000175000017500000000011212633316117020701 0ustar frankfrank Run `build clean' to cleanup, `build demo' to create the program. bisonc++-4.13.01/documentation/man/calculator/calculator.cc0000644000175000017500000000014112633316117022463 0ustar frankfrank#include "parser/parser.h" int main() { Parser calculator; return calculator.parse(); } bisonc++-4.13.01/documentation/man/bisonc++.yo0000644000175000017500000014172112633316117017660 0ustar frankfrankNOUSERMACRO(yyparse parse lex yylex error setDebug ParserBase throw ACCEPT ABORT errok RECOVERING print done prompt Parse yyrestart YYText debug setSval setLoc matched specification tag files member) includefile(../../release.yo) htmlstyle(body)(color: #27408B; background: #FFFAF0) whenhtml(mailto(Frank B. Brokken: f.b.brokken@rug.nl)) DEFINEMACRO(lsoption)(3)(\ bf(--ARG1)=tt(ARG3) (bf(-ARG2))\ ) DEFINEMACRO(laoption)(2)(\ bf(--ARG1)=tt(ARG2)\ ) DEFINEMACRO(loption)(1)(\ bf(--ARG1)\ ) DEFINEMACRO(soption)(1)(\ bf(-ARG1)\ ) DEFINEMACRO(itx)(0)() DEFINEMACRO(itemlist)(1)(ARG1) DEFINEMACRO(tr)(3)(\ row(cell(ARG1)cell()\ cell(ARG2)cell()\ cell(ARG3))) DEFINEMACRO(bic)(0)(bf(bisonc++)) DEFINEMACRO(Bic)(0)(bf(Bisonc++)) DEFINEMACRO(Cpp)(0)(bf(C++)) DEFINEMACRO(prot)(0)(tt((prot))) DELETEMACRO(tt) DEFINEMACRO(tt)(1)(em(ARG1)) COMMENT( man-request, section, date, distribution file, general name) manpage(bisonc++)(1)(_CurYrs_)(bisonc++._CurVers_.tar.gz) (bisonc++ parser generator) COMMENT( man-request, larger title ) manpagename(bisonc++)(Generate a C++ parser class and parsing function) COMMENT( all other: add after () ) manpagesynopsis() bf(bisonc++) [OPTIONS] tt(grammar-file) manpagesection(SECTIONS) This manual page contains the following sections: description( dit(1. DESCRIPTION) overview and short history of of bic(); dit(2. GENERATED FILES) files bic() may generate; dit(3. OPTIONS) Bic()'s command-line options; dit(4. DIRECTIVES) Bic()'s grammar-specification directives; dit(5. POLYMORPHIC SEMANTIC VALUES) How to use polymorphic semantic values in parsers generated by bic(); dit(6. PUBLIC MEMBERS AND -TYPES) Members and types that can be used by calling software; dit(7. PRIVATE ENUMS AND -TYPES) Enumerations and types only available to the tt(Parser) class; dit(8. PRIVATE MEMBER FUNCTIONS) Member functions that are only available to the tt(Parser) class; dit(9. PRIVATE DATA MEMBERS) Data members that are only available to the tt(Parser) class; dit(10. TYPES AND VARIABLES IN THE ANONYMOUS NAMESPACE) An overview of the types and variables that are used to define and store the grammar-tables generated by bic(); dit(11. RESTRICTIONS ON TOKEN NAMES) Name restrictions for user-defined symbols; dit(12. OBSOLETE SYMBOLS) Symbols available to bf(bison)(1), but not to bic(); dit(13. EXAMPLE) Guess what this is? dit(14. USING PARSER-CLASS SYMBOLS IN LEXICAL SCANNERS) How to refer to tt(Parser) tokens from within a lexical scanner; dit(15. FILES) (Skeleton) files used by bic(); dit(16. SEE ALSO) References to other programs and documentation; dit(17. BUGS) Some additional stuff that should not qualify as bugs. dit(18. ABOUT bisonc++) More history; dit(AUTHOR) At the end of this man-page. ) Looking for a specific section? Search for its number + a dot. manpagesection(1. 
DESCRIPTION) Bic() derives from previous work on bf(bison) by Alain Coetmeur (coetmeur@icdc.fr), who created in the early '90s a Cpp() class encapsulating the tt(yyparse) function as generated by the GNU-bf(bison) parser generator. Initial versions of bic() (up to version 0.92) wrapped Alain's program in a program offering a more modern user-interface, removing all old-style (bf(C)) tt(%define) directives from bf(bison++)'s input specification file (see below for an in-depth discussion of the differences between bf(bison++) and bic()). Starting with version 0.98, bic() represents a complete rebuilt of the parser generator, closely following descriptions given in Aho, Sethi and Ullman's em(Dragon Book). Since version 0.98 bic() is a Cpp() program, rather than a bf(C) program generating bf(C++) code. Bic() expands the concepts initially implemented in bf(bison) and bf(bison++), offering a cleaner setup of the generated parser class. The parser class is derived from a base-class, mainly containing the parser's token- and type-definitions as well as several member functions which should not be modified by the programmer. Most of these base-class members might also be defined directly in the parser class, but were defined in the parser's base-class. This design results in a very lean parser class, declaring only members that are actually defined by the programmer or that have to be defined by bic() itself (e.g., the member function tt(parse) as well as some support functions requiring access to facilities that are only available in the parser class itself, rather than in the parser's base class). This design does not require any virtual members: the members which are not involved in the actual parsing process may always be (re)implemented directly by the programmer. Thus there is no need to apply or define virtual member functions. In fact, there are only two public members in the parser class generated by bic(): tt(setDebug) (see below) and tt(parse). Remaining members are private, and those that can be redefined by the programmer using bic() usually receive initial, very simple default in-line implementations. The (partial) exception to this rule is the member function tt(lex), producing the next lexical token. For tt(lex) either a standardized interface or a mere declaration is offered (requiring the programmer to provide his/her own tt(lex) implementation). To enforce a primitive namespace, bf(bison) used a well-known naming-convention: all its public symbols started with tt(yy) or tt(YY). bf(Bison++) followed bf(bison) in this respect, even though a class by itself offers enough protection of its identifiers. Consequently, these tt(yy) and tt(YY) conventions are now outdated, and bic() does not generate or use symbols defined in either the parser (base) class or in its member functions starting with tt(yy) or tt(YY). Instead, following a suggestion by Lakos (2001), all data members start with tt(d_), and all static data members start with tt(s_). This convention was not introduced to enforce identifier protection, but to clarify the storage type of variables. Other (local) symbols lack specific prefixes. Furthermore, bic() allows its users to define the parser class in a particular namespace of their own choice. Bic() should be used as follows: itemization( it() As usual, a grammar must be defined. With bic() this is not different, and the reader is referred to bic()'s manual and other sources (like Aho, Sethi and Ullman's book) for details about how to specify and decorate grammars. 
it() The number and function of the various tt(%define) declarations as used by bf(bison++), however, is greatly modified. Actually, all of bf(bison)'s tt(%define) declarations were replaced by their (former) first arguments. Furthermore, `macro-style' declarations are no longer supported or required. Finally, all directives use lower-case characters only and do not contain underscore characters (but sometimes hyphens). E.g., tt(%define DEBUG) is now declared as tt(%debug); tt(%define LSP_NEEDED) is now declared as tt(%lsp-needed) (note the hyphen). it() As noted, no `macro style' tt(%define) declarations are required anymore. Instead, the normal practice of defining class members in source files and declaring them in a class header files can be adhered to using bic(). Basically, bic() concentrates on its main tasks: defining a parser class and implementing its parsing function tt(int parse), leaving all other parts of the parser class' definition to the programmer. it() Having specified the grammar and (usually) some directives bic() is able to generate files defining the parser class and to implement the member function tt(parse) and its support functions. See the next section for details about the various files that may be generated by bic(). it() All members (except for the member tt(parse) and its support functions) must be implemented by the programmer. Additional member functions should be declared in the parser class' header. At the very least the member tt(int lex()) em(must) be implemented (although a standard implementation can be generated by bic()). The member tt(lex) is called by tt(parse) to obtain the next available token. The member function tt(void error(char const *msg)) may also be re-implemented by the programmer, and a basic in-line implementation is provided by default. The member function tt(error) is called when tt(parse) detects (syntactic) errors. it() The parser can now be used in a program. A very simple example would be: verb( int main() { Parser parser; return parser.parse(); } ) ) manpagesection(2. GENERATED FILES) Bic() may create the following files: itemization( it() A file containing the implementation of the member function tt(parse) and its support functions. The member tt(parse) is a public member that can be called to parse a token-sequence according to a specified LALR1 type of grammar. By default the implementations of these members are written on the file tt(parse.cc). The programmer should not modify the contents of this file; it is rewritten every time bic() is called. it() A file containing an initial setup of the parser class, containing the declaration of the public member tt(parse) and of its (private) support members. New members may safely be declared in the parser class, as it is only created by bic() if not yet existing, using the filename tt(.h) (where tt() is the the name of the defined parser class). it() A file containing the parser class' em(base class). This base class should not be modified by the programmer. It contains types defined by bic(), as well as several (protected) data members and member functions, which should not be redefined by the programmer. All symbolic parser terminal tokens are defined in this class, thereby escalating these definitions to a separate class (cf. 
Lakos, (2001)), which in turn prevents circular dependencies between the lexical scanner and the parser (here, circular dependencies may easily be encountered, as the parser needs access to the lexical scanner class when defining the lexical scanner as one of its data members, whereas the lexical scanner needs access to the parser class to know about the grammar's symbolic terminal tokens; escalation is a way out of such circular dependencies). By default this file is (re)written any time bic() is called, using the filename tt(base.h). it() A file containing an em(implementation header). The implementation header rather than the parser's class header file should be included by the parser's source files implementing member functions declared by the programmer. The implementation header first includes the parser class's header file, and then provides default in-line implementations for its members tt(error) and tt(print) (which may be altered by the programmer). The member tt(lex) may also receive a standard in-line implementation. Alternatively, its implementation can be provided by the programmer (see below). Any directives and/or namespace directives required for the proper compilation of the parser's additional member functions should be declared next. The implementation header is included by the file defining tt(parse). By default the implementation header is created if not yet existing, receiving the filename tt(.ih). it() A verbose description of the generated parser. This file is comparable to the verbose ouput file originally generated by bf(bison++). It is generated when the option tt(--verbose) or tt(-V) is provided. If so, bic() writes the file tt(.output), where tt() is the name of the file containing the grammar definition. ) manpagesection(3. OPTIONS) includefile(../manual/invoking/options.yo) manpagesection(4. DIRECTIVES) The following directives can be specified in the initial section of the grammar specification file. When command-line options for directives exist, they overrule the corresponding directives given in the grammar specification file. Directives affecting the class header or implementation header file are ignored if these files already exist. Directives accepting a `filename' do not accept path names, i.e., they cannot contain directory separators (tt(/)); directives accepting a 'pathname' may contain directory separators. A 'pathname' using blank characters should be surrounded by double quotes. Some directives may generate errors. This happens when a directive conflicts with the contents of a file which bic() cannot modify (e.g., a parser class header file exists, but doesn't define a name space, but a tt(%namespace) directive was provided). To solve such errore the offending directive could be omitted, the existing file could be removed, or the existing file could be hand-edited according to the directive's specification. itemization( it() bf(%baseclass-header) tt(filename) nl() tt(Filename) defines the name of the file to contain the parser's base class. This class defines, e.g., the parser's symbolic tokens. Defaults to the name of the parser class plus the suffix tt(base.h). This directive is overruled by the bf(--baseclass-header) (bf(-b)) command-line option. It is an error if this directive is used and an already existing parser class header file does not contain tt(#include "filename"). it() bf(%baseclass-preinclude) tt(pathname)nl() tt(Pathname) defines the path to the file preincluded by the parser's base-class header. 
See the description of the tt(--baseclass-preinclude) option for details about this directive. By default, bic() surrounds tt(header) by double quotes. However, when tt(header) itself is surrounded by pointed brackets tt(#include
) is included. it() bf(%class-header) tt(filename) nl() tt(Filename) defines the name of the file to contain the parser class. Defaults to the name of the parser class plus the suffix tt(.h) This directive is overruled by the bf(--class-header) (bf(-c)) command-line option. It is an error if this directive is used and an already existing implementation header file does not contain tt(#include "filename"). it() bf(%class-name) tt(parser-class-name) nl() Declares the name of the parser class. It defines the name of the bf(C++) class that is generated. If no tt(%class-name) is specified the default class name tt(Parser) is used. It is an error if this directive is used and an already existing parser-class header file does not define tt(class `className') and/or if an already existing implementation header file does not define members of the class tt(`className'). it() bf(%debug) nl() Provide tt(parse) and its support functions with debugging code, showing the actual parsing process on the standard output stream. When included, the debugging output is active by default, but its activity may be controlled using the tt(setDebug(bool on-off)) member. No tt(#ifdef DEBUG) macros are used anymore. To remove existing debugging code re-run bic() without the tt(--debug) option or tt(%debug) declaration. it() bf(%error-verbose) nl() This directive can be specified to dump the parser's state stack to the standard output stream when the parser encounters a syntactic error. The stack dump shows on separate lines a stack index followed by the state stored at the indicated stack element. The first stack element is the stack's top element. it() bf(%expect) tt(number) nl() This directive specifies the exact number of shift/reduce and reduce/reduce conflicts for which no warnings are to be generated. Details of the conflicts are reported in the verbose output file (e.g., tt(grammar.output)). If the number of actually encountered conflicts deviates from `tt(number)', then this directive is ignored. it() bf(%filenames) tt(filename) nl() tt(Filename) is a generic filename that is used for all header files generated by bic(). Options defining specific filenames are also available (which then, in turn, overrule the name specified by this directive). This directive is overruled by the bf(--filenames) (bf(-f)) command-line option. it() bf(%flex) nl() When provided, the scanner member returning the matched text is called as tt(d_scanner.YYText()), and the scanner member returning the next lexical token is called as tt(d_scanner.yylex()). This directive is only interpreted if the tt(%scanner) directive is also provided. it() bf(%implementation-header) tt(filename) nl() tt(Filename) defines the name of the file to contain the implementation header. It defaults to the name of the generated parser class plus the suffix tt(.ih). nl() The implementation header should contain all directives and declarations that are em(only) used by the parser's member functions. It is the only header file that is included by the source file containing tt(parse)'s implementation. User defined implementation of other class members may use the same convention, thus concentrating all directives and declarations that are required for the compilation of other source files belonging to the parser class in one header file.nl() This directive is overruled by the bf(--implementation-header) (bf(-i)) command-line option. it() bf(%include) tt(pathname) nl() This directive is used to switch to tt(pathname) while processing a grammar specification. 
Unless tt(pathname) defines an absolute file-path, tt(pathname) is searched relative to the location of bic()'s main grammar specification file (i.e., the grammar file that was specified as bic()'s command-line option). This directive can be used to split long grammar specification files in shorter, meaningful units. After processing tt(pathname) processing continues beyond the tt(%include pathname) directive. it() bf(%left) tt(terminal ...) nl() Defines the names of symbolic terminal tokens that must be treated as left-associative. I.e., in case of a shift/reduce conflict, a reduction is preferred over a shift. Sequences of tt(%left, %nonassoc, %right) and tt(%token) directives may be used to define the precedence of operators. In expressions, the first used directive defines the tokens having the lowest precedence, the last used defines the tokens having the highest priority. See also tt(%token) below. it() bf(%locationstruct) tt(struct-definition) nl() Defines the organization of the location-struct data type tt(LTYPE__). This struct should be specified analogously to the way the parser's stacktype is defined using tt(%union) (see below). The location struct is named tt(LTYPE__). By default (if neither tt(locationstruct) nor tt(LTYPE__) is specified) the standard location struct (see the next directive) is used: it() bf(%lsp-needed) nl() This directive results in bic() generating a parser using the standard location stack. This stack's default type is: verb( struct LTYPE__ { int timestamp; int first_line; int first_column; int last_line; int last_column; char *text; }; ) Bic() does em(not) provide the elements of the tt(LTYPE__) struct with values. Action blocks of production rules may refer to the location stack element associated with a production element using tt(@) variables, like tt(@1.timestamp, @3.text, @5). The rule's location struct itself may be referred to as either tt(d_loc__) or tt(@@). it() bf(%ltype typename) nl() Specifies a user-defined token location type. If tt(%ltype) is used, tt(typename) should be the name of an alternate (predefined) type (e.g., tt(size_t)). It should not be used if a tt(%locationstruct) specification is defined (see below). Within the parser class, this type is available as the type `tt(LTYPE__)'. All text on the line following tt(%ltype) is used for the tt(typename) specification. It should therefore not contain comment or any other characters that are not part of the actual type definition. it() bf(%namespace) tt(namespace) nl() Define all of the code generated by bic() in the name space tt(namespace). By default no name space is defined. If this directive is used the implementation header is provided with a commented out tt(using namespace) declaration for the specified name space. In addition, the parser and parser base class header files also use the specified namespace to define their include guard directives. It is an error if this directive is used and an already existing parser-class header file and/or implementation header file does not define tt(namespace identifier). it() bf(%negative-dollar-indices) nl() Do not generate warnings when zero- or negative dollar-indices are used in the grammar's action blocks. Zero or negative dollar-indices are commonly used to implement inherited attributes, and should normally be avoided. When used, they can be specified like tt($-1), or like tt($-1), where tt(type) is empty; an tt(STYPE__) tag; or a field-name. 
However, note that in combination with the tt(%polymorphic) directive (see below) only the tt($-i) format can be used. it() bf(%no-lines) nl() By default tt(#line) preprocessor directives are inserted just before action statements in the file containing the parser's tt(parse) function. These directives are suppressed by the tt(%no-lines) directive. it() bf(%nonassoc) tt(terminal ...) nl() Defines the names of symbolic terminal tokens that should be treated as non-associative. I.e., in case of a shift/reduce conflict, a reduction is preferred over a shift. Sequences of tt(%left, %nonassoc, %right) and tt(%token) directives may be used to define the precedence of operators. In expressions, the first used directive defines the tokens having the lowest precedence, the last used defines the tokens having the highest priority. See also tt(%token) below. it() bf(%parsefun-source) tt(filename) nl() tt(Filename) defines the name of the file to contain the parser member function tt(parse). Defaults to tt(parse.cc). This directive is overruled by the bf(--parse-source) (bf(-p)) command-line option. it() bf(%polymorphic) tt(polymorphic-specification(s))nl() Bison's traditional way of handling multiple semantic values is to use a tt(%union) specification (see below). Although tt(%union) is supported by bic(), a polymorphic semantic value class is preferred due to its improved type safety. The tt(%polymorphic) directive defines a polymorphic semantic value class and can be used instead of a tt(%union) specification. Refer to section bf(POLYMORPHIC SEMANTIC VALUES) below or to bic()'s user manual for a detailed description of the specification, characteristics, and use of polymorphic semantic values. it() bf(%prec) tt(token) nl() Overrules the defined precendence of an operator for a particular grammatical rule. A well known application of tt(%prec) is: verb( expression: '-' expression %prec UMINUS { ... } ) Here, the default priority and precedence of the `tt(-)' token as the subtraction operator is overruled by the precedence and priority of the tt(UMINUS) token, which is commonly defined as verb( %right UMINUS ) (see below) following, e.g., the tt('*') and tt('/') operators. it() bf(%print-tokens) nl() The tt(print) directive provides an implementation of the Parser class's tt(print__) function displaying the current token value and the text matched by the lexical scanner as received by the generated tt(parse) function. it() bf(%required-tokens) tt(number)nl() Following a syntactic error, require at least tt(number) successfully processed tokens before another syntactic error can be reported. By default tt(number) is zero. it() bf(%right) tt(terminal ...) nl() Defines the names of symbolic terminal tokens that should be treated as right-associative. I.e., in case of a shift/reduce conflict, a shift is preferred over a reduction. Sequences of tt(%left, %nonassoc, %right) and tt(%token) directives may be used to define the precedence of operators. In expressions, the first used directive defines the tokens having the lowest precedence, the last used defines the tokens having the highest priority. See also tt(%token) below. it() bf(%scanner) tt(pathname)nl() Use tt(pathname) as the path name to the file pre-included in the parser's class header. See the description of the tt(--scanner) option for details about this directive. Similar to the convention adopted for this argument, tt(pathname) by default is surrounded by double quotes. 
However, when the argument is surrounded by pointed brackets tt(#include ) is included. This directive results in the definition of a composed tt(Scanner d_scanner) data member into the generated parser, and in the definition of a tt(int lex()) member, returning tt(d_scanner.lex()). By specifying the tt(%flex) directive the function tt(d_scanner.yylex()) is called. Any other function to call can be specified using the tt(--scanner-token-function) option (or tt(%scanner-token-function) directive). It is an error if this directive is used and an already existing parser class header file does not include tt(`pathname'). it() bf(%scanner-class-name) tt(scannerClassName) nl() Defines the name of the scanner class, declared by the tt(pathname) header file that is specified at the tt(scanner) option or directive. By default the class name tt(Scanner) is used. It is an error if this directive is used and either the tt(scanner) directive was not provided, or the parser class interface in an already existing parser class header file does not declare a scanner class tt(d_scanner) object. it() bf(%scanner-matched-text-function) tt(function-call) nl() The scanner function returning the text that was matched by the lexical scanner after its token function (see below) has returned. A complete function call expression should be provided (including a scanner object, if used). Example: verb( %scanner-matched-text-function myScanner.matchedText() ) By specifying the tt(%flex) directive the function tt(d_scanner.YYText()) is called. If the function call contains white space tt(scanner-token-function) should be surrounded by double quotes. it() bf(%scanner-token-function) tt(function-call) nl() The scanner function returning the next token, called from the generated parser's tt(lex) function. A complete function call expression should be provided (including a scanner object, if used). Example: verb( %scanner-token-function d_scanner.lex() ) If the function call contains white space tt(scanner-token-function) should be surrounded by double quotes. It is an error if this directive is used and the scanner token function is not called from the code in an already existing implementation header. it() bf(%start) tt(non-terminal) nl() The non-terminal tt(non-terminal) should be used as the grammar's start-symbol. If omitted, the first grammatical rule is used as the grammar's starting rule. All syntactically correct sentences must be derivable from this starting rule. it() bf(%stype typename) nl() The type of the semantic value of non-terminal tokens. By default it is tt(int). tt(%stype, %union,) and tt(%polymorphic) are mutually exclusive directives. Within the parser class, the semantic value type is available as the type `tt(STYPE__)'. All text on the line following tt(%stype) is used for the tt(typename) specification. It should therefore not contain comment or any other characters that are not part of the actual type definition. it() bf(%target-directory pathname) nl() tt(Pathname) defines the directory where generated files should be written. By default this is the directory where bic() is called. This directive is overruled by the tt(--target-directory) command-line option. it() bf(%token) tt(terminal ...) nl() Defines the names of symbolic terminal tokens. Sequences of tt(%left, %nonassoc, %right) and tt(%token) directives may be used to define the precedence of operators. 
In expressions, the first used directive defines the tokens having the lowest precedence, the last used defines the tokens having the highest priority. See also tt(%token) below.nl() bf(NOTE:) Symbolic tokens are defined as tt(enum)-values in the parser's base class. The names of symbolic tokens may not be equal to the names of the members and types defined by bic() itself (see the next sections). This requirement is em(not) enforced by bic(), but compilation errors may result if this requirement is violated. it() bf(%type) tt( non-terminal ...) nl() In combination with tt(%polymorphic) or tt(%union): associate the semantic value of a non-terminal symbol with a polymorphic semantic value tag or union field defined by these directives. it() bf(%union) tt(union-definition) nl() Acts identically to the identically named bf(bison) and bf(bison++) declaration. Bic() generates a union, named tt(STYPE__), as its semantic type. it() bf(%weak-tags) nl() This directive is ignored unless the tt(%polymorphic) directive was specified. It results in the declaration of tt(enum Tag__) rather than tt(enum class Tag__). When in doubt, don't use this directive. ) manpagesection(5. POLYMORPHIC SEMANTIC VALUES) The tt(%polymorphic) directive allows bic() to generate a parser using polymorphic semantic values. The various semantic values are specified as pairs, consisting of em(tags) (which are bf(C++) identifiers), and bf(C++) type names. Tags and type names are separated by colons. Multiple tag and type name combinations are separated by semicolons, and an optional semicolon ends the final tag/type specification. Here is an example, defining three semantic values: an tt(int), a tt(std::string) and a tt(std::vector): verb( %polymorphic INT: int; STRING: std::string; VECT: std::vector ) The identifier to the left of the colon is called the em(tag-identifier) (or simply em(tag)), and the type name to the right of the colon is called the em(type-name). Since bic() version 4.12.00 the types no longer have to offer offer default constructors, but if no default constructor is available then the option tt(--no-default-action-return) is required. When polymorphic type-names refer to types not yet declared by the parser's base class header, then these types must be declared in a header file whose location is specified through the tt(%baseclass-preinclude) directive. includefile(../manual/grammar/polymorphictype.yo) The tt(%polymorphic) directive adds the following definitions and declarations to the generated base class header and parser source file (if the tt(%namespace) directive was used then all declared/defined elements are placed inside the name space that is specified by the tt(%namespace) directive): itemization( it() An additional header is included in the parser's base class header: verb( #include ) it() All semantic value type identifiers are collected in a strongly typed `tt(Tag__)' enumeration. E.g., verb( enum class Tag__ { INT, STRING, VECT }; ) it() The name space tt(Meta__) contains almost all of the code implementing polymorphic values. ) The name space tt(Meta__) contains the following elements: itemization( it() A polymorphic base class tt(Base). This class is normally not explicitly referred to by user-defined code. Refer to by bic()'s user manual for a detailed description of this class. 
it() For each of the tag-identifiers specified with the tt(%polymorphic) directive a class template tt(Semantic) is defined, containing a data element of the type-name matching the tt(Tag__) for which tt(Semantic) was derived. The tt(Semantic) classes are normally not explicitly referred to by user-defined code. Refer to by bic()'s user manual for a detailed description of these classes. it() A class tt(SType), derived from tt(std::shared_ptr). This class becomes the parser's semantic value type, offering the following members:nl() tt(Constructors:) default, copy and move constructors;nl() tt(Assignment operators:) copy and move assignment operators declaring tt(SType) or any of the tt(%polymorphic) type-names as their right-hand side operands;nl() tt(Tag__ tag() const), returning tt(Semantic)'s tt(Tag__) value;nl() tt(DataType &get()) returns a reference to the semantic value stored inside tt(Semantic). This member checks for 0-pointers and for tt(Tag__) mismatches between the requested and actual tt(Tag__), in that case replacing the current tt(Semantic) object pointed to by a new tt(Semantic) object of the requested tt(Tag__). tt(DataType &data()) returns a reference to the semantic value stored inside tt(Semantic). This is a (partially) em(unchecking) variant of the corresponing tt(get) member, resulting in a em(Segfault) if used when the tt(shared_ptr) holds a 0-pointer, compilation may fail in case of a mismatch between the requested and actual tt(Tag__). ) Since bic() declares tt(typedef Meta__::SType STYPE__), polymorphic semantic values can be used without referring to the name space tt(Meta__). manpagesection(6. PUBLIC MEMBERS AND -TYPES) includefile(../manual/class/public.yo) manpagesection(7. PRIVATE ENUMS AND -TYPES) includefile(../manual/class/privenum.yo) manpagesection(8. PRIVATE MEMBER FUNCTIONS) includefile(../manual/class/privmembers.yo) manpagesection(9. PRIVATE DATA MEMBERS) The following data members can be used by members of parser classes generated by bic(). All data members are actually protected members inherited from the parser's base class. itemization( it() bf(size_t d_acceptedTokens__):nl() Counts the number of accepted tokens since the start of the tt(parse()) function or since the last detected syntactic error. It is initialized to tt(d_requiredTokens__) to allow an early error to be detected as well. it() bf(bool d_debug__):nl() When the tt(debug) option has been specified, this variable (tt(true) by default) determines whether debug information is actually displayed. it() bf(LTYPE__ d_loc__):nl() The location type value associated with a terminal token. It can be used by, e.g., lexical scanners to pass location information of a matched token to the parser in parallel with a returned token. It is available only when tt(%lsp-needed, %ltype) or tt(%locationstruct) has been defined. nl() Lexical scanners may be offered the facility to assign a value to this variable in parallel with a returned token. In order to allow a scanner access to tt(d_loc__), tt(d_loc__)'s address should be passed to the scanner. This can be realized, for example, by defining a member tt(void setLoc(STYPE__ *)) in the lexical scanner, which is then called from the parser's constructor as follows: verb( d_scanner.setSLoc(&d_loc__); ) Subsequently, the lexical scanner may assign a value to the parser's tt(d_loc__) variable through the pointer to tt(d_loc__) stored inside the lexical scanner. it() bf(LTYPE__ d_lsp__):nl() The location stack pointer. Do not modify. 
it() bf(size_t d_nErrors__):nl() The number of errors counted by tt(parse). It is initialized by the parser's base class initializer, and is updated while tt(parse) executes. When tt(parse) has returned it contains the total number of errors counted by tt(parse). Errors are not counted if suppressed (i.e., if tt(d_acceptedTokens__) is less than tt(d_requiredTokens__)). it() bf(size_t d_nextToken__):nl() A pending token. Do not modify. it() bf(size_t d_requiredTokens__):nl() Defines the minimum number of accepted tokens that the tt(parse) function must have processed before a syntactic error can be generated. it() bf(int d_state__):nl() The current parsing state. Do not modify. it() bf(int d_token__):nl() The current token. Do not modify. it() bf(STYPE__ d_val__):nl() The semantic value of a returned token or non-terminal symbol. With non-terminal tokens it is assigned a value through the action rule's symbol tt($$). Lexical scanners may be offered the facility to assign a semantic value to this variable in parallel with a returned token. In order to allow a scanner access to tt(d_val__), tt(d_val__)'s address should be passed to the scanner. This can be realized, for example, by passing tt(d_val__)'s address to the lexical scanner's constructor. Subsequently, the lexical scanner may assign a value to the parser's tt(d_val__) variable through the pointer to tt(d_val__) stored in a data member of the lexical scanner. Note that in some cases this approach em(must) be used to make available the correct semantic value to the parser. In particular, when a grammar state defines multiple reductions, depending on the next token, the reduction's action only takes place following the retrieval of the next token, thus losing the initially matched token text. If tt(STYPE) is a polymorphic semantic value, direct assignment of values to tt(d_val__) is not possible. In that case em(tagged assignment) must be used+IFDEF(MANUAL)(, as explained in section ref(POLYTYPE))(). it() bf(LTYPE__ d_vsp__):nl() The semantic value stack pointer. Do not modify. ) manpagesection(10. TYPES AND VARIABLES IN THE ANONYMOUS NAMESPACE) includefile(../manual/class/anonymous.yo) manpagesection(11. RESTRICTIONS ON TOKEN NAMES) To avoid collisions with names defined by the parser's (base) class, the following identifiers should not be used as token names: itemization( it() Identifiers ending in two underscores; it() Any of the following identifiers: tt(ABORT, ACCEPT, ERROR, clearin, debug), or tt(setDebug). ) manpagesection(12. OBSOLETE SYMBOLS) All bf(DECLARATIONS) and bf(DEFINE) symbols not listed above but defined in bf(bison++) are obsolete with bic(). In particular, there is no tt(%header{ ... %}) section anymore. Also, all bf(DEFINE) symbols related to member functions are now obsolete. There is no need for these symbols anymore as they can simply be declared in the class header file and defined elsewhere. manpagesection(13. EXAMPLE) Using a fairly worn-out example, we'll construct a simple calculator below. The basic operators as well as parentheses can be used to specify expressions, and each expression should be terminated by a newline. The program terminates when a tt(q) is entered. Empty lines result in a mere prompt. First an associated grammar is constructed. When a syntactic error is encountered all tokens are skipped until then next newline and a simple message is printed using the default tt(error) function. It is assumed that no semantic errors occur (in particular, no divisions by zero). 
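By way of illustration, a session with the completed calculator could look like this (the tt(? ) prompts are written by the grammar's tt(prompt) action shown below; the session is merely illustrative of the intended behavior):
verb(
    ? 3 + 4 * 2
    11
    ? (3 + 4) * 2
    14
    ? q
    Done.
)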
The grammar is decorated with actions performed when the corresponding grammatical production rule is recognized. The grammar itself is rather standard and straightforward, but note the first part of the specification file, containing various other directives, among which the tt(%scanner) directive, resulting in a composed tt(d_scanner) object as well as an implementation of the member function tt(int lex). In this example, a common tt(Scanner) class construction strategy was used: the class tt(Scanner) was derived from the class tt(yyFlexLexer) generated by bf(flex++)(1). The actual process of constructing a class using bf(flex++)(1) is beyond the scope of this man-page, but bf(flex++)(1)'s specification file is mentioned below, to further complete the example. Here is bf(bisonc++)'s input file: verbinclude(calculator/parser/grammar) Next, bic() processes this file. In the process, bic() generates the following files from its skeletons: itemization( it() The parser's base class, which should not be modified by the programmer: verbinclude(calculator/parser/parserbase.h) it() The parser class tt(parser.h) itself. In the grammar specification various member functions are used (e.g., tt(done)) and tt(prompt). These functions are so small that they can very well be implemented inline. Note that tt(done) calls tt(ACCEPT) to terminate further parsing. tt(ACCEPT) and related members (e.g., tt(ABORT)) can be called from any member called by tt(parse). As a consequence, action blocks could contain mere function calls, rather than several statements, thus minimizing the need to rerun bic() when an action is modified. Once bic() had created tt(parser.h) it was augmented with the required additional members, resulting in the following final version: verbinclude(calculator/parser/parser.h) it() To complete the example, the following lexical scanner specification was used: verbinclude(calculator/scanner/lexer) it() Since no member functions other than tt(parse) were defined in separate source files, only tt(parse) includes tt(parser.ih). Since tt(cerr) is used in the grammar's actions, a tt(using namespace std) or comparable statement is required. This was effectuated from tt(parser.ih) Here is the implementation header declaring the standard namespace: verbinclude(calculator/parser/parser.ih) The implementation of the parsing member function tt(parse) is basically irrelevant, since it should not be modified by the programmer. It was written on the file tt(parse.cc). it() Finally, here is the program offering our simple calculator: verbinclude(calculator/calculator.cc) ) manpagesection(14. USING PARSER-CLASS SYMBOLS IN LEXICAL SCANNERS) Note here that although the file tt(parserbase.h), defining the parser class' base-class, rather than the header file tt(parser.h) defining the parser class is included, the lexical scanner may simply return tokens of the class tt(Parser) (e.g., tt(Parser::NUMBER) rather than tt(ParserBase::NUMBER)). In fact, using a simple tt(#define - #undef) pair generated by the bic() respectively at the end of the base class header the file and just before the definition of the parser class itself it is the possible to assume in the lexical scanner that all symbols defined in the the parser's base class are actually defined in the parser class itself. It the should be noted that this feature can only be used to access base class the tt(enum) and types. The actual parser class is not available by the time the the lexical scanner is defined, thus avoiding circular class dependencies. 
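As an illustration, the calculator example of section 13 follows this scheme: the scanner's internal header only includes the parser's base class header, while the lexer's rules simply return tt(Parser) tokens. A condensed sketch (not a complete specification) of the relevant fragments:
verb(
    // scanner.ih (cf. the calculator example):
    #include "scanner.h"
    #include "../parser/parserbase.h"

    // lexer rules may return Parser tokens, although only the base
    // class header is available to the scanner:
    [0-9]+      return Parser::NUMBER;      // i.e., ParserBase::NUMBER
    \n          return Parser::EOLN;
)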
manpagesection(15. FILES) itemization( it() bf(bisonc++base.h): skeleton of the parser's base class; it() bf(bisonc++.h): skeleton of the parser class; it() bf(bisonc++.ih): skeleton of the implementation header; it() bf(bisonc++.cc): skeleton of the member tt(parse); it() bf(bisonc++polymorphic): skeleton of the declarations used by tt(%polymorphic); it() bf(bisonc++polymorphic.inline): skeleton of the inline implementations of the members declared in bf(bisonc++polymorphic). ) manpagesection(16. SEE ALSO) bf(bison)(1), bf(bison++)(1), bf(bison.info) (using texinfo), bf(flex++)(1) Lakos, J. (2001) bf(Large Scale C++ Software Design), Addison Wesley.nl() Aho, A.V., Sethi, R., Ullman, J.D. (1986) bf(Compilers), Addison Wesley. manpagesection(17. BUGS) Parser-class header files (e.g., Parser.h) and parser-class internal header files (e.g., Parser.ih) generated with bisonc++ < 4.02.00 require two hand-modifications when used in combination with bisonc++ >= 4.02.00. See the description of tt(exceptionHandler__) for details. Discontinued options: itemization( it() loption(include-only) it() loption(namespace) ) To avoid collisions with names defined by the parser's (base) class, the following identifiers should not be used as token names: itemization( it() Identifiers ending in two underscores; it() Any of the following identifiers: tt(ABORT, ACCEPT, ERROR, clearin, debug, error), or tt(setDebug). ) When re-using files generated by bic() before version 2.0.0, minor hand-modification might be necessary. The identifiers in the following list (defined in the parser's base class) now have two underscores affixed to them: tt(LTYPE, STYPE) and tt(Tokens). When using classes derived from the generated parser class, the following identifiers are available in such derived classes: tt(DEFAULT_RECOVERY_MODE, ErrorRecovery, Return, UNEXPECTED_TOKEN, d_debug, d_loc, d_lsp, d_nErrors, d_nextToken, d_state, d_token, d_val), and tt(d_vsp). When used in derived classes, they too need two underscores affixed to them. The member function tt(void lookup) (< 1.00) was replaced by tt(int lookup). When regenerating parsers created by early versions of bf(bisonc++) (versions before version 1.00), tt(lookup)'s prototype should be corrected by hand, since bf(bisonc++) will not by itself rewrite the parser class's header file. The em(Semantic) parser, mentioned in bf(bison++)(1), is not implemented in bf(bisonc++)(1). According to bf(bison++)(1) the semantic parser was not available in bf(bison++) either. It is possible that the em(Pure) parser is now available through the tt(--thread-safe) option. manpagesection(18. ABOUT bisonc++) bf(Bisonc++) was based on bf(bison++), originally developed by Alain Coetmeur (coetmeur@icdc.fr), R&D department (RDT), Informatique-CDC, France, who based his work on bf(bison), GNU version 1.21. Bic() version 0.98 and beyond is a complete rewrite of an LALR-1 parser generator, closely following the construction process as described in Aho, Sethi and Ullman's (1986) book bf(Compilers) (i.e., the em(Dragon book)). It uses the same grammar specification as bf(bison) and bf(bison++), and it uses practically the same options and directives as bic() versions earlier than 0.98. Obsolete variables, declarations and macros were removed. manpageauthor() Frank B. Brokken (f.b.brokken@rug.nl). 
bisonc++-4.13.01/documentation/examples/0000755000175000017500000000000012633316117016741 5ustar frankfrankbisonc++-4.13.01/documentation/examples/bison++Example.NEW/0000755000175000017500000000000012633316117022145 5ustar frankfrankbisonc++-4.13.01/documentation/examples/bison++Example.NEW/MyCompiler.cc0000644000175000017500000000033512633316117024535 0ustar frankfrank#include "MyParser.h" #include int main(int argc,char **argv) { MyParser aCompiler; int result = aCompiler.parse(); printf("Parsing result = %s\n", result ? "Error" : "OK"); return 0; }; bisonc++-4.13.01/documentation/examples/bison++Example.NEW/MyParser.ih0000644000175000017500000000056412633316117024236 0ustar frankfrank // Include this file in the sources of the class MyParser. // $insert class.h #include "MyParser.h" // Add below here any includes etc. that are only // required for the compilation of MyParser's sources. // UN-comment the next using-declaration if you want to use // symbols from the namespace std without specifying std:: using namespace std; bisonc++-4.13.01/documentation/examples/bison++Example.NEW/MyParser.h0000644000175000017500000000142712633316117024064 0ustar frankfrank#ifndef MyParser_h_included #define MyParser_h_included // for error()'s inline implementation #include // $insert baseclass #include "MyParserbase.h" // $insert scanner.h #include "MyScanner.h" #undef MyParser class MyParser: public MyParserBase { // $insert scannerobject MyScanner d_scanner; public: int parse(); private: void error(char const *msg) { std::cerr << msg << '\n'; } // $insert lex int lex() { return d_scanner.yylex(); } void print() // d_token, d_loc {} // support functions for parse(): void executeAction(int ruleNr); void errorRecovery(); int lookup(); void nextToken(); }; #endif bisonc++-4.13.01/documentation/examples/bison++Example.NEW/MyParser.y0000644000175000017500000000161212633316117024101 0ustar frankfrank%class-name MyParser %scanner MyScanner.h %lsp-needed %lines %union { int num; bool statement; } %token INTEGER %token BOOLEAN %type exp result integer %type bexp boolean %start result %left OR %left AND %left PLUS MINUS %left NOT %left LPARA RPARA %% result: exp {cout << "Result = " << $1 << endl;} | bexp {cout << "Result = " << $1 << endl;} exp : exp PLUS exp {$$ = $1 + $3;} | integer {$$ = $1;} | MINUS exp { $$ = -$2;} | exp MINUS exp {$$ = $1 - $3;} bexp : boolean {$$ = $1;} | bexp AND bexp { $$ = $1 && $3;} | bexp OR bexp { $$ = $1 || $3;} | NOT bexp {$$ = !$2;} | LPARA bexp RPARA {$$ = $2;} integer: INTEGER { $$ = atoi(d_scanner.YYText()); } boolean: BOOLEAN { $$ = string("TRUE") == d_scanner.YYText(); } bisonc++-4.13.01/documentation/examples/bison++Example.NEW/MyScanner.h0000644000175000017500000000042212633316117024213 0ustar frankfrank#ifndef _INCLUDED_MYSCANNER_H_ #define _INCLUDED_MYSCANNER_H_ #if ! defined(_SKIP_YYFLEXLEXER_) && ! defined(_SYSINC_FLEXLEXER_H_) #include #define _SYSINC_FLEXLEXER_H_ #endif class MyScanner: public yyFlexLexer { public: int yylex(); }; #endif bisonc++-4.13.01/documentation/examples/bison++Example.NEW/README0000644000175000017500000000151312633316117023025 0ustar frankfrankThis is the same example as given in ../bison++Example.ORG, but this time it's adapted to bisonc++ In the original example there's a `FLEXFIX' allowing the lexer access to the semantic value defined in the parser. I don't think the scanner should be required to know about what the parser does. What if we really would like to let the scanner communicate its value to the parser ? 
Then derive a class from yyFlexLexer, and let that object's constructor know about the Parser's lval datamember, by defining a constructor that is informed about that data member (e.g., by passing &d_lval in its constructor) In the example I've modified the lexical scanner's setup and I modified the grammar's actions slightly so that now the parser handles the lexer's text, rather than the lexer assuming that the parser will do something with its text. bisonc++-4.13.01/documentation/examples/bison++Example.NEW/FlexLexer.h0000644000175000017500000001313312633316117024215 0ustar frankfrank// $Header$ // FlexLexer.h -- define interfaces for lexical analyzer classes generated // by flex // Copyright (c) 1993 The Regents of the University of California. // All rights reserved. // // This code is derived from software contributed to Berkeley by // Kent Williams and Tom Epperly. // // Redistribution and use in source and binary forms with or without // modification are permitted provided that: (1) source distributions retain // this entire copyright notice and comment, and (2) distributions including // binaries display the following acknowledgement: ``This product includes // software developed by the University of California, Berkeley and its // contributors'' in the documentation or other materials provided with the // distribution and in all advertising materials mentioning features or use // of this software. Neither the name of the University nor the names of // its contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED // WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF // MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. // This file defines FlexLexer, an abstract class which specifies the // external interface provided to flex C++ lexer objects, and yyFlexLexer, // which defines a particular lexer class. // // If you want to create multiple lexer classes, you use the -P flag // to rename each yyFlexLexer to some other xxFlexLexer. You then // include in your other sources once per lexer class: // // #undef yyFlexLexer // #define yyFlexLexer xxFlexLexer // #include // // #undef yyFlexLexer // #define yyFlexLexer zzFlexLexer // #include // ... #ifndef __FLEX_LEXER_H // Never included before - need to define base class. #define __FLEX_LEXER_H #include #include "MyParser.h" extern "C++" { struct yy_buffer_state; typedef int yy_state_type; class FlexLexer { public: virtual ~FlexLexer() { } const char* YYText() { return yytext; } int YYLeng() { return yyleng; } virtual void yy_switch_to_buffer( struct yy_buffer_state* new_buffer ) = 0; virtual struct yy_buffer_state* yy_create_buffer( istream* s, int size ) = 0; virtual void yy_delete_buffer( struct yy_buffer_state* b ) = 0; virtual void yyrestart( istream* s ) = 0; virtual int yylex(FLEXFIX) = 0; // Call yylex with new input/output sources. int yylex(FLEXFIX, istream* new_in, ostream* new_out = 0 ) { switch_streams( new_in, new_out ); return yylex(FLEXFIX2); } // Switch to new input/output streams. A nil stream pointer // indicates "keep the current one". 
virtual void switch_streams( istream* new_in = 0, ostream* new_out = 0 ) = 0; int lineno() const { return yylineno; } int debug() const { return yy_flex_debug; } void set_debug( int flag ) { yy_flex_debug = flag; } protected: char* yytext; int yyleng; int yylineno; // only maintained if you use %option yylineno int yy_flex_debug; // only has effect with -d or "%option debug" }; } #endif #if defined(yyFlexLexer) || ! defined(yyFlexLexerOnce) // Either this is the first time through (yyFlexLexerOnce not defined), // or this is a repeated include to define a different flavor of // yyFlexLexer, as discussed in the flex man page. #define yyFlexLexerOnce class yyFlexLexer : public FlexLexer { public: // arg_yyin and arg_yyout default to the cin and cout, but we // only make that assignment when initializing in yylex(). yyFlexLexer( istream* arg_yyin = 0, ostream* arg_yyout = 0 ); virtual ~yyFlexLexer(); void yy_switch_to_buffer( struct yy_buffer_state* new_buffer ); struct yy_buffer_state* yy_create_buffer( istream* s, int size ); void yy_delete_buffer( struct yy_buffer_state* b ); void yyrestart( istream* s ); virtual int yylex(FLEXFIX); virtual void switch_streams( istream* new_in, ostream* new_out ); protected: virtual int LexerInput( char* buf, int max_size ); virtual void LexerOutput( const char* buf, int size ); virtual void LexerError( const char* msg ); void yyunput( int c, char* buf_ptr ); int yyinput(); void yy_load_buffer_state(); void yy_init_buffer( struct yy_buffer_state* b, istream* s ); void yy_flush_buffer( struct yy_buffer_state* b ); int yy_start_stack_ptr; int yy_start_stack_depth; int* yy_start_stack; void yy_push_state( int new_state ); void yy_pop_state(); int yy_top_state(); yy_state_type yy_get_previous_state(); yy_state_type yy_try_NUL_trans( yy_state_type current_state ); int yy_get_next_buffer(); istream* yyin; // input source for default LexerInput ostream* yyout; // output sink for default LexerOutput struct yy_buffer_state* yy_current_buffer; // yy_hold_char holds the character lost when yytext is formed. char yy_hold_char; // Number of characters read into yy_ch_buf. int yy_n_chars; // Points to current character in buffer. char* yy_c_buf_p; int yy_init; // whether we need to initialize int yy_start; // start state number // Flag which is used to allow yywrap()'s to do buffer switches // instead of setting up a fresh yyin. A bit of a hack ... int yy_did_buffer_switch_on_eof; // The following are not always needed, but may be depending // on use of certain flex features (like REJECT or yymore()). yy_state_type yy_last_accepting_state; char* yy_last_accepting_cpos; yy_state_type* yy_state_buf; yy_state_type* yy_state_ptr; char* yy_full_match; int* yy_full_state; int yy_full_lp; int yy_lp; int yy_looking_for_trail_begin; int yy_more_flag; int yy_more_len; int yy_more_offset; int yy_prev_more_offset; }; #endif bisonc++-4.13.01/documentation/examples/bison++Example.NEW/MyScanner.l0000644000175000017500000000130112633316117024214 0ustar frankfrank%{ // see README for modifications I made to the scanner and grammar. 
#define _SKIP_YYFLEXLEXER_ #include "MyScanner.h" #include "MyParserbase.h" %} %option yyclass="MyScanner" outfile="MyScanner.cc" %option c++ 8bit warn noyywrap yylineno digit [0-9] integer [1-9]{digit}* ws [ \t\n]+ %% {ws} { /* no action */ } {integer} {return MyParser::INTEGER; } "AND" {return(MyParser::AND);} "OR" {return(MyParser::OR);} "NOT" {return(MyParser::NOT);} "TRUE" {return MyParser::BOOLEAN; } "FALSE" {return MyParser::BOOLEAN; } "-" {return(MyParser::MINUS);} "+" {return(MyParser::PLUS);} "(" {return(MyParser::LPARA);} ")" {return(MyParser::RPARA);} bisonc++-4.13.01/documentation/examples/bison++Example.NEW/make0000755000175000017500000000026212633316117023010 0ustar frankfrank#!/bin/bash if [ "$1" == "clean" ] ; then rm -f parse.cc MyParserbase.h a.out else bisonc++ -V MyParser.y || exit 1 flex -o MyScanner.cc MyScanner.l g++ -Wall *.cc fi bisonc++-4.13.01/documentation/examples/bison++Example.NEW/Makefile0000644000175000017500000000113312633316117023603 0ustar frankfrank .SUFFIXES : .cc .y .l $(SUFFIXES) .cc.o : g++ -g -I . -I$(CENTERCCLIBDIR)/incl -c $*.cc .y.cc : bison++ -d -o $*.cc -h $*.h $*.y .l.cc : flex++ -o$*.cc $*.l .y.h : bison++ -d -o $*.cc -h $*.h $*.y .l.h : flex++ -o$*.cc $*.l # COMPILER SAMPLE MyCompiler.o : MyCompiler.cc MyParser.h MyScanner.h MyParser.o : MyParser.cc MyParser.h MyScanner.o : MyScanner.cc MyScanner.h MyParser.h MyParser.cc : MyParser.y MyScanner.cc : MyScanner.l MyParser.h : MyParser.y MyScanner.h : MyScanner.l compiler : MyCompiler.o MyParser.o MyScanner.o g++ -o $@ MyCompiler.o MyParser.o MyScanner.o bisonc++-4.13.01/documentation/examples/bison++Example.NEW/test.txt0000644000175000017500000000001012633316117023654 0ustar frankfrank1+2+3+4 bisonc++-4.13.01/documentation/examples/bison++Example.NEW/test2.txt0000644000175000017500000000003112633316117023741 0ustar frankfrank(TRUE OR FALSE) AND FALSEbisonc++-4.13.01/documentation/examples/README0000644000175000017500000000201112633316117017613 0ustar frankfrankMost examples available before version 1.00 are now in the ../regression directory, where they are most easily used by runnning the `run' script. This directory is kept for historical reasons only. bison++Example.NEW: The original example provided by Alain Coetmeur. compile by running `make' The program can be run by having it read test.txt or test2.txt from its stdin. It has no other purpose (as far as I can see). bison++Example.ORG/ The original (unmodified) example provided by Alain Coetmeur. Included as a collector's item only. Doesn't work as-is. input: removed. The example serves no additional purpose. calcgen: removed. The example serves no additional purpose. 
annotations: moved to `regression' calculator: moved to `regression', also available in `man/calculator' conflict: moved to `regression' error: moved to `regression' location: moved to `regression' simpledemo: moved to `regression' as the example `simplecalc' bisonc++-4.13.01/documentation/examples/bison++Example.ORG/0000755000175000017500000000000012633316117022143 5ustar frankfrankbisonc++-4.13.01/documentation/examples/bison++Example.ORG/MyCompiler.cc0000644000175000017500000000116012633316117024530 0ustar frankfrank#include "MyParser.h" #define YY_DECL int yyFlexLexer::yylex(YY_MyParser_STYPE *val) #include "FlexLexer.h" #include class MyCompiler : public MyParser { private: yyFlexLexer theScanner; public: virtual int yylex(); virtual void yyerror(char *m); MyCompiler(){;} }; int MyCompiler::yylex() { return theScanner.yylex(&yylval); } void MyCompiler::yyerror(char *m) { fprintf(stderr,"%d: %s at token '%s'\n",yylloc.first_line, m,yylloc.text); } int main(int argc,char **argv) { MyCompiler aCompiler; int result=aCompiler.yyparse(); printf("Resultat Parsing=%s\n",result?"Erreur":"OK"); return 0; }; bisonc++-4.13.01/documentation/examples/bison++Example.ORG/MyParser.y0000644000175000017500000000216512633316117024103 0ustar frankfrank%{ #define YY_MyParser_STYPE yy_MyParser_stype %} %name MyParser %define LSP_NEEDED %define ERROR_BODY =0 %define LEX_BODY =0 %header{ #include #include using namespace std; #define YY_DECL int yyFlexLexer::yylex(YY_MyParser_STYPE *val) #ifndef FLEXFIX #define FLEXFIX YY_MyParser_STYPE *val #define FLEXFIX2 val #endif %} %union { int num; bool statement; } %token PLUS INTEGER MINUS AND OR NOT LPARA RPARA %token BOOLEAN %type exp result %type bexp %start result %left OR %left AND %left PLUS MINUS %left NOT %left LPARA RPARA %% result : exp {cout << "Result = " << $1 << endl;} | bexp {cout << "Result = " << $1 << endl;} exp : exp PLUS exp {$$ = $1 + $3;} | INTEGER {$$ = $1;} | MINUS exp { $$ = -$2;} | exp MINUS exp {$$ = $1 - $3;} bexp : BOOLEAN {$$ = $1;} | bexp AND bexp { $$ = $1 && $3;} | bexp OR bexp { $$ = $1 || $3;} | NOT bexp {$$ = !$2;} | LPARA bexp RPARA {$$ = $2} %% /* -------------- body section -------------- */ // feel free to add your own C/C++ code here bisonc++-4.13.01/documentation/examples/bison++Example.ORG/FlexLexer.h0000644000175000017500000001313312633316117024213 0ustar frankfrank// $Header$ // FlexLexer.h -- define interfaces for lexical analyzer classes generated // by flex // Copyright (c) 1993 The Regents of the University of California. // All rights reserved. // // This code is derived from software contributed to Berkeley by // Kent Williams and Tom Epperly. // // Redistribution and use in source and binary forms with or without // modification are permitted provided that: (1) source distributions retain // this entire copyright notice and comment, and (2) distributions including // binaries display the following acknowledgement: ``This product includes // software developed by the University of California, Berkeley and its // contributors'' in the documentation or other materials provided with the // distribution and in all advertising materials mentioning features or use // of this software. Neither the name of the University nor the names of // its contributors may be used to endorse or promote products derived from // this software without specific prior written permission. 
// THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED // WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF // MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. // This file defines FlexLexer, an abstract class which specifies the // external interface provided to flex C++ lexer objects, and yyFlexLexer, // which defines a particular lexer class. // // If you want to create multiple lexer classes, you use the -P flag // to rename each yyFlexLexer to some other xxFlexLexer. You then // include in your other sources once per lexer class: // // #undef yyFlexLexer // #define yyFlexLexer xxFlexLexer // #include // // #undef yyFlexLexer // #define yyFlexLexer zzFlexLexer // #include // ... #ifndef __FLEX_LEXER_H // Never included before - need to define base class. #define __FLEX_LEXER_H #include #include "MyParser.h" extern "C++" { struct yy_buffer_state; typedef int yy_state_type; class FlexLexer { public: virtual ~FlexLexer() { } const char* YYText() { return yytext; } int YYLeng() { return yyleng; } virtual void yy_switch_to_buffer( struct yy_buffer_state* new_buffer ) = 0; virtual struct yy_buffer_state* yy_create_buffer( istream* s, int size ) = 0; virtual void yy_delete_buffer( struct yy_buffer_state* b ) = 0; virtual void yyrestart( istream* s ) = 0; virtual int yylex(FLEXFIX) = 0; // Call yylex with new input/output sources. int yylex(FLEXFIX, istream* new_in, ostream* new_out = 0 ) { switch_streams( new_in, new_out ); return yylex(FLEXFIX2); } // Switch to new input/output streams. A nil stream pointer // indicates "keep the current one". virtual void switch_streams( istream* new_in = 0, ostream* new_out = 0 ) = 0; int lineno() const { return yylineno; } int debug() const { return yy_flex_debug; } void set_debug( int flag ) { yy_flex_debug = flag; } protected: char* yytext; int yyleng; int yylineno; // only maintained if you use %option yylineno int yy_flex_debug; // only has effect with -d or "%option debug" }; } #endif #if defined(yyFlexLexer) || ! defined(yyFlexLexerOnce) // Either this is the first time through (yyFlexLexerOnce not defined), // or this is a repeated include to define a different flavor of // yyFlexLexer, as discussed in the flex man page. #define yyFlexLexerOnce class yyFlexLexer : public FlexLexer { public: // arg_yyin and arg_yyout default to the cin and cout, but we // only make that assignment when initializing in yylex(). 
yyFlexLexer( istream* arg_yyin = 0, ostream* arg_yyout = 0 ); virtual ~yyFlexLexer(); void yy_switch_to_buffer( struct yy_buffer_state* new_buffer ); struct yy_buffer_state* yy_create_buffer( istream* s, int size ); void yy_delete_buffer( struct yy_buffer_state* b ); void yyrestart( istream* s ); virtual int yylex(FLEXFIX); virtual void switch_streams( istream* new_in, ostream* new_out ); protected: virtual int LexerInput( char* buf, int max_size ); virtual void LexerOutput( const char* buf, int size ); virtual void LexerError( const char* msg ); void yyunput( int c, char* buf_ptr ); int yyinput(); void yy_load_buffer_state(); void yy_init_buffer( struct yy_buffer_state* b, istream* s ); void yy_flush_buffer( struct yy_buffer_state* b ); int yy_start_stack_ptr; int yy_start_stack_depth; int* yy_start_stack; void yy_push_state( int new_state ); void yy_pop_state(); int yy_top_state(); yy_state_type yy_get_previous_state(); yy_state_type yy_try_NUL_trans( yy_state_type current_state ); int yy_get_next_buffer(); istream* yyin; // input source for default LexerInput ostream* yyout; // output sink for default LexerOutput struct yy_buffer_state* yy_current_buffer; // yy_hold_char holds the character lost when yytext is formed. char yy_hold_char; // Number of characters read into yy_ch_buf. int yy_n_chars; // Points to current character in buffer. char* yy_c_buf_p; int yy_init; // whether we need to initialize int yy_start; // start state number // Flag which is used to allow yywrap()'s to do buffer switches // instead of setting up a fresh yyin. A bit of a hack ... int yy_did_buffer_switch_on_eof; // The following are not always needed, but may be depending // on use of certain flex features (like REJECT or yymore()). yy_state_type yy_last_accepting_state; char* yy_last_accepting_cpos; yy_state_type* yy_state_buf; yy_state_type* yy_state_ptr; char* yy_full_match; int* yy_full_state; int yy_full_lp; int yy_lp; int yy_looking_for_trail_begin; int yy_more_flag; int yy_more_len; int yy_more_offset; int yy_prev_more_offset; }; #endif bisonc++-4.13.01/documentation/examples/bison++Example.ORG/MyScanner.l0000644000175000017500000000136412633316117024223 0ustar frankfrank%{ #ifndef FLEXFIX #define FLEXFIX YY_MyParser_STYPE *val #define FLEXFIX2 val #endif #include "MyParser.h" // Make sure the flexer can communicate with bison++ //using return values %} digit [0-9] integer [1-9]{digit}* ws [ \t\n]+ %% {ws} { /* no action */ } {integer} { val->num = atoi(yytext); return MyParser::INTEGER; } "AND" {return(MyParser::AND);} "OR" {return(MyParser::OR);} "NOT" {return(MyParser::NOT);} "TRUE" {val->statement=true; return MyParser::BOOLEAN; } "FALSE" {val->statement=false; return MyParser::BOOLEAN; } "-" {return(MyParser::MINUS);} "+" {return(MyParser::PLUS);} "(" {return(MyParser::LPARA);} ")" {return(MyParser::RPARA);} <> { yyterminate();} %% int yywrap() { return(1); }bisonc++-4.13.01/documentation/examples/bison++Example.ORG/Makefile0000644000175000017500000000113312633316117023601 0ustar frankfrank .SUFFIXES : .cc .y .l $(SUFFIXES) .cc.o : g++ -g -I . 
-I$(CENTERCCLIBDIR)/incl -c $*.cc .y.cc : bison++ -d -o $*.cc -h $*.h $*.y .l.cc : flex++ -o$*.cc $*.l .y.h : bison++ -d -o $*.cc -h $*.h $*.y .l.h : flex++ -o$*.cc $*.l # COMPILER SAMPLE MyCompiler.o : MyCompiler.cc MyParser.h MyScanner.h MyParser.o : MyParser.cc MyParser.h MyScanner.o : MyScanner.cc MyScanner.h MyParser.h MyParser.cc : MyParser.y MyScanner.cc : MyScanner.l MyParser.h : MyParser.y MyScanner.h : MyScanner.l compiler : MyCompiler.o MyParser.o MyScanner.o g++ -o $@ MyCompiler.o MyParser.o MyScanner.o bisonc++-4.13.01/documentation/examples/bison++Example.ORG/test.txt0000644000175000017500000000001012633316117023652 0ustar frankfrank1+2+3+4 bisonc++-4.13.01/documentation/examples/bison++Example.ORG/test2.txt0000644000175000017500000000003112633316117023737 0ustar frankfrank(TRUE OR FALSE) AND FALSEbisonc++-4.13.01/documentation/manual/0000755000175000017500000000000012634617636016413 5ustar frankfrankbisonc++-4.13.01/documentation/manual/concepts/0000755000175000017500000000000012633316117020216 5ustar frankfrankbisonc++-4.13.01/documentation/manual/concepts/stages.yo0000644000175000017500000000402612633316117022057 0ustar frankfrankThe actual language-design process using b(), from grammar specification to a working compiler or interpreter, has these parts: itemization( it() Formally specify the grammar in a form recognized by b() (see chapter ref(GRAMMARFILES)). For each grammatical rule in the language, describe the action that is to be taken when an instance of that rule is recognized. The action is described by a sequence of bf(C++) statements. it() Run b() on the grammar to produce the parser class and parsing member function. it() Write a lexical scanner to process input and pass tokens to the parser. The lexical scanner may be written by hand in bf(C++) (see section ref(LEX)) (it could also be produced using, e.g., bf(flex)(1), but the use of bf(flex)(1) is not discussed in this manual). it() All the parser's members (except for the member tt(parse())) and its support functions must be implemented by the programmer. Of course, additional member functions should also be declared in the parser class' header. At the very least the member tt(int lex()) calling the lexecal scanner to produce the next available token em(must) be implemented (although a standardized implementation can also be generated by b()). The member tt(lex()) is called by tt(parse()) (support functions) to obtain the next available token. The member function tt(void error(char const *msg)) may also be re-implemented by the programmer, but a basic in-line implementation is provided by default. The member function tt(error()) is called when tt(parse()) detects (syntactic) errors. it() The parser can now be used in a program. A very simple example would be: verb( int main() { Parser parser; return parser.parse(); } ) ) Once the software has been implemented, the following steps are required to create the final program: itemization( it() Compile the parsing function generated by b(), as well as any other source files you have implemented. it() Link the object files to produce the final program. ) bisonc++-4.13.01/documentation/manual/concepts/reentrant.yo0000644000175000017500000000172212633316117022573 0ustar frankfrankA computer program or routine is described as reentrant if it can be safely called recursively and concurrently from multiple processes. 
To be reentrant, a function must hold no static data, must not return a pointer to static data, must work only on the data provided to it by the caller, and must not call non-reentrant functions (Source: lurl(http://en.wikipedia.org/wiki/Reentrant)). Currently, b() generates a parsing member which may or may not be reentrant, depending on whether or not the option link(--thread-safe)(OPTIONS) is specified. The source file generated by b() containing the parsing member function not only contains this function, but also various tables (e.g., state transition tables) defined in the anonymous name space. When the option tt(--thread-safe) is provided, these tables are tt(const) tables: their elements are not changed by the parsing function and so the parsing function, as it only manipulates its own local data, becomes reentrant. bisonc++-4.13.01/documentation/manual/concepts/intro.yo0000644000175000017500000000033712633316117021725 0ustar frankfrankThis chapter introduces many of the basic concepts without which the details of b() do not make sense. If you do not already know how to use b(), bison++ or bison, it is advised to start by reading this chapter carefully. bisonc++-4.13.01/documentation/manual/concepts/formal.yo0000644000175000017500000000323312633316117022050 0ustar frankfrankA formal grammar is a mathematical construct. To define the language for B(), you must write a file expressing the grammar in b() syntax: a em(B() grammar file). See chapter ref(GRAMMARFILES). A nonterminal symbol in the formal grammar is represented in b() input as an identifier, like an identifier in bf(C++). By convention, it should be in lower case, such as tt(expr), tt(stmt) or tt(declaration). The b() representation for a terminal symbol is also called a token type. Token types as well can be represented as bf(C++)-like identifiers. By convention, these identifiers should be upper case to distinguish them from nonterminals: for example, tt(INTEGER), tt(IDENTIFIER), tt(IF) or tt(RETURN). A terminal symbol that stands for a particular keyword in the language should be named after that keyword converted to upper case. The terminal symbol tt(error) is reserved for error recovery. See section ref(SYMBOLS), which also describes the current restrictions on the names of terminal symbols. A terminal symbol can also be represented as a character literal, just like a bf(C++) character constant. You should do this whenever a token is just a single character (parenthesis, plus-sign, etc.): use that same character in a literal as the terminal symbol for that token. The grammar rules also have an expression in b() syntax. For example, here is the b() rule for a bf(C++) return statement. The semicolon in quotes is a literal character token, representing part of the bf(C++) syntax for the statement; the naked semicolon, and the colon, are b() punctuation used in every rule. verb( stmt: RETURN expr ';' ; ) See section ref(RULES). bisonc++-4.13.01/documentation/manual/concepts/actions.yo0000644000175000017500000000210212633316117022222 0ustar frankfrankIn order to be useful, a program must do more than parse input; it must also produce some output based on the input. In a b() grammar, a grammar rule can have an action made up of bf(C++) statements. Each time the parser recognizes a match for that rule, the action is executed. See section ref(ACTIONS). Most of the time, the purpose of an action is to compute the semantic value of the whole construct from the semantic values of its parts. 
For example, suppose we have a rule which says an expression can be the sum of two expressions. When the parser recognizes such a sum, each of the subexpressions has a semantic value which describes how it was built up. The action for this rule should create a similar sort of value for the newly recognized larger expression. For example, here is a rule that says an expression can be the sum of two subexpressions: verb( expr: expr '+' expr { $$ = $1 + $3; } ; ) The action says how to produce the semantic value of the sum expression from the values of the two subexpressions. bisonc++-4.13.01/documentation/manual/concepts/semantic.yo0000644000175000017500000000363012633316117022374 0ustar frankfrankA formal grammar selects tokens only by their classifications: for example, if a rule mentions the terminal symbol `integer constant', it means that em(any) integer constant is grammatically valid in that position. The precise value of the constant is irrelevant to how to parse the input: if `tt(x + 4)' is grammatical then `tt(x + 1)' or `tt(x + 3989)' is equally grammatical. But the precise value is very important for what the input means once it is parsed. A compiler is useless if it fails to distinguish between 4, 1 and 3989 as constants in the program! Therefore, each token in a b() grammar has both a token type and a em(semantic value). See section ref(DEFSEM) for details. The token type is a terminal symbol defined in the grammar, such as tt(INTEGER), tt(IDENTIFIER) or 'tt(,)'. It tells everything you need to know to decide where the token may validly appear and how to group it with other tokens. The grammar rules know nothing about tokens except their types. The semantic value has all the rest of the information about the meaning of the token, such as the value of an integer, or the name of an identifier. (A token such as 'tt(,)' which is just punctuation doesn't need to have any semantic value.) For example, an input token might be classified as token type tt(INTEGER) and have the semantic value 4. Another input token might have the same token type tt(INTEGER) but value 3989. When a grammar rule says that tt(INTEGER) is allowed, either of these tokens is acceptable because each is an tt(INTEGER). When the parser accepts the token, it keeps track of the token's semantic value. Each grouping can also have a semantic value as well as its nonterminal symbol. For example, in a calculator, an expression typically has a semantic value that is a number. In a compiler for a programming language, an expression typically has a semantic value that is a tree structure describing the meaning of the expression. bisonc++-4.13.01/documentation/manual/concepts/output.yo0000644000175000017500000000654512633316117022141 0ustar frankfrankWhen you run b(), you give it a b() grammar file as input. The output, however, defines a bf(C++) em(class), in which several em(members) have already been defined. Therefore, the em(output) of b() consists of em(header files) and a bf(C++) source file, defining a member (tt(parse())) that parses the language described by the grammar. The class and its implementation is called a b() em(parser class). Keep in mind that the B() utility and the b() parser class are two distinct pieces of software: the b() utility is a program whose output is the b() parser class that becomes part of your program. 
More specifically, b() generates the following files from a b() grammar file: itemization( it() A em(baseclass header), which can be included by em(lexical scanners) (see below), primarily defining the em(lexical tokens) that the parser expects the lexical scanner to return; it() A em(class header), defining the b() parser class interface; it() An em(implementation header), which is used to declare all entities which are em(only) used by b()'s parser class em(implementation) (and not required by the remaining parts of your program); it() The em(parsing member), actually performing the parsing of a provided input according to the rules of the b() grammar that you, as b()'s user, defined. ) The job of the b() parsing member is to group tokens into groupings according to the grammar rules--for example, to build identifiers and operators into expressions. As it does this, it runs the actions for the grammar rules it uses. In bf(C++) the tokens should be produced by an object called the em(lexical analyzer) or em(lexical scanner) that you must supply in some fashion (such as by writing it in bf(C++)). The b() parsing member requests the next token from the lexical analyzer each time it wants a new token. The parser itself doesn't know what is "inside" the tokens (though their semantic values may reflect this). Typically the lexical analyzer makes the tokens by parsing characters of text, but b() does not depend on this. See section ref(LEX). The b() parsing function is bf(C++) code defining a member function named tt(parse()) which implements that grammar. Neither this parsing function nor the parser object for which it is called makes a complete bf(C++) program: you must supply some additional details. One `detail' to be supplied is the lexical analyzer. The parser class itself declares several more members which must be defined when used. One of these additional members is an error-reporting function which the parser calls to report an error. Simple, yet sensible, default implementations for these additional members may be generated by b(). Having constructed a parser class and a lexical scanner class, em(objects) of these classes must be defined in a complete bf(C++) program. Usually such objects are defined in a function called tt(main()); you have to provide this, and arrange for it to call the parser's tt(parse()) function, or the parser will never run. See chapter ref(INTERFACE). Note that, different from the conventions used by Bison and Bison++, b() no longer imposes any special naming convention. In particular, there is em(no) need to begin all variable and function names used in the b() parser with `yy' or `YY' anymore. However, some name restrictions on symbolic tokens exist. See section ref(IMPROPER) for details. bisonc++-4.13.01/documentation/manual/concepts/layout.yo0000644000175000017500000000330612633316117022106 0ustar frankfrankThe input file for the b() utility is a b() grammar file. Different from Bison++ and Bison grammar files, a b() grammar file consists of only two sections. The general form of a b() grammar file is as follows: verb( Bisonc++ directives %% Grammar rules ) Readers familiar with Bison may note that there is no em(C declaration section) and no section to define em(Additional C code). With b() these sections are superfluous since, due to the fact that a b() parser is a class, all additional code required for the parser's implementation can be incorporated into the parser class itself.
Also, bf(C++) classes normally only require declarations that can be defined in the classes' header files, so the `additional C code' section, too, could be omitted from the b() grammar file. The `%%' is punctuation that appears in every b() grammar file to separate the two sections. The b() directives section is used to declare the names of the terminal and nonterminal symbols, and may also describe operator precedence and the data types of semantic values of various symbols. Furthermore, this section is also used to specify b() directives. These b() directives are used to define, e.g., the name of the generated parser class and a namespace in which the parser class will be defined. The grammar rules define how to construct each nonterminal symbol from its parts. One special directive is available that may be used in the directives section and in the grammar rules section. This directive is tt(%include), allowing you to split long grammar specification files into smaller, more comprehensible and accessible chunks. The tt(%include) directive is discussed in more detail in section ref(INCLUDE). bisonc++-4.13.01/documentation/manual/concepts/languages.yo0000644000175000017500000001131512633316117022536 0ustar frankfrankIn order for b() to parse a language, it must be described by a em(context-free grammar). This means that you specify one or more em(syntactic groupings) and give rules for constructing them from their parts. For example, in the C language, one kind of grouping is called an `expression'. One rule for making an expression might be, "An expression can be made of a minus sign and another expression". Another would be, "An expression can be an integer". As you can see, rules are often recursive, but there must be at least one rule which leads out of the recursion. The most common formal system for presenting such rules for humans to read is em(Backus-Naur Form) or `BNF', which was developed in order to specify the language Algol 60. Any grammar expressed in BNF is a context-free grammar. The input to b() is essentially machine-readable BNF. Not all context-free languages can be handled by b(), only those that are LALR(1). In brief, this means that it must be possible to tell how to parse any portion of an input string with just a single token of look-ahead. Strictly speaking, that is a description of an LR(1) grammar, and LALR(1) involves additional restrictions that are hard to explain simply; but it is rare in actual practice to find an LR(1) grammar that fails to be LALR(1). See section ref(MYSTERIOUS) for more information on this. In the formal grammatical rules for a language, each kind of syntactic unit or grouping is named by a em(symbol). Those which are built by grouping smaller constructs according to grammatical rules are called em(nonterminal symbols); those which can't be subdivided are called em(terminal symbols) or em(token types). We call a piece of input corresponding to a single terminal symbol a token, and a piece corresponding to a single nonterminal symbol a em(grouping). We can use the bf(C++) language as an example of what symbols, terminal and nonterminal, mean. The tokens of bf(C++) are identifiers, constants (numeric and string), and the various keywords, arithmetic operators and punctuation marks.
So the terminal symbols of a grammar for bf(C++) include `identifier', `number', `string', plus one symbol for each keyword, operator or punctuation mark: `if', `return', `const', `static', `int', `char', `plus-sign', `open-brace', `close-brace', `comma' and many more. (These tokens can be subdivided into characters, but that is a matter of lexicography, not grammar.) Here is a simple bf(C++) function subdivided into tokens: verb( int square(int x) // keyword `int', identifier, open-paren, // keyword `int', identifier, close-paren { // open-brace return x * x; // keyword `return', identifier, // asterisk, identifier, semicolon } // close-brace ) The syntactic groupings of bf(C++) include the expression, the statement, the declaration, and the function definition. These are represented in the grammar of bf(C++) by nonterminal symbols `expression', `statement', `declaration' and `function definition'. The full grammar uses dozens of additional language constructs, each with its own nonterminal symbol, in order to express the meanings of these four. The example above is a function definition; it contains one declaration, and one statement. In the statement, each `x' is an expression and so is `x * x'. Each nonterminal symbol must have grammatical rules showing how it is made out of simpler constructs. For example, one kind of bf(C++) statement is the return statement; this would be described with a grammar rule which reads informally as follows: quote( A `statement' can be made of a `return' keyword, an `expression' and a `semicolon'. ) There would be many other rules for `statement', one for each kind of statement in bf(C++). One nonterminal symbol must be distinguished as the special one which defines a complete utterance in the language. It is called the em(start symbol). In a compiler, this means a complete input program. In the bf(C++) language, the nonterminal symbol `sequence of definitions and declarations' plays this role. For example, `1 + 2' is a valid bf(C++) expression--a valid part of a bf(C++) program--but it is not valid as an em(entire) bf(C++) program. In the context-free grammar of bf(C++), this follows from the fact that `expression' is not the start symbol. The b() parser reads a sequence of tokens as its input, and groups the tokens using the grammar rules. If the input is valid, the end result is that the entire token sequence reduces to a single grouping whose symbol is the grammar's start symbol. If we use a grammar for bf(C++), the entire input must be a `sequence of definitions and declarations'. If not, the parser reports a syntax error. bisonc++-4.13.01/documentation/manual/class/0000755000175000017500000000000012633316117017505 5ustar frankfrankbisonc++-4.13.01/documentation/manual/class/features.yo0000644000175000017500000000326612633316117021701 0ustar frankfrankHere is an overview of special syntactic constructions that may be used inside action blocks: itemization( itt($$): This acts like a variable that contains the semantic value for the grouping made by the current rule. See section ref(ACTIONS). itt($n): This acts like a variable that contains the semantic value for the n-th component of the current rule. See section ref(ACTIONS). itt($<typealt>$): This is like tt($$), but it specifies the alternative tt(typealt) in the union specified by the tt(%union) directive. See sections ref(SEMANTICTYPES) and ref(MORETYPES). itt($<typealt>n): This is like tt($n), but it specifies an alternative tt(typealt) in the union specified by the tt(%union) directive.
See sections ref(SEMANTICTYPES) and ref(MORETYPES). itt(@n): This acts like a structure variable containing information on the line numbers and column numbers of the nth component of the current rule. The default structure is defined like this (see section ref(LSPNEEDED)): verb( struct LTYPE__ { int timestamp; int first_line; int first_column; int last_line; int last_column; char *text; }; ) Thus, to get the starting line number of the third component, you would use tt(@3.first_line). In order for the members of this structure to contain valid information, you must make sure the lexical scanner supplies this information about each token. If you need only certain fields, then the lexical scanner only has to provide those fields. Be advised that using this or corresponding (custom-defined, see sections ref(LTYPE) and ref(LOCSTRUCT)) may slow down the parsing process noticeably. )bisonc++-4.13.01/documentation/manual/class/lex.yo0000644000175000017500000000444312633316117020653 0ustar frankfrank The tt(int lex()) private member function is called by the tt(parse()) member to obtain the next lexical token. By default it is not implemented, but the tt(%scanner) directive (see section ref(SCANNER)) may be used to pre-implement a standard interface to a lexical analyzer. The tt(lex()) member function interfaces to the lexical scanner, and it is expected to return the next token produced by the lexical scanner. This token may either be a plain character or it may be one of the symbolic tokens defined in the bf(Parser::Tokens) enumeration. Any zero or negative token value is interpreted as `end of input', causing tt(parse()) to return. The tt(lex()) member function may be implemented in various ways: itemization( it() By default, if the tt(--scanner) option or tt(%scanner) directive is provided bic() assumes that it should interface to the scanner generated by bf(flexc++)(1). In this case, the scanner token function is called as verb( d_scanner.lex() ) and the scanner's matched text function is called as verb( d_scanner.matched() ) it() tt(lex()) may itself implement a lexical analyzer (a em(scanner)). This may actually be a useful option when the input offered to the program using b()'s parser class is not overly complex. This approach was used when implementing the earlier examples (see sections ref(RPNLEX) and ref(MFLEX)). it() tt(lex()) may call a external function or member function of class implementing a lexical scanner, and return the information offered by this external function. When using a class, an object of that class could also be defined as additional data member of the parser (see the next alternative). This approach can be followed when generating a lexical scanner from a lexical scanner generating tool like bf(lex)(1) or bf(flex++)(1). The latter program allows its users to generate a scanner em(class). it() To interface bic() to code generated by bf(flex)(1), the tt(--flex) option or tt(%flex) directive can be used in combination with the tt(--scanner) directive or tt(%scanner) option. In this case the scanner token function is called as verb( d_scanner.yylex() ) and the scanner's matched text function is called as verb( d_scanner.YYText() ) ) bisonc++-4.13.01/documentation/manual/class/intro.yo0000644000175000017500000000406312633316117021214 0ustar frankfrankB() generates a bf(C++) em(class), rather than a em(function) like Bison. B()'s class is a plain bf(C++) class and not a fairly complex macro-based class like the one generated by Bison++. 
The bf(C++) class generated by b() does not have (need) em(virtual) members. Its essential member (the member tt(parse())) is generated from the grammar specification, so the software engineer will hardly ever feel the need to override that function. All but a few of the remaining predefined members have very clear definitions and meanings as well, making it unlikely that they should ever require overriding. It is likely that members like tt(lex()) and/or tt(error()) need dedicated definitions with different parsers generated by Bison++; but then again: while defining the grammar the definition of the associated support members is a natural extension of defining the grammar, and can be realized em(in parallel) with defining the grammar, in practice not requiring any virtual members. By not defining (requiring) virtual members the parser's class organization is simplified, and calling non-virtual members is just a trifle faster than calling these member functions as virtual functions. In this chapter all available members and features of the generated parser class are discussed. Having read this chapter you should be able to use the generated parser class in your program (using its public members) and to use its facilities in the actions defined for the various production rules and/or use these facilities in additional class members that you might have defined yourself. In the remainder of this chapter the class's public members are discussed first, followed by the class's private members. While constructing the grammar all private members are available in the action parts of the grammatical rules. Furthermore, any member (and so not just the action blocks) may generate errors (thus initiating error recovery procedures) and may flag the (un)successful parsing of the information given to the parser (terminating the parsing function tt(parse())). bisonc++-4.13.01/documentation/manual/class/privdata.yo0000644000175000017500000001234712633316117021677 0ustar frankfrank The following private members can be used by members of parser classes generated by b(). All data members are actually protected members inherited from the parser's base class. itemization( it() bf(size_t d_acceptedTokens__):nl() Counts the number of accepted tokens since the start of the tt(parse()) function or since the last detected syntactic error. It is initialized to tt(d_requiredTokens__) to allow an early error to be detected as well. it() bf(bool d_debug__):nl() When the bf(debug) option has been specified, this variable (bf(true) by default) determines whether debug information is actually displayed. it() bf(LTYPE__ d_loc__):nl() The location type value associated with a terminal token. It can be used by, e.g., lexical scanners to pass location information of a matched token to the parser in parallel with a returned token. It is available only when bf(%lsp-needed, %ltype) or bf(%locationstruct) has been defined. nl() Lexical scanners may be offered the facility to assign a value to this variable in parallel with a returned token. In order to allow a scanner access to bf(d_loc__), bf(d_loc__)'s address should be passed to the scanner.
This can be realized, for example, by defining a member bf(void setLoc(LTYPE__ *loc)) in the lexical scanner, which is then called from the parser's constructor as follows: verb( d_scanner.setLoc(&d_loc__); ) Subsequently, the lexical scanner may assign a value to the parser's bf(d_loc__) variable through the pointer to bf(d_loc__) stored inside the lexical scanner. it() bf(LTYPE__ d_lsp__):nl() The location stack pointer. Used internally. it() bf(size_t d_nErrors__):nl() The number of errors counted by tt(parse()). It is initialized by the parser's base class initializer, and is updated while tt(parse()) executes. When tt(parse()) has returned it contains the total number of errors counted by tt(parse()). Errors are not counted if suppressed (i.e., if tt(d_acceptedTokens__) is less than tt(d_requiredTokens__)). it() bf(size_t d_nextToken__):nl() A pending token. Do not modify. it() bf(size_t d_requiredTokens__):nl() Defines the minimum number of accepted tokens that the tt(parse()) function must have processed before a syntactic error can be generated. it() bf(int d_state__):nl() The current parsing state. Do not modify. it() bf(int d_token__):nl() The current token. Do not modify. it() label(DVAL) bf(STYPE__ d_val__):nl() The semantic value of a returned token or non-terminal symbol. With non-terminal tokens it is assigned a value through the action rule's symbol bf($$). Lexical scanners may be offered the facility to assign a semantic value to this variable in parallel with a returned token. In order to allow a scanner access to bf(d_val__), bf(d_val__)'s address should be passed to the scanner. This can be realized, for example, by defining a member bf(void setSval(STYPE__ *)) in the lexical scanner, which is then called from the parser's constructor as follows: verb( d_scanner.setSval(&d_val__); ) Subsequently, the lexical scanner may assign a value to the parser's bf(d_val__) variable through the pointer to bf(d_val__) stored inside the lexical scanner. Note that in some cases this approach em(must) be used to make the correct semantic value available to the parser. In particular, when a grammar state defines multiple reductions, depending on the next token, the reduction's action only takes place following the retrieval of the next token, thus losing the initially matched token text. As an example, consider the following little grammar: verb( expr: name | ident '(' ')' | NR ; name: IDENT ; ident: IDENT ; ) Having recognized tt(IDENT) two reductions are possible: to tt(name) and to tt(ident). The reduction to tt(ident) is appropriate when the next token is tt(CHAR(40)), otherwise the reduction to tt(name) is performed. So, the parser asks for the next token, thereby destroying the text matching tt(IDENT) before tt(ident) or tt(name)'s actions are able to save the text themselves. To ensure the availability of the text matching tt(IDENT) in situations like these, the em(scanner) must assign the proper semantic value when it recognizes a token. Consequently the parser's tt(d_val__) data member must be made available to the scanner. If tt(STYPE__) is a polymorphic semantic value, direct assignment of values to tt(d_val__) is not possible. In that case em(tagged assignment) must be used+IFDEF(MANUAL)(, as explained in section ref(POLYTYPE))(). it() bf(LTYPE__ d_vsp__):nl() The semantic value stack pointer. Do not modify. 
) bisonc++-4.13.01/documentation/manual/class/privenum.yo0000644000175000017500000000321212633316117021721 0ustar frankfrank The following enumerations and types can be used by members of parser classes generated by bic(). They are actually protected members inherited from the parser's base class. itemization( it() bf(Base::ErrorRecovery__):nl() This enumeration defines two values: verb( DEFAULT_RECOVERY_MODE__, UNEXPECTED_TOKEN__ ) The tt(DEFAULT_RECOVERY_MODE__) terminates the parsing process. The non-default recovery procedure is available once an tt(error) token is used in a production rule. When the parsing process throws tt(UNEXPECTED_TOKEN__) the recovery procedure is started (i.e., it is started whenever a syntactic error is encountered or tt(ERROR()) is called). The recovery procedure consists of (1) looking for the first state on the state-stack having an error-production, followed by (2) handling all state transitions that are possible without retrieving a terminal token. Then, in the state requiring a terminal token and starting with the initial unexpected token, (3) all subsequent terminal tokens are ignored until a token is retrieved which is a continuation token in that state. If the error recovery procedure fails (i.e., if no acceptable token is ever encountered) error recovery falls back to the default recovery mode (i.e., the parsing process is terminated). it() bf(Base::Return__):nl() This enumeration defines two values: verb( PARSE_ACCEPT = 0, PARSE_ABORT = 1 ) (which are of course the tt(parse) function's return values). ) bisonc++-4.13.01/documentation/manual/class/public.yo0000644000175000017500000000551612633316117021341 0ustar frankfrank The following public members and types are available to users of the parser classes generated by bic() (parser class-name prefixes (e.g., tt(Parser::)) are silently implied): itemization( it() bf(LTYPE__):nl() The parser's location type (user-definable). Available only when either tt(%lsp-needed, %ltype) or tt(%locationstruct) has been declared. it() bf(STYPE__):nl() The parser's stack-type (user-definable), defaults to bf(int). it() bf(Tokens__):nl() The enumeration type of all the symbolic tokens defined in the grammar file (i.e., bic()'s input file). The scanner should be prepared to return these symbolic tokens. Note that, since the symbolic tokens are defined in the parser's class and not in the scanner's class, the lexical scanner must prefix the parser's class name to the symbolic token names when they are returned. E.g., tt(return Parser::IDENT) should be used rather than tt(return IDENT). it() bf(int parse()):nl() The parser's parsing member function. It returns 0 when parsing was successfully completed; 1 if errors were encountered while parsing the input. it() bf(void setDebug(bool mode)):nl() This member can be used to activate or deactivate the debug code compiled into the parsing function. It is always defined, but it only has an effect if debug code was actually compiled into the parsing function, i.e., if the tt(%debug) directive or tt(--debug) option was specified. When debugging code has been compiled into the parsing function it is active by default (cf. tt(d_debug__)); debug output is suppressed by calling tt(setDebug(false)) and re-activated by calling tt(setDebug(true)). 
) When the tt(%polymorphic) directive is used: itemization( it() bf(Meta__):nl() Templates and classes that are required for implementing the polymorphic semantic values are all declared in the tt(Meta__) namespace. The tt(Meta__) namespace itself is nested under the namespace that may have been declared by the tt(%namespace) directive. it() bf(Tag__):nl() The (strongly typed) tt(enum class Tag__) contains all the tag-identifiers specified by the tt(%polymorphic) directive. It is declared outside of the Parser's class, but within the namespace that may have been declared by the tt(%namespace) directive. ) bisonc++-4.13.01/documentation/manual/class/anonymous.yo0000644000175000017500000000676112633316117022110 0ustar frankfrank In the file defining the tt(parse) function the following types and variables are defined in the anonymous namespace. These are mentioned here for the sake of completeness, and are not normally accessible to other parts of the parser. itemization( it() bf(char const author[]):nl() Defining the name and e-mail address of Bic()'s author. it() bf(ReservedTokens):nl() This enumeration defines some token values used internally by the parsing functions. They are: verb( PARSE_ACCEPT = 0, _UNDETERMINED_ = -2, _EOF_ = -1, _error_ = 256, ) These tokens are used by the parser to determine whether another token should be requested from the lexical scanner, and to handle error-conditions. it() bf(StateType):nl() This enumeration defines several more token values used internally by the parsing functions. They are: verb( NORMAL, ERR_ITEM, REQ_TOKEN, ERR_REQ, // ERR_ITEM | REQ_TOKEN DEF_RED, // state having default reduction ERR_DEF, // ERR_ITEM | DEF_RED REQ_DEF, // REQ_TOKEN | DEF_RED ERR_REQ_DEF // ERR_ITEM | REQ_TOKEN | DEF_RED ) These tokens are used by the parser to define the types of the various states of the analyzed grammar. it() bf(PI__) (Production Info):nl() This tt(struct) provides information about production rules. It has two fields: tt(d_nonTerm) is the identification number of the production's non-terminal, tt(d_size) represents the number of elements of the production rule. it() bf(static PI__ s_productionInfo):nl() Used internally by the parsing function. it() bf(SR__) (Shift-Reduce Info):nl() This tt(struct) provides the shift/reduce information for the various grammatic states. tt(SR__) values are collected in arrays, one array per grammatic state. These arrays, named tt(s_<nr>), where tt(<nr>) is a state number, are defined in the anonymous namespace as well. The tt(SR__) elements consist of two unions, defining fields that are applicable to, respectively, the first, intermediate and the last array elements.nl() The first element of each array consists of (1st field) a tt(StateType) and (2nd field) the index of the last array element; intermediate elements consist of (1st field) a symbol value and (2nd field) (if negative) the production rule number reducing to the indicated symbol value or (if positive) the next state when the symbol given in the 1st field is the current token; the last element of each array consists of (1st field) a placeholder for the current token and (2nd field) the (negative) rule number to reduce to by default or the (positive) number of an error-state to go to when an erroneous token has been retrieved. If the 2nd field is zero, no error or default action has been defined for the state, and error-recovery is attempted.
it() bf(STACK_EXPANSION):nl() An enumeration value specifying the number of additional elements that are added to the state- and semantic value stacks when full. it() bf(static SR__ s_<nr>[]):nl() Here, tt(<nr>) is a numerical value representing a state number. Used internally by the parsing function. it() bf(static SR__ *s_state[]):nl() Used internally by the parsing function. ) bisonc++-4.13.01/documentation/manual/class/privmembers.yo0000644000175000017500000001652012633316117022415 0ustar frankfrank The following members can be used by members of parser classes generated by bic(). When prefixed by tt(Base::) they are actually protected members inherited from the parser's base class. Members for which the phrase ``Used internally'' is used should not be called by user-defined code. itemization( it() bf(Base::ParserBase()):nl() Used internally. it() bf(void Base::ABORT() const throw(Return__)):nl() This member can be called from any member function (called from any of the parser's action blocks) to indicate a failure while parsing thus terminating the parsing function with an error value 1. Note that this offers a marked extension and improvement of the macro tt(YYABORT) defined by bf(bison++) in that tt(YYABORT) could not be called from outside of the parsing member function. it() bf(void Base::ACCEPT() const throw(Return__)):nl() This member can be called from any member function (called from any of the parser's action blocks) to indicate successful parsing and thus terminating the parsing function. Note that this offers a marked extension and improvement of the macro tt(YYACCEPT) defined by bf(bison++) in that tt(YYACCEPT) could not be called from outside of the parsing member function. it() bf(void Base::clearin+nop()()):nl() This member replaces bf(bison)(++)'s macro tt(yyclearin) and causes bf(bisonc++) to request another token from its tt(lex+nop()()) member, even if the current token has not yet been processed. It is a useful member when the parser should be reset to its initial state, e.g., between successive calls of tt(parse). In this situation the scanner must probably be reloaded with new information as well. it() bf(bool Base::debug() const):nl() This member returns the current value of the debug variable. it() bf(void Base::ERROR+nop()() const throw(ErrorRecovery__)):nl() This member can be called from any member function (called from any of the parser's action blocks) to generate an error, and results in the parser executing its error recovery code. Note that this offers a marked extension and improvement of the macro tt(YYERROR) defined by bf(bison++) in that tt(YYERROR) could not be called from outside of the parsing member function. it() bf(void error(char const *msg)):nl() By default implemented inline in the tt(parser.ih) internal header file, it writes a simple message to the standard error stream. It is called when a syntactic error is encountered, and its default implementation may safely be altered. it() bf(void errorRecovery__+nop()()):nl() Used internally. it() bf(void Base::errorVerbose__+nop()()):nl() Used internally. it() bf(void exceptionHandler__(std::exception const &exc)):nl() This member's default implementation is provided inline in the tt(parser.ih) internal header file. It consists of a mere tt(throw) statement, rethrowing a caught exception. The tt(parse) member function's body essentially consists of a tt(while) statement, in which the next token is obtained via the parser's tt(lex) member.
This token is then processed according to the current state of the parsing process. This may result in executing actions over which the parsing process has no control and which may result in exceptions being thrown. Such exceptions do not necessarily have to terminate the parsing process: they could be thrown by code, linked to the parser, that simply checks for semantic errors (like divisions by zero), throwing exceptions if such errors are observed. The member tt(exceptionHandler__) receives and may handle such exceptions without necessarily ending the parsing process. It receives any tt(std::exception) thrown by the parser's actions, as though the action block itself was surrounded by a tt(try ... catch) statement. It is of course still possible to use an explicit tt(try ... catch) statement within action blocks. However, tt(exceptionHandler__) can be used to factor out code that is common to various action blocks. The next example shows an explicit implementation of tt(exceptionHandler__): any tt(std::exception) thrown by the parser's action blocks is caught, showing the exception's message, and increasing the parser's error count. After this, parsing continues as if no exception had been thrown: verb( void Parser::exceptionHandler__(std::exception const &exc) { std::cout << exc.what() << '\n'; ++d_nErrors__; } ) bf(Note:) Parser-class header files (e.g., Parser.h) and parser-class internal header files (e.g., Parser.ih) generated with bic() < 4.02.00 require two hand-modifications when using bic() >= 4.02.00: In Parser.h, just below the declaration verb( void print__(); ) add: verb( void exceptionHandler__(std::exception const &exc); ) In Parser.ih, assuming the name of the generated class is `Parser', add the following member definition (if a namespace is used: within the namespace's scope): verb( inline void Parser::exceptionHandler__(std::exception const &exc) { throw; // re-implement to handle exceptions thrown by actions } ) it() bf(void executeAction+nop()(int)):nl() Used internally. it() bf(int lex()):nl() By default implemented inline in the tt(parser.ih) internal header file, it can be pre-implemented by bic() using the tt(scanner) option or directive (see above); alternatively it em(must) be implemented by the programmer. It interfaces to the lexical scanner, and should return the next token produced by the lexical scanner, either as a plain character or as one of the symbolic tokens defined in the tt(Parser::Tokens__) enumeration. Zero or negative token values are interpreted as `end of input'. it() bf(int lookup+nop()(bool)):nl() Used internally. it() bf(void nextToken+nop()()):nl() Used internally. it() bf(void Base::pop__+nop()()):nl() Used internally. it() bf(void Base::popToken__+nop()()):nl() Used internally. it() bf(void print__()):nl() Used internally. it() bf(void print()):nl() By default implemented inline in the tt(parser.ih) internal header file, this member calls tt(print__) to display the last received token and corresponding matched text. The tt(print__) member is only implemented if the tt(--print-tokens) option or tt(%print-tokens) directive was used when the parsing function was generated. Calling tt(print__) from tt(print) is unconditional, but can easily be controlled by the using program, by defining, e.g., a command-line option. it() bf(void Base::push__+nop()()):nl() Used internally. it() bf(void Base::pushToken__+nop()()):nl() Used internally. it() bf(void Base::reduce__+nop()()):nl() Used internally.
it() bf(void Base::symbol__+nop()()):nl() Used internally. it() bf(void Base::top__+nop()()):nl() Used internally. ) bisonc++-4.13.01/documentation/manual/class.yo0000644000175000017500000000110612633316117020054 0ustar frankfrankincludefile(class/intro.yo) sect(Public Members and Types) includefile(class/public.yo) sect(Protected Enumerations and Types) includefile(class/privenum.yo) lsect(PRIVMEM)(Non-public Member Functions) includefile(class/privmembers.yo) lsubsect(LEX)(`lex()': Interfacing the Lexical Analyzer) includefile(class/lex.yo) lsect(PRIVDATA)(Protected Data Members) includefile(class/privdata.yo) sect(Types and Variables in the Anonymous Namespace) includefile(class/anonymous.yo) lsect(SPECIAL)(Summary of Special Constructions for Actions) includefile(class/features.yo) bisonc++-4.13.01/documentation/manual/examples.yo0000644000175000017500000000322012633316117020564 0ustar frankfrankincludefile(examples/intro) lsect(RPN)(rpn: a Reverse Polish Notation Calculator) includefile(examples/rpn.yo) subsect(Declarations for the `rpn' calculator) includefile(examples/rpndecl) subsect(Grammar rules for the `rpn' calculator) includefile(examples/rpngram) subsubsect(Explanation of `input') includefile(examples/rpninput) subsubsect(Explanation of `line') includefile(examples/rpnline) subsubsect(Explanation of `expr') includefile(examples/rpnexpr) lsubsect(RPNLEX)(The Lexical Scanner used by `rpn') includefile(examples/rpnlex) subsect(The Controlling Function `main()') includefile(examples/rpnmain) subsect(The error reporting member `error()') includefile(examples/rpnerror) subsect(Running Bisonc++ to generate the Parser) includefile(examples/rpnparser) subsect(Constructing and running `rpn') includefile(examples/rpnconstruct) lsect(CALC)(`calc': an Infix Notation Calculator) includefile(examples/calc.yo) lsect(ERROR)(Basic Error Recovery) includefile(examples/errors.yo) lsect(MFCALC)(`mfcalc': a Multi-Function Calculator) includefile(examples/mfcalc.yo) subsect(The Declaration Section for `mfcalc') includefile(examples/mfdecl.yo) subsect(Grammar Rules for `mfcalc') includefile(examples/mfgrammar.yo) subsect(The `mfcalc' Symbol- and Function Tables) includefile(examples/mftables.yo) lsubsect(MFLEX)(The revised `lex()' member) includefile(examples/mflex.yo) subsect(Constructing `mfcalc') includefile(examples/mfbuild.yo) lsect(EXERCISES)(Exercises) includefile(examples/exercises.yo) bisonc++-4.13.01/documentation/manual/bisonc++.yo0000644000175000017500000000235512633316117020361 0ustar frankfrankNOUSERMACRO(LALR LR parse lex error main name setLoc setDebug clearin debug throw print ParserBase ACCEPT ABORT setVal setSval yylex) DEFINEMACRO(b)(0)(bf(bisonc++)) DEFINEMACRO(B)(0)(bf(Bisonc++)) includefile(preamble) includefile(../../release.yo) htmlbodyopt(text)(#27408B) htmlbodyopt(bgcolor)(#FFFAF0) mailto(Frank B. Brokken: MYEMAIL) COMMENT(includefile(abstract)) IFDEF(html)( affiliation(center(AFFILIATION)) report(center(B() (Version _CurVers_) User Guide)) (center(Frank B. Brokken)) (center(_CurYrs_)) )( affiliation(AFFILIATION) report(B() (Version _CurVers_) User Guide) (Frank B. 
Brokken) (_CurYrs_) ) chapter(Introduction) includefile(introduction.yo) chapter(Conditions for Using Bisonc++) includefile(conditions.yo) chapter(Bisonc++ concepts) includefile(concepts.yo) chapter(Examples) includefile(examples.yo) lchapter(GRAMMARFILES)(Bisonc++ grammar files) includefile(grammar.yo) lchapter(INTERFACE)(The Generated Parser Class' Members) includefile(class.yo) lchapter(ALGORITHM)(The Bisonc++ Parser Algorithm) includefile(algorithm.yo) lchapter(RECOVERY)(Error Recovery) includefile(error.yo) lchapter(INVOKING)(Invoking Bisonc++) includefile(invoking.yo) bisonc++-4.13.01/documentation/manual/conditions.yo0000644000175000017500000000015612633316117021124 0ustar frankfrankincludefile(conditions/intro.yo) sect(The `GNU General Public License' (GPL)) includefile(conditions/gpl.yo) bisonc++-4.13.01/documentation/manual/error/0000755000175000017500000000000012633316117017531 5ustar frankfrankbisonc++-4.13.01/documentation/manual/error/recovery.yo0000644000175000017500000001253312633316117021744 0ustar frankfrankB() implements a simple error recovery mechanism. When the tt(lookup()) function cannot find an action for the current token in the current state it throws an tt(UNEXPECTED_TOKEN__) exception. This exception is caught by the parsing function, calling the tt(errorRecovery()) member function. By default, this member function terminates the parsing process. The non-default recovery procedure is available once an tt(error) token is used in a production rule. When the parsing process throws bf(UNEXPECTED_TOKEN__) the recovery procedure is started (i.e., it is started whenever a syntactic error is encountered or tt(ERROR()) is called). The recovery procedure consists of itemization( it() looking for the first state on the state-stack having an error-production, followed by: it() handling all state transitions that are possible without retrieving a terminal token; it() then, in the state requiring a terminal token and starting with the initial unexpected token, ignoring all subsequent terminal tokens until a token is retrieved which is a continuation token in that state. ) If the error recovery procedure fails (i.e., if no acceptable token is ever encountered) error recovery falls back to the default recovery mode (i.e., the parsing process is terminated). Not all syntactic errors are always reported: the option link(--required-tokens)(REQUIRED) can be used to specify the minimum number of tokens that must have been successfully processed before another syntactic error is reported (and counted). The option link(--error-verbose)(ERRORVERBOSE) may be specified to obtain the contents of the state stack when a syntactic error is reported. The example grammar may be provided with an tt(error) production rule: verb( %token NR %left '+' %% start: start expr | // empty ; expr: error | NR | expr '+' expr ; ) The resulting grammar has one additional state (handling the error production) and one state in which the tt(ERR_ITEM) flag has been set. When an error is encountered, this state obtains tokens until a token having a valid continuation is received, after which normal processing continues.
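The recovery procedure can also be started explicitly from within an action block by calling tt(ERROR()) (cf. section ref(PRIVMEM)). The following fragment is merely an illustrative sketch (it is em(not) part of the example grammar or the demo program used below): an action detecting a semantic problem reports it through tt(error()) and then forces the recovery procedure to start: verb(
    expr:
        expr '/' expr
        {
            if ($3 == 0)            // semantic check: refuse
            {                       // division by zero
                error("division by zero");
                ERROR();            // start the error recovery procedure
            }
            else
                $$ = $1 / $3;
        }
)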
The following output from the tt(parse()) function, generated by b() using the tt(--debug) option illustrates error recovery for the above grammar, entering the input verb( a 3 + a ) The program defining the parser and calling the parsing member was: verbinclude(../algorithm/example/demo.cc) For this example the following implementation of the tt(lex()) member was used: verb( int Parser::lex() { std::string word; std::cin >> word; if (std::cin.eof()) return 0; if (isdigit(word[0])) return NR; return word[0]; } ) subsubsect(Error recovery --debug output) verb( parse(): Parsing starts push(state 0) == lookup(0, `_UNDETERMINED_'): default reduction by rule 2 executeAction(): of rule 2 ... ... action of rule 2 completed pop(0) from stack having size 1 pop(): next state: 0, token: `start' reduce(): by rule 2 to N-terminal `start' == lookup(0, `start'): shift 1 (`start' processed) push(state 1) == a Syntax error nextToken(): using `a' (97) lookup(1, `a' (97)): Not found. Start error recovery. errorRecovery(): 1 error(s) so far. State = 1 errorRecovery(): state 1 is an ERROR state lookup(1, `_error_'): shift 3 (`_error_' processed) push(state 3) lookup(3, `a' (97)): default reduction by rule 3 pop(1) from stack having size 3 pop(): next state: 1, token: `expr' reduce(): by rule 3 to N-terminal `expr' errorRecovery() REDUCE by rule 3, token = `expr' lookup(1, `expr'): shift 2 (`expr' processed) push(state 2) errorRecovery() SHIFT state 2, continue with `a' (97) lookup(2, `a' (97)): default reduction by rule 1 pop(2) from stack having size 3 pop(): next state: 0, token: `start' reduce(): by rule 1 to N-terminal `start' errorRecovery() REDUCE by rule 1, token = `start' lookup(0, `start'): shift 1 (`start' processed) push(state 1) errorRecovery() SHIFT state 1, continue with `a' (97) lookup(1, `a' (97)): Not found. Continue error recovery. 3+a nextToken(): using `NR' lookup(1, `NR'): shift 4 (`NR' processed) push(state 4) errorRecovery() SHIFT state 4, continue with `_UNDETERMINED_' errorRecovery() COMPLETED: next state 4, no token yet == lookup(4, `_UNDETERMINED_'): default reduction by rule 4 executeAction(): of rule 4 ... ... action of rule 4 completed pop(1) from stack having size 3 pop(): next state: 1, token: `expr' reduce(): by rule 4 to N-terminal `expr' == lookup(1, `expr'): shift 2 (`expr' processed) push(state 2) == [input terminated here] nextToken(): using `_EOF_' lookup(2, `_EOF_'): default reduction by rule 1 executeAction(): of rule 1 ... ... action of rule 1 completed pop(2) from stack having size 3 pop(): next state: 0, token: `start' reduce(): by rule 1 to N-terminal `start' == lookup(0, `start'): shift 1 (`start' processed) push(state 1) == lookup(1, `_EOF_'): ACCEPT ACCEPT(): Parsing successful parse(): returns 0 ) bisonc++-4.13.01/documentation/manual/error/syntactical.yo0000644000175000017500000001135412633316117022424 0ustar frankfrank In a simple interactive command parser where each input is one line, it may be sufficient to allow tt(parse()) to return tt(PARSE_ABORT) on error and have the caller ignore the rest of the input line when that happens (and then call tt(parse()) again). But this is inadequate for a compiler, because it forgets all the syntactic context leading up to the error. A syntactic error deep within a function in the compiler input should not cause the compiler to treat the following line like the beginning of a source file. It is possible to specify how to recover from a syntactic error by writing rules recognizing the special token tt(error). 
This is a terminal symbol that is always defined (it must em(not) be declared) and is reserved for error handling. The b() parser generates an tt(error) token whenever a syntactic error is detected; if a rule was provided recognizing this token in the current context, parsing can continue. For example: verb( statements: // empty | statements '\n' | statements expression '\n' | statements error '\n' ) The fourth rule in this example says that an error followed by a newline makes a valid addition to any tt(statements). What happens if a syntactic error occurs in the middle of an tt(expression)? The error recovery rule, interpreted strictly, applies to the precise sequence of a tt(statements), an error and a newline. If an error occurs in the middle of an tt(expression), there will probably be some additional tokens and subexpressions on the parser's stack after the last tt(statements), and there will be tokens waiting to be read before the next newline. So the rule is not applicable in the ordinary way. b(), however, can force the situation to fit the rule, by em(discarding) part of the semantic context and part of the input. When a (syntactic) error occurs the parsing algorithm tries to recover from the error in the following way: First it discards states from the stack until it encounters a state in which the tt(error) token is acceptable (meaning that the subexpressions already parsed are discarded, back to the last complete tt(statements)). At this point the error token is shifted. Then, if the available look-ahead token is not acceptable to be shifted next, the parser continues to read tokens and to discard them until it finds a token which em(is) acceptable. I.e., a token which em(can) follow an tt(error) token in the current state. In this example, b() reads and discards input until the next newline is read so that the fourth rule can apply. The choice of error rules in the grammar is a choice of strategies for error recovery. A simple and useful strategy is simply to skip the rest of the current input line or current statement if an error is detected: verb( statement: error ';' // on error, skip until ';' is read ) Another useful recovery strategy is to recover to the matching close-delimiter of an opening-delimiter that has already been parsed. Otherwise the close-delimiter probably appears to be unmatched, generating another, spurious error message: verb( primary: '(' expression ')' | '(' error ')' | ... ; ) Error recovery strategies are necessarily guesses. When they guess wrong, one syntactic error often leads to another. In the above example, the error recovery rule guesses that an error is caused by bad input within one statement. Suppose that instead a spurious semicolon is inserted in the middle of a valid statement. After the error recovery rule recovers from the first error, another syntactic error will immediately be found, since the text following the spurious semicolon is also an invalid statement. To prevent an outpouring of error messages, the parser may be configured in such a way that no error messages are generated for another syntactic error that happens shortly after the first. E.g., only after several consecutive input tokens have been successfully shifted are error messages again generated. In b()'s parsers this is configured using the link(--required-tokens)(REQUIRED) option (cf. also the tt(d_requiredTokens__) data member, section ref(PRIVDATA)). Note that rules using the tt(error) token may have actions, just as any other rules can. The token causing an error is re-analyzed immediately when an error occurs.
If this is unacceptable, then the member function tt(clearin()) may be called to skip this token. The function can be called by any member function of the Parser class. For example, suppose that on a parse error, an error handling routine is called that advances the input stream to some point where parsing should once again commence. The next symbol returned by the lexical scanner is probably correct. The previous token ought to be discarded using tt(clearin()). bisonc++-4.13.01/documentation/manual/error/semantical.yo0000644000175000017500000000503112633316117022221 0ustar frankfrankSemantical error recovery once again requires judgment on the part of the grammar-writer. For example, an assignment expression may be syntactically defined as verb( expr '=' expr ) The left-hand side must be a so-called em(lvalue). An em(lvalue) is simply an addressable location, like a variable's identifier, a dereferenced pointer expression or some other address-expression. The right-hand side is a so-called em(rvalue): this may be any value; any expression will do. A rule like the above leaves room for many different semantical errors: itemization( it() Since the rule states tt(expr) at its left-hand side, em(any) expression is accepted by the parser. E.g., verb( 3 = 12 ) So, the action associated with this rule should em(check) whether the left-hand side is actually an lvalue. If not, a em(semantical) error should be reported; it() In a typed language (like bf(C++)), not all assignments are possible. E.g., it is not acceptable to assign a bf(std::string) value to a bf(double) variable. When conflicting types are used, a em(semantical) error should be reported; it() In a language requiring variables to be defined or declared before they are used (like bf(C++)) the parser should check whether a variable is actually defined or declared when it is used in an expression. If not, a em(semantical) error should be reported. ) A parser that should be able to detect semantic errors normally uses a counter counting the number of semantic errors, e.g., tt(size_t d_nSemanticErrors). It may be possible to test this counter's value once the input has been parsed, calling tt(ABORT()) (see section ref(PRIVMEM)) if the counter is non-zero. When the grammar's start symbol itself has multiple alternatives, it is probably easiest to augment the grammar with an additional rule, which becomes the augmented grammar's start symbol and simply calls the former start symbol. For example, if tt(input) was the name of the original start-symbol, augment the grammar as follows to ensure a bf(PARSE_ABORT) return value of the tt(parse()) member when either syntactic or semantical errors were detected: verb( semantic_input: // new start-symbol input { if (d_nSemanticErrors) // return PARSE_ABORT ABORT(); // on semantic errors too. } ) Once tt(parse()) has returned, the number of syntactic and semantical errors could then be printed, whereupon the program might terminate. bisonc++-4.13.01/documentation/manual/error/intro.yo0000644000175000017500000000214412633316117021236 0ustar frankfrank Usually it is not acceptable to have a program terminate on a parse error. For example, a compiler should recover sufficiently to parse the rest of the input file and check it for errors; a calculator should accept another expression. Such errors violate the grammar for which the parser was constructed and are called em(syntactic errors).
Other types of errors are called em(semantical errors): here the intended em(meaning) of the language is not observed. For example, a division by too small a numeric constant (e.g., 0) may be detected by the parser em(at compile time). In general, what em(can) be detected at compile time should not be left for the run-time to detect, and so the parser should flag an error when it detects a division by a very small numerical constant. B()'s parsers may detect both syntactic em(and) semantical errors. Syntactical errors are detected automatically while the parser performs its parsing job; semantical errors must explicitly be defined when the grammar is constructed. The following sections cover the way B()'s parser may handle syntactic errors and semantical errors, respectively. bisonc++-4.13.01/documentation/manual/grammar/0000755000175000017500000000000012633316117020026 5ustar frankfrankbisonc++-4.13.01/documentation/manual/grammar/polymorphic.yo0000644000175000017500000000233612633316117022750 0ustar frankfrankBic() may define polymorphic semantic values. The approach discussed here is a direct result of a suggestion originally made by Dallas A. Clement in September 2007. All sources of the example discussed in this section can be retrieved from the lurl(poly) directory. One may wonder why a tt(union) is still used by b() as bf(C++) offers inherently superior approaches to combine multiple types in one type. The bf(C++) way to do so is by defining a polymorphic base class and a series of derived classes implementing the various exclusive data types. The tt(union) approach is still supported by b() since it is supported by bf(bison)(1) and bf(bison++); dropping the tt(union) would needlessly impede backward compatibility. The (preferred) alternative to a tt(union), however, is a polymorphic base class. Although it is possible to define your own polymorphic semantic value classes, bic() makes life easy by offering the tt(%polymorphic) directive. The example program (cf. lurl(poly)) implements a polymorphic base class, and derived classes containing either an tt(int) or a tt(std::string) semantic value. These types are associated with tags (resp. tt(INT) and tt(TEXT)) using the tt(%polymorphic) directive, which is discussed next. bisonc++-4.13.01/documentation/manual/grammar/optdelim.yo0000644000175000017500000000314612633316117022220 0ustar frankfrankAn optional series of elements, separated from each other using delimiters, occurs frequently in programming languages. For example, bf(C++) functions have parameter lists which may or may not require arguments. Since a parameter list may be defined empty, an em(empty) alternative is required. However, a simple generalization of the optional series construction (section ref(OPTSERIES)) won't work, since that would imply that the em(first) argument is preceded by a separator, which is clearly not the intention. So, the following construction is em(wrong): verb( opt_parlist: // empty | opt_parlist ',' parameter )
The generic form of these rules could be formulated as follows: verb( opt_series: // empty | series ; series: element | series delimiter element ) Note that the tt(opt_series) rules neatly distinguishes the no-element case from the case were elements are present. Usually these two cases need to be handled quite differently, and the tt(opt_series) rules empty alternative easily allows us to recognize the no-elements case. bisonc++-4.13.01/documentation/manual/grammar/polymorphictype.yo0000644000175000017500000001135012633316117023646 0ustar frankfrank The tt(%type) directive is used to associate (non-)terminals with semantic value types. Non-terminals may be associated with polymorphic semantic values using tt(%type) directives. E.g., after: verb( %polymorphic INT: int; TEXT: std::string %type expr ) the tt(expr) non-terminal returns tt(int) semantic values. In this case, a rule like: verb( expr: expr '+' expr { $$ = $1 + $3; } ) automatically associates $$, $1 and $3 with tt(int) values. Here $$ is an lvalue (representing the semantic value associated with the tt(expr:) rule), while $1 and $3 represent, because of the tt(%type) specification, tt(int) semantic values which are associated with, resp., the first and second tt(expr) non-terminal in the production rule tt(expr '+' expr). When negative dollar indices (like $-1) are used, pre-defined associations between non-terminals and semantic types are ignored. With positive indices or in combination with the production rule's return value tt($$), however, semantic value types can explicitly be specified using the common `$$' or `$1' syntax. (In this and following examples index number 1 represents any valid positive index; -1 represents any valid negative index). The type-overruling syntax does not allow blanks to be used (so $$ is OK, $< INT >$ isn't). Various combinations of type-associations and type specifications may be encountered: itemization( it() $-1: tt(%type) associations are ignored, and the semantic value type tt(STYPE__) is used instead. A warning is issued unless the tt(%negative-dollar-indices) directive was specified. it() $-1: em(error): tt() specifications are not allowed for negative dollar indices. ) whenhtml( center( table(1)(l)( rowline() row(cell(center(includefile(polytable)))) )) includefile(polytablenotes) rowline() ) whenman( bf(%type and $$ or $1 specifications:) includefile(polytable) includefile(polytablenotes) bf(Member calls) (`$$.', `$1.', `($$)', `($1)', etc.):) When using `$$.' or `$1.' default tags are ignored. A warning is issued that the default tag is ignored. This syntax allows members of the semantic value type (tt(STYPE__)) to be called explicitly. The default tag is only ignored if there are no additional characters (e.g., blanks, closing parentheses) between the dollar-expressions and the member selector operator (e.g., no tags are used with $1.member(), but tags are used with tt(($1).member())). In fact, notations like tt(($$), ($1)), etc. are synonym to using tt($$.get(), $1.get()) The opposite, overriding default tag associations, is accomplished using constructions like $$ and $1. When negative dollar indices are used, the appropriate tag must explicitly be specified. The next example shows how this is realized in the grammar specification file itself: verb( %polymorphic INT: int %type ident %% type: ident arg ; arg: { call($-1.get()); } ; ) In this example tt(call) may define an tt(int) or tt(int &) parameter. 
It is also possible to delegate specification of the semantic value to the function tt(call) itself, as shown next: verb( %polymorphic INT: int %type <INT> ident %% type: ident arg ; arg: { call($-1); } ; ) Here, the function tt(call) could be implemented like this: verb( void call(STYPE__ &st) { st.get<Tag__::INT>() = 5; } ) Semantic values may also directly be associated with terminal tokens. In that case it is the lexical scanner's responsibility to assign a properly typed value to the parser's tt(STYPE__ d_val__) data member. When the lexical scanner receives a pointer to the parser's tt(d_val__) data member (using, e.g., a member tt(setSval(STYPE__ *dval))) IFDEF(manual)((cf. section ref(PRIVDATA)))(), then the lexical scanner must use em(tagged assignment) as shown in the above example to reach the different polymorphic types. The lexical scanner, having defined a tt(Parser::STYPE__ *d_val) data member, could then use statements like verb( d_val->get<Tag__::INT>() = stoi(matched()); ) to assign an tt(int) value to the parser's semantic value, which is then immediately available when the lexical scanner's tt(lex) function returns. Note, however, that this also adds intelligence about the meaning of a tt(Parser::INT) token to the scanner. It can be argued that this knowledge belongs to the parser, and that the scanner should merely recognize regular expressions and return tokens and their corresponding matched text. bisonc++-4.13.01/documentation/manual/grammar/scanner.yo0000644000175000017500000000054512633316117022034 0ustar frankfrankThe scanner recognizes input patterns, and returns Parser tokens (e.g., Parser::INT) matching the recognized input. It is easily created by bf(flexc++)(1) processing the following simple specification file. verbinclude(poly/scanner/lexer) The reader may refer to bf(flexc++)(1) documentation for details about bf(flexc++)(1) specification files. bisonc++-4.13.01/documentation/manual/grammar/symbols.yo0000644000175000017500000001055112633316117022071 0ustar frankfrankem(Symbols) in b() grammars represent the grammatical classifications of the language. A em(terminal symbol) (also known as a em(token type)) represents a class of syntactically equivalent tokens. You use the symbol in grammar rules to mean that a token in that class is allowed. The symbol is represented in the B() parser by a numeric code, and the parser's tt(lex()) member function returns a token type code to indicate what kind of token has been read. You don't need to know what the code value is; you can use the symbol to stand for it. A em(nonterminal symbol) stands for a class of syntactically equivalent groupings. The symbol name is used in writing grammar rules. By convention, it should be all lower case. Symbol names can contain letters, digits (not at the beginning), and underscores. B() currently does not support periods in symbol names (Users familiar with Bison may observe that Bison em(does) support periods in symbol names, but the Bison user guide remarks that `Periods make sense only in nonterminals'. Even so, it appears that periods in symbols are hardly ever used). There are two ways to write terminal symbols in the grammar: itemization( it() A em(named token type) is written with an identifier, like an identifier in bf(C++). By convention, it should be all upper case. Each such name must be defined with a b() directive such as tt(%token). See section ref(TOKTYPENAMES).
it() A tt(character token type) (or tt(literal character token)) is written in the grammar using the same syntax used in bf(C++) for character constants; for example, 'tt(+)' is a character token type. A character token type doesn't need to be declared unless you need to specify its semantic value data type (see section ref(SEMANTICTYPES)), associativity, or precedence (see section ref(PRECEDENCE)). ) By convention, a character token type is used only to represent a token that consists of that particular character. Thus, the token type 'tt(+)' is used to represent the character `tt(+)' as a token. Nothing enforces this convention, but if you depart from it, your program will likely confuse other readers. All the usual escape sequences used in character literals in bf(C++) can be used in b() as well, but you must not use the NUL character (character value zero) as a character literal, because zero is the code tt(lex()) must return for end-of-input (see section ref(LEX)). If your program em(must) be able to return 0-byte characters, define a special token (e.g., tt(ZERO_BYTE)) and return that token instead. Note that em(literal string tokens), formally supported in Bison, are em(not) supported by b(). Again, such tokens are hardly ever encountered, and the dominant lexical scanner generators (like bf(flex)(1)) do not support them. Common practice is to define a symbolic name for a literal string token. So, a token like tt(EQ) may be defined in the grammar file, with the lexical scanner returning tt(EQ) when it matches tt(==). How you choose to write a terminal symbol has no effect on its grammatical meaning. That depends only on where it appears in rules and on when the parser function returns that symbol. The value returned by the tt(lex()) member is always one of the terminal symbols (or 0 for end-of-input). Whichever way you write the token type in the grammar rules, you write it the same way in the definition of tt(lex()). The numeric code for a character token type is simply the ASCII code for the character, so tt(lex()) can use the identical character constant to generate the requisite code. Each named token type becomes a bf(C++) enumeration value in the parser base-class header file, so tt(lex()) can use the corresponding enumeration identifiers. When using an externally (to the parser) defined lexical scanner, the lexical scanner should include the parser's base class header file, returning the required enumeration identifiers as defined in the parser class. So, if tt(%token NUM) is defined in the parser class tt(Parser), then the externally defined lexical scanner may return tt(Parser::NUM). The symbol `tt(error)' is a em(terminal) symbol reserved for error recovery (see chapter ref(RECOVERY)). The tt(error) symbol should not be used for any other purpose. In particular, the parser's member function tt(lex()) should never return this value. Several other identifiers should not be used as terminal symbols. See section ref(IMPROPER) for a description. bisonc++-4.13.01/documentation/manual/grammar/optseries.yo0000644000175000017500000000174212633316117022420 0ustar frankfrankAn em(optional) series of elements also uses left-recursion, but the single element alternative remains empty.
For example, in bf(C++) a compound statement may contain statements or declarations, but it may also be empty: verb( opt_statements: // empty | opt_statements statements ) The above rule can be used as a prototype for recognizing an optional series of elements: the generic form of this rule could be formulated as follows: verb( opt_series: // empty | opt_series unit ) Note that the empty element is recognized em(first), even though it is empty, whereafter the left-recursive alternative may be recognized repeatedly. In practice this means that an em(action block) may be defined at the empty alternative, which is then executed prior to the left-recursive alternative. Such an action block could be used to perform initializations necessary for the proper handling of the left-recursive alternative. bisonc++-4.13.01/documentation/manual/grammar/polytablenotes.yo0000644000175000017500000000053612633316117023447 0ustar frankfrankitemization( itx() auto-tag: $$ and $1 represent, respectively, tt($$.get()) and tt($1.get()); itx() tag-error: em(error:) tag undefined; itx() tag-override: if tt(id) is a defined tag, then $<id>$ and $<id>1 represent the tag's type. Otherwise: em(error) (using undefined tag tt(id)). ) bisonc++-4.13.01/documentation/manual/grammar/semantics.yo0000644000175000017500000000226112633316117022366 0ustar frankfrankThe grammar rules for a language determine only the syntax. The semantics are determined by the semantic values associated with various tokens and groupings, and by the actions taken when various groupings are recognized. For example, the calculator calculates properly because the value associated with each expression is the proper number; it adds properly because the action for the grouping `x + y' is to add the numbers associated with x and y. In this section defining the semantics of a language is addressed, covering the following topics: itemization( it() link(Specifying one data type for all semantic values)(SEMANTICTYPES); it() link(Specifying several alternative data types)(MORETYPES); it() link(Using Polymorphism to specify several data types)(POLYMORPHIC); it() link(Specifying Actions)(ACTIONS) (an action is the semantic definition of a grammar rule); it() link(Specifying data types for actions to operate on)(ACTIONTYPES); it() link(Specifying when and how to put actions in the middle of a rule)(MIDACTIONS) (most actions go at the end of a rule. In some situations it may be desirable to put an action in the middle of a rule).
) bisonc++-4.13.01/documentation/manual/grammar/polytable.yo0000644000175000017500000000321012633316117022370 0ustar frankfranktable(5)(cccll)( whenman(rowline()) row(setmanalign(lssss)\ cells(5)($$ or $1 specifications)) rowline() \ tr(%type) ($) (action:) rowline() \ tr(absent) (no ) (STYPE__ is used) columnline(3)(5) tr()($) (tag-override) columnline(3)(5) tr()($<>) (STYPE__ is used) columnline(3)(5) tr()($) (STYPE__ is used) rowline() \ tr(STYPE__) (no ) (STYPE__ is used) columnline(3)(5) tr()($) (tag-override) columnline(3)(5) tr()($<>) (STYPE__ is used) columnline(3)(5) tr()($) (STYPE__ is used) rowline() \ tr((existing) tag) (no ) (auto-tag) columnline(3)(5) tr()($) (tag-override) columnline(3)(5) tr()($<>) (STYPE__ is used) columnline(3)(5) tr()($) (STYPE__ is used) rowline() \ tr((undefined) tag) (no ) (tag-error) columnline(3)(5) tr()($) (tag-override) columnline(3)(5) tr()($<>) (STYPE__ is used) columnline(3)(5) tr()($) (STYPE__ is used) rowline() ) bisonc++-4.13.01/documentation/manual/grammar/multiple.yo0000644000175000017500000000072612633316117022237 0ustar frankfrankMost programs that use b() parse only one language and therefore contain only one b() parser. But what if you want to parse more than one language with the same program? Since b() constructs a em(class) rather than a em(parsing function), this problem can easily be solved: simply define your second (third, fourth, ...) parser class, each having its own unique class-name, using the tt(%class-name) directive, and construct parser objects of each of the defined classes. bisonc++-4.13.01/documentation/manual/grammar/delimseries.yo0000644000175000017500000000174512633316117022711 0ustar frankfrankA series of elements which are separated from each other using some delimiter again normally uses left-recursion. For example, a bf(C++) variable definition list consists of one or more identifiers, separated by commas. If there is only one identifier no comma is used. Here is the rule defining a list using separators: verb( variables: IDENTIFIER | variables ',' IDENTIFIER ) The above rule can be used as a prototype for recognizing a series of delimited elements. The generic form of this rule could be formulated as follows: verb( series: unit | series delimiter unit ) Note that the single element is em(first) recognized, whereafter the left-recursive alternative may be recognized repeatedly. In fact, this rule is not really different from the standard rule for a series, which does not hold true for the rule to recognize an em(optional) series of delimited elements, covered in the next section. bisonc++-4.13.01/documentation/manual/grammar/outline.yo0000644000175000017500000000415112633316117022057 0ustar frankfrankThe input file for the b() utility is a b() grammar file. Different from Bison++ and Bison grammar files, b() grammar files consist of only two sections. The general form of a b() grammar file is as follows: verb( Bisonc++ directives %% Grammar rules ) Readers familiar with Bison may note that there is no em(C declaration section) and no section to define em(Additional C code). With b() these sections are superfluous since, due to the fact that a b() parser is a class, all additional code required for the parser's implementation can be incorporated into the parser class itself. Also, bf(C++) classes normally only require declarations that can be defined in the classes' header files, so also the `additional C code' section could be omitted from the B() grammar file.
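By way of illustration, a minimal grammar file following this two-section layout could look like the following sketch; the class name, token name and rules used here are merely hypothetical examples: verb(
    %class-name Calculator      // bisonc++ directives
    %stype double
    %token NR

    %%                          // grammar rules follow

    expr:
        expr '+' NR
    |
        NR
    ;
)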
The `%%' is a punctuation that appears in every b() grammar file to separate the two sections. The b() directives section is used to declare the names of the terminal and nonterminal symbols, and may also describe operator precedence and the data types of semantic values of various symbols. Furthermore, this section is also used to specify b() directives. These b() directives are used to define, e.g., the name of the generated parser class and a namespace in which the parser class is defined. All b() directives are covered in section ref(DIRECTIVES). The grammar rules define how to construct em(nonterminal symbols) from their parts. The grammar rules section contains one or more b() grammar rules, and nothing else. See section ref(RULES), covering the syntax of grammar rules. There must always be at least one grammar rule, and the first `%%' (which precedes the grammar rules) may never be omitted even if it is the first thing in the file. B()'s grammar file may be split into several files. Each file may be given a suggestive name. This allows quick identification of where a particular section or rule is found, and improves readability of the designed grammar. The link(%include)(INCLUDE)-directive (see section ref(INCLUDE)) can be used to include a partial grammar specification file into another specification file. bisonc++-4.13.01/documentation/manual/grammar/essence/0000755000175000017500000000000012633316117021453 5ustar frankfrankbisonc++-4.13.01/documentation/manual/grammar/essence/demo.cc0000644000175000017500000000405312633316117022710 0ustar frankfrank
#include <iostream>
#include <type_traits>              // is_rvalue_reference, remove_reference
#include <utility>                  // forward
using namespace std;

struct SType
{
    int x = 123;

    template <typename Tp_>
    inline SType &operator=(Tp_ &&rhs);
};

template <bool, typename Tp_>
struct Assign;

template <typename Tp_>
struct Assign<true, Tp_>
{
    static SType &assign(SType *lhs, Tp_ &&tp);
};

template <typename Tp_>
struct Assign<false, Tp_>
{
    static SType &assign(SType *lhs, Tp_ const &tp);
};

template <>
struct Assign<false, SType>
{
    static SType &assign(SType *lhs, SType const &tp);
};

template <typename Tp_>
inline SType &Assign<true, Tp_>::assign(SType *lhs, Tp_ &&tp)
{
    cout << "move assignment of some rhs type\n";
    lhs->x = tp;
    return *lhs;
}

template <typename Tp_>
inline SType &Assign<false, Tp_>::assign(SType *lhs, Tp_ const &tp)
{
    cout << "std assign of some rhs type\n";
    lhs->x = tp;
    return *lhs;
}

inline SType &Assign<false, SType>::assign(SType *lhs, SType const &tp)
{
    cout << "Assignment of Stype to Stype through copy-assignment\n";
    lhs->x = tp.x;
    return *lhs;
}

template <typename Tp_>
inline SType &SType::operator=(Tp_ &&rhs)
{
    return Assign<
            // the is_rvalue_reference is needed to set the first template
            // param. true, resulting in move assignment, using the first
            // overload:
            //      Assign<true, Tp_>::assign(SType *lhs, Tp_ &&tp)
        std::is_rvalue_reference<Tp_ &&>::value,

            // the rm ref. is needed to distinguish
            //      Assign<false, Tp_>::assign(SType *lhs, Tp_ const &tp) and
            //      Assign<false, SType>::assign(SType *lhs, SType const &tp)
        typename std::remove_reference<Tp_>::type
    >::assign(this, std::forward<Tp_>(rhs));
}

// ====================================================================

double fact()
{
    return 3.14;
}

int main()
{
    SType st;
    int x = 10;

    st = st;
    cout << st.x << '\n';

    st = x;
    cout << st.x << '\n';

    st = int(5);
    cout << st.x << '\n';

    st = fact();
    cout << st.x << '\n';
}
bisonc++-4.13.01/documentation/manual/grammar/midrule.yo0000644000175000017500000001471112633316117022044 0ustar frankfrankOccasionally it is useful to put an action in the middle of a rule. These actions are written just like usual end-of-rule actions, but they are executed before the parser recognizes the components that follow them.
A mid-rule action may refer to the components preceding it using tt($n), but it may not (cannot) refer to subsequent components because it is executed before they are parsed. The mid-rule action itself counts as one of the components of the rule. This makes a difference when there is another action later in the same rule (and usually there is another at the end): you have to count the actions along with the symbols when working out which number tt(n) to use in tt($n). The mid-rule action can also have a semantic value. The action can set its value with an assignment to tt($$), and actions later in the rule can refer to the value using tt($n). Since there is no symbol to name the action, there is no way to declare a data type for the value in advance, so you must use the `tt($<...>)' construct to specify a data type each time you refer to this value. There is no way to set the value of the entire rule with a mid-rule action, because assignments to tt($$) do not have that effect. The only way to set the value for the entire rule is with an ordinary action at the end of the rule. Here is an example from a hypothetical compiler, handling a tt(let) statement that looks like ``tt(let (variable) statement)' and serves to create a variable named tt(variable) temporarily for the duration of the statement. To parse this construct, we must put tt(variable) into the symbol table while statement is parsed, then remove it afterward. Here is how it is done: verb( stmt: LET '(' var ')' { $<u_context>$ = pushSymtab(); temporaryVariable($3); } stmt { $$ = $6; popSymtab($<u_context>5); } ) As soon as `tt(let (variable))' has been recognized, the first action is executed. It saves a copy of the current symbol table as its semantic value, using alternative tt(u_context) in the data-type union. Then it uses tt(temporaryVariable()) to add the new variable (using, e.g., a name that cannot normally be used in the parsed language) to the current symbol table. Once the first action is finished, the embedded statement (tt(stmt)) can be parsed. Note that the mid-rule action is component number 5, so `tt(stmt)' is component number 6. Once the embedded statement is parsed, its semantic value becomes the value of the entire tt(let)-statement. Then the semantic value from the earlier action is used to restore the former symbol table. This removes the temporary tt(let)-variable from the list so that it won't appear to exist while the rest of the program is parsed. Taking action before a rule is completely recognized often leads to conflicts since the parser must commit to a parse in order to execute the action. For example, the following two rules, without mid-rule actions, can coexist in a working parser because the parser can shift the open-brace token and look at what follows before deciding whether there is a declaration or not: verb( compound: '{' declarations statements '}' | '{' statements '}' ; ) But when we add a mid-rule action as follows, the rules become nonfunctional: verb( compound: { prepareForLocalVariables(); } '{' declarations statements '}' | '{' statements '}' ; ) Now the parser is forced to decide whether to execute the mid-rule action when it has read no farther than the open-brace. In other words, it must commit to using one rule or the other, without sufficient information to do it correctly. (The open-brace token is what is called the look-ahead token at this time, since the parser is still deciding what to do about it. See section ref(LOOKAHEAD).)
You might think that the problem can be solved by putting identical actions into the two rules, like this: verb( compound: { prepareForLocalVariables(); } '{' declarations statements '}' | { prepareForLocalVariables(); } '{' statements '}' ; ) But this does not help, because b() em(never) parses the contents of actions, and so it does em(not) realize that the two actions are identical. If the grammar is such that a declaration can be distinguished from a statement by the first token (which is true in bf(C), but em(not) in bf(C++), which allows statements and declarations to be mixed), then one solution is to put the action after the open-brace, like this: verb( compound: '{' { prepareForLocalVariables(); } declarations statements '}' | '{' statements '}' ; ) Now the next token following a recognized tt('{') token would be either the first tt(declarations) token or the first tt(statements) token, which would in any case tell b() which rule to use, thus solving the problem. Another (much used) solution is to bury the action inside a support non-terminal symbol which recognizes the first block-open brace and performs the required preparations: verb( openblock: '{' { prepareForLocalVariables(); } ; compound: openblock declarations statements '}' | openblock statements '}' ; ) Now b() can execute the action in the rule for tt(openblock) without deciding which rule for compound it eventually uses. Note that the action is now at the end of its rule. Any mid-rule action can be converted to an end-of-rule action in this way, and this is what b() actually does to implement mid-rule actions. By the way, note that in a language like bf(C++) the above construction is obsolete anyway, since bf(C++) allows mid-block variable- and object declarations. In bf(C++) a compound statement could be defined, e.g., as follows: verb( stmnt_or_decl: declarations | pure_stmnt // among which: compound_stmnt ; statements: // empty | statements stmnt_or_decl ; compound_stmnt: open_block statements '}' ; ) Here, the tt(compound_stmnt) would begin with the necessary preparations for local declarations, which would then have been completed by the time they would really be needed by tt(declarations). bisonc++-4.13.01/documentation/manual/grammar/recursive.yo0000644000175000017500000000316412633316117022412 0ustar frankfrankA rule is called em(recursive) when its em(result) nonterminal appears also on its right hand side. Nearly all b() grammars need to use recursion, because that is the only way to define a sequence of any number of somethings. Consider this recursive definition of a comma-separated sequence of one or more expressions: verb( expseq1: expseq1 ',' exp | exp ; ) Since the recursive use of expseq1 is the leftmost symbol in the right hand side, we call this em(left recursion). By contrast, here the same construct is defined using em(right recursion): verb( expseq1: exp ',' expseq1 | exp ; ) Any kind of sequence can be defined using either left recursion or right recursion, but you should always use left recursion, because it can parse a sequence of any number of elements with bounded stack space. Right recursion uses up space on the b() stack in proportion to the number of elements in the sequence, because all the elements must be shifted onto the stack before the rule can be applied even once. See chapter ref(ALGORITHM) for further explanation of this.
em(Indirect) or em(mutual) recursion occurs when the result of the rule does not appear directly on its right hand side, but does appear in rules for other nonterminals which do appear on its right hand side. For example: verb( expr: primary '+' primary | primary ; primary: constant | '(' expr ')' ; ) defines two mutually-recursive nonterminals, since each refers to the other. bisonc++-4.13.01/documentation/manual/grammar/polymorphicdirective.yo0000644000175000017500000000310412633316117024641 0ustar frankfrank When encountering the tt(%polymorphic) directive bic() generates a parser that uses polymorphic semantic values. Each semantic value specification consists of a em(tag), which is a bf(C++) identifier, and a bf(C++) type definition. Tags and type definitions are separated by colons, and multiple semantic values specifications are separated by semicolons. The semicolon trailing the final semantic value specification is optional. A grammar specification file may contain only one tt(%polymorphic) directive, and the tt(%polymorphic, %stype) and tt(%union) directives are mutually exclusive. Here is an example, defining three semantic values types: an tt(int), a tt(std::string) and a tt(std::vector): verb( %polymorphic INT: int; STRING: std::string; VECT: std::vector ) The identifier to the left of the colon is called the em(type-identifier), and the type definition to the right of the colon is called the em(type-definition). Types specified at the tt(%polymorphic) type-definitions must be built-in types or class-type declarations. Since bic() version 4.12.00 the types no longer have to offer offer default constructors, but if no default constructor is available then the option tt(--no-default-action-return) is required. When polymorphic type-names refer to types not yet declared by the parser's base class header, then these types must be declared in a header file whose location is specified through the tt(%baseclass-preinclude) directive as these types are referred to in the generated tt(parserbase.h) header file. bisonc++-4.13.01/documentation/manual/grammar/syntax.yo0000644000175000017500000000463712633316117021737 0ustar frankfrankA b() grammar rule has the following general form: verb( result: components ... ; ) where em(result) is the nonterminal symbol that this rule describes and em(components) are various terminal and nonterminal symbols that are put together by this rule (see section ref(SYMBOLS)). With respect to the way rules are defined, note the following: itemization( it() The construction: verb( exp: exp '+' exp ; ) means that two groupings of type tt(exp), with a `+' token in between, can be combined into a larger grouping of type tt(exp). it() Whitespace in rules is significant only to separate symbols. You can add extra whitespace as you wish. it() Scattered among the components can be em(actions) that determine the semantics of the rule. An action looks like this: verb( { C++ statements } ) Usually there is only one action and it follows the components. See section ref(ACTIONS). it() Multiple rules for the same result can be written separately or can be joined with the vertical-bar character `|' as follows: verb( result: rule1-components ... | rule2-components... ... ; ) They are still considered distinct rules even when joined in this way. it() Alternatively, multiple rules of the same nonterminal can be defined. E.g., the previous definition of tt(result:) could also have been defined as: verb( result: rule1-components ... ; result: rule2-components... ... 
; ) However, this is a potentially dangerous practice, since one of the two tt(result) rules could also have used a misspelled rule-name (e.g., the second tt(result)) should have been tt(results). Therefore, b() generates a warning if the same nonterminal is used repeatedly when defining production rules. it() If em(components) in a rule is em(empty), it means that em(result) can match the empty string. Such a alternative is called an em(empty production rule). For example, here is how to define a comma-separated sequence of zero or more tt(exp) groupings: verb( expseq: expseq1 | // empty ; expseq1: expseq1 ',' exp | exp ; ) Convention calls for a comment `tt(// empty)' in each empty production rule. ) bisonc++-4.13.01/documentation/manual/grammar/intro.yo0000644000175000017500000000130712633316117021533 0ustar frankfrankB() takes as input a context-free grammar specification and produces a bf(C++) class offering various predefined members, among which the member tt(parse()), that recognizes correct instances of the grammar. In this chapter the organization and specification of such a grammar file is discussed in detail. Having read this chapter you should be able to define a grammar for which B() can generate a class, containing a member that will recognize correctly formulated (in terms of your grammar) input, using all the features and facilities offered by b() to specify a grammar. In principle this grammar will be in the class of bf(LALR(1)) grammars (see, e.g., em(Aho, Sethi & Ullman), 2003 (Addison-Wesley)). bisonc++-4.13.01/documentation/manual/grammar/datatypes.yo0000644000175000017500000000224712633316117022402 0ustar frankfrankIn a simple program it may be sufficient to use the same data type for the semantic values of all language constructs. This was true in the tt(rpn) and tt(infix) calculator examples (see, e.g., sections ref(RPN) and ref(CALC)). B()'s default is to use type tt(int) for all semantic values. To specify some other type, the directive tt(%stype) must be used, like this: verb( %stype double ) Any text following tt(%stype) up to the end of the line, up to the first of a series of trailing blanks or tabs or up to a comment-token (tt(//) or tt(/*)) becomes part of the type definition. Be sure em(not) to end a tt(%stype) definition in a semicolon. This directive must go in the directives section of the grammar file (see section ref(OUTLINE)). As a result of this, the parser class defines a em(private type) tt(STYPE__) as tt(double): Semantic values of language constructs always have the type tt(STYPE__), and (assuming the parser class is named tt(Parser)) an internally used data member tt(d_val) that could be used by the lexical scanner to associate a semantic value with a returned token is defined as: verb( Parser::STYPE__ d_val; ) bisonc++-4.13.01/documentation/manual/grammar/gramcons.yo0000644000175000017500000000067312633316117022216 0ustar frankfrankIn the following sections several basic grammatical constructions are presented in their prototypical and generic forms. When these basic constructions are used to construct a grammar, the resulting grammar is usually accepted by b(). Moreover, these basic constructions are frequently encountered in programming languages. When designing your own grammar, try to stick as closely as possible to the following basic grammatical constructions. 
bisonc++-4.13.01/documentation/manual/grammar/parser.yo0000644000175000017500000000566512633316117021707 0ustar frankfrankIn this section a parser is developed using polymorphic semantic values. Its tt(%polymorphic) directive looks like this: verb( %polymorphic INT: int; TEXT: std::string; ) Furthermore, the grammar declares tokens tt(INT) and tt(IDENTIFIER), and pre-associates the tt(TEXT) tag with the tt(identifier) non-terminal, associates the tt(INT) tag with the tt(int) non-terminal, and associates tt(STYPE__), the generic polymorphic value with the non=terminal tt(combi): verb( %type identifier %type int %type combi ) For this example a simple grammar was developed, expecting an optional number of input lines, formatted according to the following tt(rule) production rules: verb( rule: identifier '(' identifier ')' '\n' | identifier '=' int '\n' | combi '\n' ; ) The rules for tt(identifier) and tt(int) assign, respectively, text and an tt(int) value to the parser's semantic value stack: verb( identifier: IDENTIFIER { $$ = d_scanner.matched(); } ; int: INT { $$ = d_scanner.intValue(); } ; ) These simple assignments can be used as tt(int) is pre-associated with the tt(INT) tag and tt(identifier) is asociated with the tt(TEXT) tag. As the tt(combi) rule is not associated with a specific semantic value, its semantic value could be either tt(INT) or tt(TEXT). Irrespective of what is actually returned by tt(combi), its semantic value can be passed on to a function (tt(process(STYPE__ const &))), responsible for the semantic value's further processing. Here are the definition of the tt(combi) non-terminal and action blocks for the tt(rule) non-terminal: verb( combi: int | identifier ; rule: identifier '(' identifier ')' '\n' { cout << $1 << " " << $3 << '\n'; } | identifier '=' int '\n' { cout << $1 << " " << $3 << '\n'; } | combi '\n' { process($1); } ; ) Since tt(identifier) has been associated with tt(TEXT) and tt(int) with tt(INT), the $-references to these elements in the production rules already return, respectively, a tt(std::string const &) and an tt(int). For tt(combi) the situation is slightly more complex, as tt(combi) could either return an tt(int) (via its tt(int) production rule) or a tt(std::string const &) (via its tt(identifier) production rule). Fortunately, tt(process) can find out by inspecting the semantic value's tt(Tag__): verb( void Parser::process(STYPE__ &semVal) const { if (semVal.tag() == Tag__::INT) cout << "Saw an int-value: " << semVal.get() << '\n'; else cout << "Saw text: " << semVal.get() << '\n'; } ) bisonc++-4.13.01/documentation/manual/grammar/code.yo0000644000175000017500000002234012633316117021312 0ustar frankfrankThe parser using polymorphic semantic values adds several classes to the generated files. The majority of these are class templates, defined in tt(parserbase.h); some of the additionally implemented code is added to the tt(parse.cc) source file. To minimize namespace pollution most of the additional code is contained in a namespace of its own: tt(Meta__). If the tt(%namespace) directive was used then tt(Meta__) is nested under the namespace declared by that directive. The name tt(Meta__) provides a hint to the fact that much of the code implementing polymorphic semantic values uses template meta programming. bf(The enumeration 'enum class Tag__') One notable exception to the above is the enumeration tt(Tag__). To simplify its use it is declared outside of tt(Meta__) (but inside the tt(%namespace) namespace, if provided). 
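As an illustration: for a grammar declaring, e.g., tt(%polymorphic INT: int; TEXT: std::string;), the generated enumeration is, apart from layout details, roughly the following sketch: verb( enum class Tag__ // one enumerator per %polymorphic tag { INT, TEXT }; )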
Its identifiers are the tags declared by the tt(%polymorphic) directive. This is a strongly typed enumeration. The tt(%weak-tags) directive can be used to declare a pre C++-11 standard `tt(enum Tag__)'. bf(The namespace Meta__) Below, tt(DataType) refers to the semantic value's data type that is associated with a tt(Tag__) identifier. Furthermore, tt(ReturnType) equals tt(DataType) if tt(DataType) is a built-in type (like tt(int, double,) etc.), in other cases (for, e.g., class-type data types) it is equal to tt(DataType const &) . The important elements of the namespace tt(Meta__) are: itemization( it() First, the polymorphic semantic value's base class tt(Base).nl() Its public interface offers the following members:nl() itemization( itt(Tag__ tag() const:) returns the semantic value type's tag. itt(ReturnType get() const:) accesses the (non-modifiable) data element of the type matching the tag. the data element of the type matching the tag (also see below at the description of the class tt(SType)). itt(DataType &get() const:) provides access to the (modifiable) data element of the type matching the tag. ) it() Second, the semantic value classes tt(Semantic: public Base).nl() The various tt(Semantic) classes are derived for each of the tag identifiers tt(ID) that are declared at the tt(%polymorphic) directive. These tt(Semantic) classes contain a tt(mutable DataType) data member. Their public interfaces offer the following members: itemization( it() Constructors accepting the same sets of arguments as supported by its tt(DataType) data member. The arguments passed to the tt(Semantic) constructor are perfectly forwarded to the tt(DataType) member. E.g., if tt(DataType) supports a default constructor, a copy constructor and/or a move constructor then the matching tt(Semantic) class also offers a default constructor, a constructor expecting a tt(DataType) object, and a constructor expecting an anonymous tt(DataType) object. it() An tt(operator ReturnType() const) conversion operator; it() An tt(operator DataType &()t) conversion operator. ) tt(Semantic) objects are usually not explicitly used. Rather, their use is implied by the actual semantic value class tt(SType) and by several support functions (see below). it() Third, the semantic value class tt(SType: public std::shared_ptr) provides access to the various semantic value types. The tt(SType__) class becomes the parser's tt(STYPE__) type, and explicitly accessing tt(Semantic) should never be necessary.nl() tt(SType)'s public interface offers the following members: itemization( it() Constructors: default (available if the tt(Semantic) type class offers a default constructor), copy and move constructors.nl() Since the parser's semantic value and semantic value stack is completely controlled by the parser, and since the actual semantic data values are unknown at construction time of the semantic value (tt(d_val__)) and of the semantic value stack, no constructors expecting tt(DataType) values are provided. it() Assignment operators.nl() The standard overloaded assignment operator (copy and move variants) as well as copy and move assignment operators for the types declared at the tt(%polymorphic) directive are provided. Assigning a value using tt(operator=) allocates a tt(Semantic) object for the tag matching the right-hand side's data type, and resets the tt(SType)'s tt(shared_pointer) to this new tt(Semantic) object.nl() Be aware that this may break the default association of the semantic value as declared by a tt(%type) directive. 
When breaking the default association make sure that explicit tags are used (as in tt($)), overriding the default association with the currently active association. Usually, however, the assignment is of course not used to break the default association but simply to assign a value to $$. By default the tt(SType)'s shared pointer is zero, and the assignment initializes the semantic value to a value of the proper type.nl() Assuming a lexical scanner may return a tt(NR) token, offering an tt(int number() const) accessor, then part of an tt(expr) rule could be: verb( expr: NR { $$ = d_scanner.number(); } ... ) whereafter tt(expr)'s semantic value has been initialized to a tt(Semantic). it() tt(DataType &get())nl() This tt(get) member returns a reference to the (modifiable) semantic value stored inside tt(Semantic).nl() This member checks for 0-pointers and for tt(Tag__) mismatches between the requested and actual tt(Tag__), in that case replacing the current tt(Semantic) object pointed to by a new tt(Semantic) object of the type associated with the requested tt(Tag__). However, if that type does not provide a default constructor then a tt(runtime_error) exception is thrown holding the description verb( STYPE::get: no default constructor available ) it() tt(ReturnType data() const)nl() Here, tt(ReturnType) refers to the semantic value stored inside tt(Semantic). If the type-name is a built-in type a copy of the value is returned, otherwise a reference to a constant object is returned;nl() This is a (partially) em(unchecking) variant of the corresponing tt(get) member, resulting in a em(segfault) if used when the tt(shared_ptr) holds a 0-pointer, and throwing a tt(std::bad_cast) in case of a mismatch between the requested and actual tt(Tag__). it() tt(DataType &data())nl() This member returns a reference to the (modifiable) semantic value stored inside tt(Semantic).nl() This is a (partially) em(unchecking) variant of the corresponing tt(get) member, resulting in a em(segfault) if used when the tt(shared_ptr) holds a 0-pointer, and throwing a tt(std::bad_cast) in case of a mismatch between the requested and actual tt(Tag__). it() tt(void emplace(Args &&...args))nl() This member perfectly forwars its tt(args) arguments to the currently stored tt(Semantic) value type, replacing its current value by the newly constructed tt(Semantic) value. ) These members may explicitly be tagged, using constructions like verb( SS.emplace(5, 'c'); ) But the shorthand tt(($)) can also be used, which automatically provides the correct tag: verb( (SS).emplace(5, 'c'); ) ) When an incorrect tag is specified (e.g., with tt(get(), $$), or tt($1)), the generated code correctly compiles, but the program likely throws a tt(std::bad_cast) exception once the offending code is executed. bf(Additional Headers) When using tt(%polymorphic) three additional header files are included by tt(parserbase.h): itemization( itt(memory,) required for tt(std::shared_ptr); itt(stdexcept,) required for tt(std::logic_error); itt(type_traits,) required for the implementation of one of tt(SType)'s overloaded assignment operators. ) bisonc++-4.13.01/documentation/manual/grammar/nested.yo0000644000175000017500000000314512633316117021664 0ustar frankfrankFinally, we add the em(nested) rule to our bag of rule-tricks. Again, nested rules appear frequently: parenthesized expressions and compound statements are two very well known examples. 
These kind of rules are characterized by the fact that the nested variant is itself an example of the element appearing in the nested variant. The definition of a statement is actually a bit more complex than the definition of an expression, since the statement appearing in the compound statement is in fact an optional series of elements. Let's first have a look at the nested expression rule. Here it is, in a basic form: verb( expr: NUMBER | ID | expr '+' expr | ... | '(' expr ')' ; ) This definition is simply characterized that the non-terminal tt(expr) appears within a set of parentheses, which is not too complex. The definition of tt(opt_statements), however, is a bit more complex. But acknowledging the fact that a tt(statement) contains among other elements a compound statement, and that a compound statement, in turn, contains tt(opt_statements) an tt(opt_statements) construction can be formulated accordingly: verb( opt_statements: // define an optional series // empty | opt_statements statement ; statement: // define alternatives for `statement' expr_statement | if_statement | ... | compound_statement ; compound_statement: // define the compound statement itself '{' opt_statements '}' ; ) bisonc++-4.13.01/documentation/manual/grammar/actions.yo0000644000175000017500000000762612633316117022052 0ustar frankfrankAn action accompanies a syntactic rule and contains bf(C++) code to be executed each time an instance of that rule is recognized. The task of most actions is to compute a semantic value for the grouping built by the rule from the semantic values associated with tokens or smaller groupings. An action consists of bf(C++) statements surrounded by braces, much like a compound statement in bf(C++). It can be placed at any position in the rule; it is executed at that position. Most rules have just one action at the end of the rule, following all the components. Actions in the middle of a rule are tricky and should be used only for special purposes (see section ref(MIDACTIONS)). The bf(C++) code in an action can refer to the semantic values of the components matched by the rule with the construct tt($n), which stands for the value of the nth component. The semantic value for the grouping being constructed is tt($$). (B() translates both of these constructs into array element references when it copies the actions into the parser file.) Here is a typical example: verb( exp: ... | exp '+' exp { $$ = $1 + $3; } | ... ) This rule constructs an tt(exp) from two smaller exp groupings connected by a plus-sign token. In the action, tt($1) and tt($3) refer to the semantic values of the two component exp groupings, which are the first and third symbols on the right hand side of the rule. The sum is stored into tt($$) so that it becomes the semantic value of the addition-expression just recognized by the rule. If there were a useful semantic value associated with the `+' token, it could be referred to as tt($2). If you don't specify an action for a rule, b() supplies a default: tt($$ = $1). Thus, the value of the first symbol in the rule becomes the value of the whole rule. Of course, the default rule is valid only if the two data types match. There is no meaningful default action for an empty rule; every empty rule must have an explicit action unless the rule's value does not matter. Note that the default tt($$) value is assigned at the em(beginning) of an action block. Any changes to tt($1) are therefore em(not) automatically propagated to tt($$). 
E.g., assuming that tt($1 == 3) at the beginning of the following action block, then tt($$) will still be equal to 3 after executing the statement in the action block: verb( { // assume: $1 == 3 $1 += 12; // $1 now is 15, $$ remains 3 } ) If tt($$) should receive the value of the modified tt($1), then tt($1) must explicitly be assigned to tt($$). E.g., verb( { // assume: $1 == 3 $1 += 12; // $1 now is 15, $$ remains 3 $$ = $1; // now $$ == 15 as well. } ) Using tt($n) with n equal to zero or a negative value is allowed for reference to tokens and groupings on the stack before those that match the current rule. This is a very em(risky) practice, and to use it reliably you must be certain of the context in which the rule is applied. Here is a case in which you can use this reliably: verb( foo: expr bar '+' expr { ... } | expr bar '-' expr { ... } ; bar: // empty | { previous_expr = $0; } ; ) As long as tt(bar) is used em(only) in the fashion shown here, tt($0) always refers to the tt(expr) which precedes bar in the definition of tt(foo). But as mentioned: it's a risky practice, which should be avoided if at all possible. See also section ref(SPECIAL). All tt($)-type variables used in action blocks can be modified. All numbered tt($)-variables are deleted when a production rule has been recognized. Unless an action explicitly assigns a value to tt($$), the (possibly modified) tt($1) value is assigned to tt($$) when a production rule has been recognized. bisonc++-4.13.01/documentation/manual/grammar/series.yo0000644000175000017500000000211012633316117021673 0ustar frankfrankA series of elements normally uses left-recursion. For example, bf(C++) supports em(string concatenation): series of double quote delimited tt(ASCII) characters define a string, and multiple white-space delimited strings are handled as one single string: verb( "hello" // multiple ws-delimited strings " " "world" "hello world" // same thing ) Usually a parser is responsible for concatenating the individual string-parts, receiving one or more tt(STRING) tokens from the lexical scanner. A tt(string) rule handles one or more incoming tt(STRING) tokens: verb( string: STRING | string STRING ) The above rule can be used as a prototype for recognizing a series of elements. The token tt(STRING) may of course be embedded in another rule. The generic form of this rule could be formulated as follows: verb( series: unit | series unit ) Note that the single element is em(first) recognized, whereafter the left-recursive alternative may be recognized repeatedly. bisonc++-4.13.01/documentation/manual/grammar/union.yo0000644000175000017500000000254012633316117021530 0ustar frankfrankIn many programs, different kinds of data types are used in combination with different kinds of terminal and non-terminal tokens. For example, a numeric constant may need type tt(int) or tt(double), while a string needs type tt(std::string), and an identifier might need a pointer to an entry in a symbol table.
To use more than one data type for semantic values in one parser, b() offers the following feature: itemization( it() Define polymorphic semantic values, associating (non)terminals with their proper semantic types (cf section ref(POLYMORPHIC)), and associate (non-)terminal tokens with their appropriate semantic values; it() Specify the entire collection of possible data types, using a tt(%union) directive (see section ref(UNION)), and associate (non-)terminal tokens with their appropriate semantic values; it() Define your own class handling the various semantic values, and associate that class with the parser's semantic value type using the tt(%stype) directive. The association of (non-)terminal tokens and specific value types is handled by your own class. ) The first approach (and to a lesser extent, the second approach) has the advantage that b() is able to enforce the correct association between semantic types and rules and/or tokens, and that b() is able to check the type-correctness of assignments to rule results. bisonc++-4.13.01/documentation/manual/grammar/poly/0000755000175000017500000000000012633316117021011 5ustar frankfrankbisonc++-4.13.01/documentation/manual/grammar/poly/main.cc0000644000175000017500000000011212633316117022236 0ustar frankfrank#include "main.ih" int main() { Parser parser; parser.parse(); } bisonc++-4.13.01/documentation/manual/grammar/poly/scanner/0000755000175000017500000000000012633316117022442 5ustar frankfrankbisonc++-4.13.01/documentation/manual/grammar/poly/scanner/lexer0000644000175000017500000000034512633316117023506 0ustar frankfrank%interactive %filenames scanner %% [ \t]+ // skip white space [0-9]+ return Parser::INT; [a-zA-Z_][a-zA-Z0-9_]* return Parser::IDENTIFIER; .|\n return matched()[0]; bisonc++-4.13.01/documentation/manual/grammar/poly/scanner/scanner.ih0000644000175000017500000000012012633316117024406 0ustar frankfrank#include "scanner.h" #include "../parser/parserbase.h" // end of scanner.hh bisonc++-4.13.01/documentation/manual/grammar/poly/scanner/scanner.h0000644000175000017500000000203512633316117024244 0ustar frankfrank// Generated by Flexc++ V0.95.00 on Tue, 06 Mar 2012 22:37:41 +0100 #ifndef Scanner_H_INCLUDED_ #define Scanner_H_INCLUDED_ // $insert baseclass_h #include "scannerbase.h" // $insert classHead class Scanner: public ScannerBase { public: explicit Scanner(std::istream &in = std::cin, std::ostream &out = std::cout); // $insert lexFunctionDecl int lex(); private: int lex__(); int executeAction__(size_t ruleNr); void print(); void preCode(); // re-implement this function for code that must // be exec'ed before the patternmatching starts void postCode(PostEnum__); }; // $insert scannerConstructors inline Scanner::Scanner(std::istream &in, std::ostream &out) : ScannerBase(in, out) {} inline void Scanner::preCode() { // optionally replace by your own code } inline void Scanner::postCode(PostEnum__) {} inline void Scanner::print() { print__(); } #endif // Scanner_H_INCLUDED_ bisonc++-4.13.01/documentation/manual/grammar/poly/build0000755000175000017500000000030112633316117022030 0ustar frankfrank#!/bin/bash mkdir -p tmp/bin cd parser bisonc++ grammar cd ../scanner flexc++ lexer cd ../tmp g++ --std=c++14 -Wall -O2 -o bin/binary \ ../parser/*.cc ../scanner/*.cc ../*.cc -lbobcat bisonc++-4.13.01/documentation/manual/grammar/poly/parser/0000755000175000017500000000000012633316117022305 5ustar frankfrankbisonc++-4.13.01/documentation/manual/grammar/poly/parser/intvalue.cc0000644000175000017500000000027112633316117024443 0ustar 
frankfrank#include "parser.ih" int Parser::intValue() const { istringstream in(d_scanner.matched()); int ret; in >> ret; // succeeds, as lex() just returned 'NR' return ret; } bisonc++-4.13.01/documentation/manual/grammar/poly/parser/parser.h0000644000175000017500000000175112633316117023756 0ustar frankfrank// Generated by Bisonc++ V4.10.01 on Fri, 28 Aug 2015 16:00:46 +0200 #ifndef Parser_h_included #define Parser_h_included // $insert baseclass #include "parserbase.h" // $insert scanner.h #include "../scanner/scanner.h" #undef Parser class Parser: public ParserBase { // $insert scannerobject Scanner d_scanner; public: int parse(); private: int intValue() const; void process(STYPE__ const &semVal) const; void error(char const *msg); // called on (syntax) errors int lex(); // returns the next token from the // lexical scanner. void print(); // use, e.g., d_token, d_loc // support functions for parse(): void executeAction(int ruleNr); void errorRecovery(); int lookup(bool recovery); void nextToken(); void print__(); void exceptionHandler__(std::exception const &exc); }; #endif bisonc++-4.13.01/documentation/manual/grammar/poly/parser/process.cc0000644000175000017500000000040612633316117024272 0ustar frankfrank#include "parser.ih" void Parser::process(STYPE__ const &semVal) const { if (semVal.tag() == Tag__::INT) cout << "Saw an int-value: " << semVal.get() << '\n'; else cout << "Saw text: " << semVal.get() << '\n'; } bisonc++-4.13.01/documentation/manual/grammar/poly/parser/grammar0000644000175000017500000000132212633316117023654 0ustar frankfrank%filenames parser %scanner ../scanner/scanner.h %polymorphic INT: int; TEXT: std::string; %token INT IDENTIFIER %type identifier %type int %type combi %% start: prompt rules ; prompt: { cout << "? "; } ; identifier: IDENTIFIER { $$ = d_scanner.matched(); } ; int: INT { $$ = intValue(); } ; combi: int | identifier ; rule: identifier '(' identifier ')' '\n' { cout << $1 << " " << $3 << '\n'; } | identifier '=' int '\n' { cout << $1 << " " << $3 << '\n'; } | combi '\n' ; rulePrompt: rule prompt ; rules: rules rulePrompt | rulePrompt ; bisonc++-4.13.01/documentation/manual/grammar/poly/parser/parser.ih0000644000175000017500000000117312633316117024125 0ustar frankfrank// Generated by Bisonc++ V4.10.01 on Fri, 28 Aug 2015 16:00:46 +0200 // Include this file in the sources of the class Parser. // $insert class.h #include "parser.h" inline void Parser::error(char const *msg) { std::cerr << msg << '\n'; } // $insert lex inline int Parser::lex() { return d_scanner.lex(); } inline void Parser::print() { print__(); // displays tokens if --print was specified } inline void Parser::exceptionHandler__(std::exception const &exc) { throw; // re-implement to handle exceptions thrown by actions } #include using namespace std; bisonc++-4.13.01/documentation/manual/grammar/poly/README0000644000175000017500000000111312633316117021665 0ustar frankfrankBisonc++ and flexc++ are required to generate the parser and the lexical scanner. If you have icmake installed, simply run 'icmbuild' to create the program Otherwise the ./build command should work. ./tmp/bin/binary starts an interactive program, each line should either contain ident ( ident ) (blanks are optional, ident is any C++ identifier) or ident = int (int: any non-negative int-value) To end the program: ^D or ^C. Any other input also ends the program (with a syntax error message) See also the explanation of the implementation of operator= in ../essence. 
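As an illustration (assuming the program was built as described above): after the '? ' prompt, typing a line like 'counter = 12' makes the program echo 'counter 12', and a line like 'fun(arg)' makes it echo 'fun arg', each time followed by a new '? ' prompt (see the action blocks in parser/grammar; the exact output depends on those actions).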
bisonc++-4.13.01/documentation/manual/grammar/poly/main.ih0000644000175000017500000000003312633316117022253 0ustar frankfrank#include "parser/parser.h" bisonc++-4.13.01/documentation/manual/grammar/poly/icmconf0000644000175000017500000000677312633316117022367 0ustar frankfrank // Inspect the following #defines. Change them to taste. If you don't // need a particular option, change its value into an empty string // should commands be echoed (ON) or not (OFF) ? #define USE_ECHO ON // The final program and source containing main(): // =============================================== // define the name of the program to create: #define BINARY "../../poly" // define the name of the source containing main(): #define MAIN "main.cc" // #defines used for compilation and linking: // ========================================== // define the compiler to use: #define COMPILER "g++ --std=c++14 -fdiagnostics-color=never" // define the compiler options to use: #define COMPILER_OPTIONS "-Wall -O2" // define the pattern to locate sources in a directory: #define SOURCES "*.cc" // define the options used for linking: #define LINKER_OPTIONS "-s" // define any additional libraries BINARY may need: #define ADD_LIBRARIES "bobcat" // define any additional paths (other than the standard paths) the // additional libraries are located in: #define ADD_LIBRARY_PATHS "" // #defines used for the final product: // ==================================== #define BIN_INSTALL "/usr/local/bin" // Some advanced #defines, used to create parsers and lexical scanners // =================================================================== // Lexical Scanner section // ======================= // Should a lexical scanner be constructed? If so, define the subdirectory // containing the scanner's specification file. #define SCANNER_DIR "scanner" // What is the program generating the lexical scanner? #define SCANGEN "flexc++" // Flags to provide SCANGEN with: #define SCANFLAGS "" // Name of the lexical scanner specification file #define SCANSPEC "lexer" // Name of the file generated by the lexical scanner #define SCANOUT "lex.cc" // Parser section // ============== // Should a parser be constructed? If so, define the subdirectory // containing the parser's specification file #define PARSER_DIR "parser" // If a parser must be constructed, should the script (provided in the // skeleton file parser/gramspec/grambuild) `parser/gramspec/grambuild' // **NOT** be called? If it must NOT be called, comment out the following // #define directive: // #define GRAMBUILD // What it the program generating a parser? #define PARSGEN "bisonc++" // What it the grammar specificication file? 
#define PARSSPEC "grammar" // Flags to provide PARSGEN with: #define PARSFLAGS "-V" // Name of the file generated by the parser generator containing the // parser function #define PARSOUT "parse.cc" // Additional defines, which should normally not be modified // ========================================================= // Directory below this directory to contain temporary results #define TMP_DIR "tmp" // Local program library to use (change to an empty string if you want to // use the object modules themselves, rather than a library) #define LIBRARY "modules" // The extension of object modules: #define OBJ_EXT ".o" // below #define DEFCOM "program" or "library" may be added by icmstart #define DEFCOM "program" bisonc++-4.13.01/documentation/manual/grammar/actiontypes.yo0000644000175000017500000000234512633316117022745 0ustar frankfrankIf you have chosen a single data type for semantic values, the tt($$) and tt($n) constructs always have that data type. If you have used a tt(%union) directive to specify a variety of data types, then you must declare a choice among these types for each terminal or nonterminal symbol that can have a semantic value. Then each time you use tt($$) or tt($n), its data type is determined by which symbol it refers to in the rule. In this example, verb( exp: ... | exp '+' exp { $$ = $1 + $3; } ) tt($1) and tt($3) refer to instances of exp, so they all have the data type declared for the nonterminal symbol exp. If tt($2) were used, it would have the data type declared for the terminal symbol 'tt(+)', whatever that might be. Alternatively, you can specify the data type when you refer to the value, by inserting `tt()' after the `tt($)' at the beginning of the reference. For example, if you have defined types as shown here: verb( %union { int u_int; double u_double; }; ) then you can write tt($1) to refer to the first subunit of the rule as an integer, or tt($1) to refer to it as a double. bisonc++-4.13.01/documentation/manual/grammar/alternatives.yo0000644000175000017500000000161312633316117023101 0ustar frankfrankSimple alternatives can be specified using the vertical bar (tt(|)). Each alternative should begin with a unique identifying terminal token. The terminal token may actually be hidden in a non-terminal rule, in which case that nonterminal can be used as an alias for the non-terminal. In fact identical terminal tokens may be used if at some point the terminal tokens differ over different alternatives. Here are some examples: verb( // Example 1: plain terminal distinguishing tokens expr: ID | NUMBER ; // Example 2: nested terminal distinguishing tokens expr: id | number ; id: ID ; number: NUMBER ; // Example 3: eventually diverting routes expr: ID id | ID number ; id: ID ; number: NUMBER ; ) bisonc++-4.13.01/documentation/manual/examples/0000755000175000017500000000000012633316117020216 5ustar frankfrankbisonc++-4.13.01/documentation/manual/examples/rpnlex.yo0000644000175000017500000000334112633316117022100 0ustar frankfrankThe lexical analyzer's job is low-level parsing: converting characters or sequences of characters into tokens. The b() parser gets its tokens by calling the lexical analyzer tt(lex()), which is a predeclared member of the parser class. See section ref(LEX). Only a simple lexical analyzer is needed for tt(rpn). This lexical analyzer skips blanks and tabs, then reads in numbers as double and returns them as tt(NUM) tokens. Any other character that isn't part of a number is a separate token. 
Note that the token-code for such a single-character token is the character itself. The return value of the lexical analyzer function is a numeric code which represents a token type. The same text used in b() rules to stand for this token type is also a bf(C++) expression for the numeric code for the type. This works in two ways. If the token type is a character literal, then its numeric code is the tt(ASCII) code for that character; you can use the same character literal in the lexical analyzer to express the number. If the token type is an identifier, that identifier is defined by b() as a bf(C++) enumeration value. In this example, therefore, tt(NUM) becomes an enumeration value for tt(lex()) to return. The semantic value of the token (if it has one) is stored into the parser's data member tt(d_val) (comparable to the variable tt(yylval) used by, e.g., Bison). This data member has tt(int) as its default type, but by specifying tt(%stype) in the directive section this default type can be modified (to, e.g., tt(double)). A token value of zero is returned once end-of-file is encountered. (B() recognizes any nonpositive value as indicating the end of the input). Here is the lexical scanner's implementation: verbinclude(rpn/parser/lex.cc) bisonc++-4.13.01/documentation/manual/examples/calc.yo0000644000175000017500000000314712633316117021476 0ustar frankfrankWe now modify tt(rpn) to handle infix operators instead of postfix. Infix notation involves the concept of operator precedence and the need for parentheses nested to arbitrary depth. Here is the b() grammar specification for tt(calc), an infix desk-top calculator: verbinclude(calc/parser/grammar) The functions tt(lex()), tt(error()) and tt(main()) can be the same as used with tt(rpn). There are two important new features shown in this code. In the second section (B() directives), tt(%left) declares token types and says they are left-associative operators. The directives tt(%left) and tt(%right) (right associativity) take the place of tt(%token) which is used to declare a token type name without associativity. (These tokens are single-character literals, which ordinarily don't need to be declared. We declare them here to specify the associativity.) Operator precedence is determined by the line ordering of the directives; the higher the line number of the directive (lower on the page or screen), the higher the precedence. Hence, exponentiation has the highest precedence, unary minus (tt(NEG)) is next, followed by `tt(*)' and `tt(/)', and so on. See section ref(PRECEDENCE). The other important new feature is the tt(%prec) in the grammar section for the unary minus operator. The tt(%prec) simply instructs b() that the rule `tt(| '-' exp)' has the same precedence as tt(NEG) (in this case the next-to-highest). See section ref(CONDEP). Here is a sample run of tt(calc): verb( % calc 4 + 4.5 - (34/(8*3+-3)) 6.88095 -56 + 2 -54 3 ^ 2 9 ) bisonc++-4.13.01/documentation/manual/examples/rpnparser.yo0000644000175000017500000000275612633316117022615 0ustar frankfrankBefore running b() to produce a parser class, we need to decide how to arrange all the source code in one or more source files. Even though the example is fairly simple, all user-defined functions should be defined in source files of their own. For tt(rpn) this means that a source file tt(rpn.cc) is constructed holding tt(main()), and a file tt(parser/lex.cc) holding the lexical scanner's implementation. 
Note that I've put all the parser's files in a separate directory as well (also see section ref(LAYOUT)). In url(rpn's parser)(examples/rpn/parser) directory the file tt(grammar) holds the grammar specification. B() constructs a parser class and a parsing member function from this file after issuing the command: verb( b() grammar ) From this, b() produced the following files: itemization( itt(Parser.h), the parser class definition; itt(Parserbase.h), the parser's em(base) class definition, defining, among other, the grammatical tokens to be used by externally defined lexical scanners; itt(Parser.ih), the em(internal header file), to be included by all implementations of the parser class' members; itt(parse.cc), the parsing member function. ) By default, tt(Parserbase.h) and tt(parse.cc) will be em(re-created) each time b() is re-run. tt(Parser.h) and tt(Parser.ih) may safely be modified by the programmer, e.g., to add new members to to the parser class. These two files will not be overwritten by b(), unless explicitly instructed to do so. bisonc++-4.13.01/documentation/manual/examples/rpn/0000755000175000017500000000000012633316117021015 5ustar frankfrankbisonc++-4.13.01/documentation/manual/examples/rpn/build0000755000175000017500000000042512633316117022043 0ustar frankfrank#!/bin/bash case $1 in (clean) rm parser/[pP]arse* rpn ;; (rpn) cd parser bisonc++ -V -l grammar cd .. g++ -Wall -o rpn *.cc */*.cc ;; (*) echo "$0 [clean|rpn] to clean or build the rpn program" ;; esac bisonc++-4.13.01/documentation/manual/examples/rpn/rpn.h0000644000175000017500000000020112633316117021756 0ustar frankfrank#ifndef _INCLUDED_RPN_H_ #define _INCLUDED_RPN_H_ #include #include "parser/Parser.h" using namespace std; #endif bisonc++-4.13.01/documentation/manual/examples/rpn/parser/0000755000175000017500000000000012633316117022311 5ustar frankfrankbisonc++-4.13.01/documentation/manual/examples/rpn/parser/grammar0000644000175000017500000000144612633316117023667 0ustar frankfrank//DECL %baseclass-preinclude cmath %token NUM %stype double //= %% //RULES input: // empty | input line ; line: '\n' | exp '\n' { std::cout << "\t" << $1 << std::endl; } ; exp: NUM | exp exp '+' { $$ = $1 + $2; } | exp exp '-' { $$ = $1 - $2; } | exp exp '*' { $$ = $1 * $2; } | exp exp '/' { $$ = $1 / $2; } | // Exponentiation: exp exp '^' { $$ = pow($1, $2); } | // Unary minus: exp 'n' { $$ = -$1; } ; //= bisonc++-4.13.01/documentation/manual/examples/rpn/parser/lex.cc0000644000175000017500000000154612633316117023416 0ustar frankfrank#include "Parser.ih" /* Lexical scanner returns a double floating point number on the stack and the token NUM, or the ASCII character read if not a number. Skips all blanks and tabs, returns 0 for EOF. */ int Parser::lex() { char c; // get the next non-ws character while (std::cin.get(c) && c == ' ' || c == '\t') ; if (!std::cin) // no characters were obtained return 0; // indicate End Of Input if (c == '.' || isdigit (c)) // if a digit char was found { std::cin.putback(c); // return the character std::cin >> d_val; // extract a number return NUM; // return the NUM token } return c; // otherwise return the extracted char. 
} bisonc++-4.13.01/documentation/manual/examples/rpn/rpn.cc0000644000175000017500000000020312633316117022116 0ustar frankfrank/* rpn.cc */ #include "rpn.h" int main() { Parser parser; parser.parse(); return 0; } bisonc++-4.13.01/documentation/manual/examples/mfbuild.yo0000644000175000017500000000223712633316117022215 0ustar frankfrankIn order to construct tt(mfcalc), the following steps are suggested: itemization( it() Construct a program tt(mfcalc.cc). Actually, it is already available, since all implementations of tt(main()) used so far are identical to each other. it() Construct the parser in a subdirectory tt(parser): itemization( it() First, construct b()'s input file as indicated above. Name this file tt(grammar); it() Run tt(bisonc++ grammar) to produce the files tt(Parser.h), tt(Parserbase.h), tt(Parser.ih) and tt(parse.cc); it() Modify tt(Parser.h) so as to include tt(FunctionPair, s_functions, s_funTab) and tt(d_symbols); it() Modify tt(Parser.ih) so as to include tt(cmath) and optionally `tt(using namespace std)', which is commented out by default; it() Implement tt(data.cc) and tt(lex.cc) to initialize the static data and to contain the lexical scanner, respectively. ) it() Now construct tt(mfcalc) in tt(mfcalc.cc)'s directory using the following command: verb( g++ -o mfcalc *.cc parser/*.cc ) ) bisonc++-4.13.01/documentation/manual/examples/mfgrammar.yo0000644000175000017500000000033612633316117022542 0ustar frankfrankHere are the grammar rules for the multi-function calculator. Most of them are copied directly from tt(calc). Three rules, those which mention tt(VAR) or tt(FNCT), are new: verbinclude(mfcalc/parser/grammar.rules) bisonc++-4.13.01/documentation/manual/examples/rpninput.yo0000644000175000017500000000247712633316117022460 0ustar frankfrankConsider the definition of tt(input): verb( input: // empty | input line ; ) This definition reads as follows: em(A complete input is either an empty string, or a complete input followed by an input line). Notice that `complete input' is defined in terms of itself. This definition is said to be em(left recursive) since input appears always as the leftmost symbol in the sequence. See section ref(RECURSIVE). The first alternative is empty because there are no symbols between the colon and the first `tt(|)'; this means that input can match an empty string of input (no tokens). We write the rules this way because it is legitimate to type tt(Ctrl-d) right after you start the calculator. It's conventional to put an empty alternative first and write the comment `tt(// empty)' in it. The second alternate rule (tt(input line)) handles all nontrivial input. It means em(After reading any number of lines, read one more line if possible). The left recursion makes this rule into a loop. Since the first alternative matches empty input, the loop can be executed zero or more times. The parser's parsing function (tt(parse())) continues to process input until a grammatical error is seen or the lexical analyzer says there are no more input tokens, which occurs at end of file. bisonc++-4.13.01/documentation/manual/examples/mflex.yo0000644000175000017500000000551512633316117021710 0ustar frankfrankIn tt(mfcalc), the parser's member function tt(lex()) must now recognize variables, function names, numeric values, and the single-character arithmetic operators. Strings of alphanumeric characters with a leading nondigit are recognized as either variables or functions depending on the table in which they are found. 
By arranging tt(lex())'s logic such that the function table is searched first, it is simple to ensure that no variable can ever have the name of a predefined function. The currently implemented approach, in which two different tables are used for the arithmetic functions and the variable symbols, is appealing because it's simple to implement. However, it also has the drawback of being difficult to scale to more generic calculators, using, e.g., different data types and different types of functions. In such situations a single symbol table is preferable, where the keys are the identifiers (variables, function names, predefined constants, etc.) while the values are objects describing their characteristics. A re-implementation of tt(mfcalc) using an integrated symbol table is suggested in one of the exercises of the upcoming section ref(EXERCISES). The parser's tt(lex()) member uses the following approach: itemization( it() All leading blanks and tabs are skipped. it() If no (other) character could be obtained, 0 is returned, indicating End-Of-File. it() If the first non-blank character is a dot or a digit, a number is extracted from the standard input. Since the semantic value data member of tt(mfcalc)'s parser (tt(d_val)) is itself also a tt(union), the numerical value can be extracted into tt(d_val.u_val), and a tt(NUM) token can be returned. it() If the first non-blank character is not a letter, then a single-character token was received and the character's value is returned as the next token. it() Otherwise the read character is a letter. This character and all subsequent alpha-numeric characters are extracted to construct the name of an identifier. Then this identifier is searched for in the tt(s_functions) map. If found, tt(d_val.u_fun) is given the function's address, found as the value of the tt(s_functions) map element corresponding to the read identifier, and token tt(FNCT) is returned. If the symbol is not found in tt(s_functions) the address of the value of tt(d_symbols) associated with the received identifier is assigned to tt(d_val.u_symbol) and token tt(VAR) is returned. Note that this automatically defines newly used variables, since tt(d_symbols[name]) automatically inserts a new element in a map if tt(d_symbols[name]) wasn't already there. ) Here is tt(mfcalc)'s parser's tt(lex()) member function: verbinclude(mfcalc/parser/lex.cc) bisonc++-4.13.01/documentation/manual/examples/intro.yo0000644000175000017500000000240612633316117021724 0ustar frankfrankNow we show and explain three sample programs written using b(): a reverse polish notation calculator, an algebraic (infix) notation calculator, and a multi-function calculator. All three have been tested under Linux (kernel 2.4.24 and above); each produces a usable, though limited, interactive desk-top calculator. These examples are simple, but b() grammars for real programming languages are written the same way. You can copy these examples from this document into source files to try them yourself. Also, the b() package contains the various source files ready for use. itemization( it() Reverse Polish Notation Calculator (section ref(RPN)): A first example of a calculator not requiring any operator precedence. it() Infix Notation Calculator (section ref(CALC)): Infix (algebraic) notation calculator, introducing operator precedence. it() Simple Error Recovery (section ref(ERROR)): How to continue after syntactic errors. it() Multi-Function Calculator (section ref(MFCALC)): Calculator having memory and trigonometrical functions.
It uses multiple data-types for semantic values. it() Suggested Exercises (section ref(EXERCISES)): Ideas for improving the multi-function calculator. ) bisonc++-4.13.01/documentation/manual/examples/mfcalc.yo0000644000175000017500000000222612633316117022016 0ustar frankfrankNow that the basics of b() have been discussed, it is time to move on to a more advanced problem. The above calculators provided only five functions, `tt(+)', `tt(-)', `tt(*)', `tt(/)' and `tt(^)'. It would be nice to have a calculator that provides other mathematical functions such as tt(sin), tt(cos), etc.. It is easy to add new operators to the infix calculator as long as they are only single-character literals. The parser's member tt(lex()) passes back all non-number characters as tokens, so new grammar rules suffice for adding a new operator. But we want something more flexible: built-in functions whose syntaxis is as follows: verb( function_name (argument) ) At the same time, we add memory to the calculator, thus allowing you to create named variables, store values in them, and use them later. Here is a sample session with the multi-function calculator: verb( pi = 3.141592653589 3.14159 sin(pi) 7.93266e-13 alpha = beta1 = 2.3 2.3 alpha 2.3 ln(alpha) 0.832909 exp(ln(beta1)) 2.3 ) Note that multiple assignment and nested function calls are permitted. bisonc++-4.13.01/documentation/manual/examples/calc/0000755000175000017500000000000012633316117021120 5ustar frankfrankbisonc++-4.13.01/documentation/manual/examples/calc/calc.h0000644000175000017500000000020312633316117022166 0ustar frankfrank#ifndef _INCLUDED_CALC_H_ #define _INCLUDED_CALC_H_ #include #include "parser/Parser.h" using namespace std; #endif bisonc++-4.13.01/documentation/manual/examples/calc/build0000755000175000017500000000043212633316117022144 0ustar frankfrank#!/bin/bash case $1 in (clean) rm parser/[pP]arse* calc ;; (calc) cd parser bisonc++ -V -l grammar cd .. g++ -Wall -o calc *.cc */*.cc ;; (*) echo "$0 [clean|calc] to clean or build the calc program" ;; esac bisonc++-4.13.01/documentation/manual/examples/calc/calc.cc0000644000175000017500000000020512633316117022326 0ustar frankfrank/* calc.cc */ #include "calc.h" int main() { Parser parser; parser.parse(); return 0; } bisonc++-4.13.01/documentation/manual/examples/calc/parser/0000755000175000017500000000000012633316117022414 5ustar frankfrankbisonc++-4.13.01/documentation/manual/examples/calc/parser/Parser.ih0000644000175000017500000000056012633316117024173 0ustar frankfrank // Include this file in the sources of the class Parser. // $insert class.h #include "Parser.h" // Add below here any includes etc. that are only // required for the compilation of Parser's sources. 
// UN-comment the next using-declaration if you want to use // symbols from the namespace std without specifying std:: //using namespace std; bisonc++-4.13.01/documentation/manual/examples/calc/parser/Parserbase.h0000644000175000017500000000362612633316117024663 0ustar frankfrank#ifndef ParserBase_h_included #define ParserBase_h_included #include #include // $insert preincludes #include "cmath" namespace // anonymous { struct PI; } class ParserBase { public: // $insert tokens // Symbolic tokens: enum Tokens { NUM = 257, NEG, }; // $insert STYPE typedef double STYPE; private: int d_stackIdx; std::vector d_stateStack; std::vector d_valueStack; protected: enum Return { PARSE_ACCEPT = 0, // values used as parse()'s return values PARSE_ABORT = 1 }; enum ErrorRecovery { DEFAULT_RECOVERY_MODE, UNEXPECTED_TOKEN, }; bool d_debug; size_t d_nErrors; int d_token; int d_nextToken; size_t d_state; STYPE *d_vsp; STYPE d_val; ParserBase(); void ABORT() const throw(Return); void ACCEPT() const throw(Return); void ERROR() const throw(ErrorRecovery); void checkEOF() const; void clearin(); bool debug() const; void pop(size_t count = 1); void push(size_t nextState); void reduce(PI const &productionInfo); size_t top() const; public: void setDebug(bool mode); }; inline bool ParserBase::debug() const { return d_debug; } inline void ParserBase::setDebug(bool mode) { d_debug = mode; } inline void ParserBase::ABORT() const throw(Return) { throw PARSE_ABORT; } inline void ParserBase::ACCEPT() const throw(Return) { throw PARSE_ACCEPT; } inline void ParserBase::ERROR() const throw(ErrorRecovery) { throw UNEXPECTED_TOKEN; } // As a convenience, when including ParserBase.h its symbols are available as // symbols in the class Parser, too. #define Parser ParserBase #endif bisonc++-4.13.01/documentation/manual/examples/calc/parser/grammar0000644000175000017500000000160012633316117023762 0ustar frankfrank%baseclass-preinclude cmath %stype double %token NUM %left '-' '+' %left '*' '/' %left NEG // negation--unary minus %right '^' // exponentiation %% input: // empty | input line ; line: '\n' | exp '\n' { std::cout << "\t" << $1 << '\n'; } ; exp: NUM | exp '+' exp { $$ = $1 + $3; } | exp '-' exp { $$ = $1 - $3; } | exp '*' exp { $$ = $1 * $3; } | exp '/' exp { $$ = $1 / $3; } | '-' exp %prec NEG { $$ = -$2; } | // Exponentiation: exp '^' exp { $$ = pow($1, $3); } | '(' exp ')' { $$ = $2; } ; bisonc++-4.13.01/documentation/manual/examples/calc/parser/Parser.h0000644000175000017500000000145512633316117024026 0ustar frankfrank#ifndef Parser_h_included #define Parser_h_included // $insert baseclass #include "Parserbase.h" #undef Parser class Parser: public ParserBase { public: int parse(); private: void error(char const *msg); // called on (syntax) errors int lex(); // returns the next token from the // lexical scanner. void print(); // use, e.g., d_token, d_loc // support functions for parse(): void executeAction(int ruleNr); void errorRecovery(); int lookup(bool recovery); void nextToken(); }; inline void Parser::error(char const *msg) { std::cerr << msg << '\n'; } // $insert lex inline void Parser::print() // use d_token, d_loc {} #endif bisonc++-4.13.01/documentation/manual/examples/calc/parser/lex.cc0000644000175000017500000000154612633316117023521 0ustar frankfrank#include "Parser.ih" /* Lexical scanner returns a double floating point number on the stack and the token NUM, or the ASCII character read if not a number. Skips all blanks and tabs, returns 0 for EOF. 
*/ int Parser::lex() { char c; // get the next non-ws character while (std::cin.get(c) && c == ' ' || c == '\t') ; if (!std::cin) // no characters were obtained return 0; // indicate End Of Input if (c == '.' || isdigit (c)) // if a digit char was found { std::cin.putback(c); // return the character std::cin >> d_val; // extract a number return NUM; // return the NUM token } return c; // otherwise return the extracted char. } bisonc++-4.13.01/documentation/manual/examples/calc/parser/parse.output0000644000175000017500000003053612633316117025017 0ustar frankfrank Production Rules: 1: input -> 2: input -> input line 3: line -> '\n' 4: line -> exp '\n' 5: exp -> NUM 6: exp -> exp '+' exp 7: exp -> exp '-' exp 8: exp -> exp '*' exp 9: exp -> exp '/' exp 10: exp -> '-' exp 11: exp -> exp '^' exp 12: exp -> '(' exp ')' 13: input_$ -> input GRAMMAR STATES: State 0 input_$ -> . input (rule 13) Lookahead set { } All production rules (using dot == 0) of: input Lookahead set { NUM '-' '\n' '(' } on input: shift, and go to state 1 _default_: reduce, using production 1: input -> State 1 input_$ -> input . (rule 13) Lookahead set { } input -> input . line (rule 2) Lookahead set { NUM '-' '\n' '(' } All production rules (using dot == 0) of: line Lookahead set { NUM '-' '\n' '(' } exp Lookahead set { '-' '+' '*' '/' '^' '\n' } on NUM: shift, and go to state 2 on '-': shift, and go to state 3 on '\n': shift, and go to state 5 on '(': shift, and go to state 7 on line: shift, and go to state 4 on exp: shift, and go to state 6 State 2 (inherited terminal: NUM) exp -> NUM . (rule 5) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } _default_: reduce, using production 5: exp -> NUM State 3 (inherited terminal: '-') exp -> '-' . exp (rule 10) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' '\n' } on NUM: shift, and go to state 2 on '-': shift, and go to state 3 on '(': shift, and go to state 7 on exp: shift, and go to state 8 State 4 input -> input line . (rule 2) Lookahead set { NUM '-' '\n' '(' } _default_: reduce, using production 2: input -> input line State 5 (inherited terminal: '\n') line -> '\n' . (rule 3) Lookahead set { NUM '-' '\n' '(' } _default_: reduce, using production 3: line -> '\n' State 6 line -> exp . '\n' (rule 4) Lookahead set { NUM '-' '\n' '(' } exp -> exp . '+' exp (rule 6) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 7) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '*' exp (rule 8) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '^' exp (rule 11) Lookahead set { '-' '+' '*' '/' '^' '\n' } on '-': shift, and go to state 9 on '+': shift, and go to state 10 on '*': shift, and go to state 11 on '/': shift, and go to state 12 on '^': shift, and go to state 13 on '\n': shift, and go to state 14 State 7 (inherited terminal: '(') exp -> '(' . exp ')' (rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' ')' } on NUM: shift, and go to state 2 on '-': shift, and go to state 3 on '(': shift, and go to state 7 on exp: shift, and go to state 15 State 8 (inherited terminal: '-') exp -> '-' exp . (rule 10) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '+' exp (rule 6) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 7) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . 
'*' exp (rule 8) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '^' exp (rule 11) Lookahead set { '-' '+' '*' '/' '^' '\n' } on '^': shift, and go to state 13 _default_: reduce, using production 10: exp -> '-' exp Actions suppressed by the default conflict resolution procedures: [on '^': reduce, using production 10: exp -> '-' exp] State 9 (inherited terminal: '-') exp -> exp '-' . exp (rule 7) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' '\n' } on NUM: shift, and go to state 2 on '-': shift, and go to state 3 on '(': shift, and go to state 7 on exp: shift, and go to state 16 State 10 (inherited terminal: '+') exp -> exp '+' . exp (rule 6) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' '\n' } on NUM: shift, and go to state 2 on '-': shift, and go to state 3 on '(': shift, and go to state 7 on exp: shift, and go to state 17 State 11 (inherited terminal: '*') exp -> exp '*' . exp (rule 8) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' '\n' } on NUM: shift, and go to state 2 on '-': shift, and go to state 3 on '(': shift, and go to state 7 on exp: shift, and go to state 18 State 12 (inherited terminal: '/') exp -> exp '/' . exp (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' '\n' } on NUM: shift, and go to state 2 on '-': shift, and go to state 3 on '(': shift, and go to state 7 on exp: shift, and go to state 19 State 13 (inherited terminal: '^') exp -> exp '^' . exp (rule 11) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' '\n' } on NUM: shift, and go to state 2 on '-': shift, and go to state 3 on '(': shift, and go to state 7 on exp: shift, and go to state 20 State 14 (inherited terminal: '\n') line -> exp '\n' . (rule 4) Lookahead set { NUM '-' '\n' '(' } _default_: reduce, using production 4: line -> exp '\n' State 15 (inherited terminal: '(') exp -> '(' exp . ')' (rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } exp -> exp . '+' exp (rule 6) Lookahead set { '-' '+' '*' '/' '^' ')' } exp -> exp . '-' exp (rule 7) Lookahead set { '-' '+' '*' '/' '^' ')' } exp -> exp . '*' exp (rule 8) Lookahead set { '-' '+' '*' '/' '^' ')' } exp -> exp . '/' exp (rule 9) Lookahead set { '-' '+' '*' '/' '^' ')' } exp -> exp . '^' exp (rule 11) Lookahead set { '-' '+' '*' '/' '^' ')' } on '-': shift, and go to state 9 on '+': shift, and go to state 10 on '*': shift, and go to state 11 on '/': shift, and go to state 12 on '^': shift, and go to state 13 on ')': shift, and go to state 21 State 16 (inherited terminal: '-') exp -> exp '-' exp . (rule 7) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '+' exp (rule 6) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 7) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '*' exp (rule 8) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . 
'^' exp (rule 11) Lookahead set { '-' '+' '*' '/' '^' '\n' } on '*': shift, and go to state 11 on '/': shift, and go to state 12 on '^': shift, and go to state 13 _default_: reduce, using production 7: exp -> exp '-' exp Actions suppressed by the default conflict resolution procedures: [on '*': reduce, using production 7: exp -> exp '-' exp] [on '/': reduce, using production 7: exp -> exp '-' exp] [on '^': reduce, using production 7: exp -> exp '-' exp] State 17 (inherited terminal: '+') exp -> exp '+' exp . (rule 6) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '+' exp (rule 6) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 7) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '*' exp (rule 8) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '^' exp (rule 11) Lookahead set { '-' '+' '*' '/' '^' '\n' } on '*': shift, and go to state 11 on '/': shift, and go to state 12 on '^': shift, and go to state 13 _default_: reduce, using production 6: exp -> exp '+' exp Actions suppressed by the default conflict resolution procedures: [on '*': reduce, using production 6: exp -> exp '+' exp] [on '/': reduce, using production 6: exp -> exp '+' exp] [on '^': reduce, using production 6: exp -> exp '+' exp] State 18 (inherited terminal: '*') exp -> exp '*' exp . (rule 8) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '+' exp (rule 6) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 7) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '*' exp (rule 8) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '^' exp (rule 11) Lookahead set { '-' '+' '*' '/' '^' '\n' } on '^': shift, and go to state 13 _default_: reduce, using production 8: exp -> exp '*' exp Actions suppressed by the default conflict resolution procedures: [on '^': reduce, using production 8: exp -> exp '*' exp] State 19 (inherited terminal: '/') exp -> exp '/' exp . (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '+' exp (rule 6) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 7) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '*' exp (rule 8) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '^' exp (rule 11) Lookahead set { '-' '+' '*' '/' '^' '\n' } on '^': shift, and go to state 13 _default_: reduce, using production 9: exp -> exp '/' exp Actions suppressed by the default conflict resolution procedures: [on '^': reduce, using production 9: exp -> exp '/' exp] State 20 (inherited terminal: '^') exp -> exp '^' exp . (rule 11) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } exp -> exp . '+' exp (rule 6) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 7) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '*' exp (rule 8) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '^' exp (rule 11) Lookahead set { '-' '+' '*' '/' '^' '\n' } on '^': shift, and go to state 13 _default_: reduce, using production 11: exp -> exp '^' exp Actions suppressed by the default conflict resolution procedures: [on '^': reduce, using production 11: exp -> exp '^' exp] State 21 (inherited terminal: ')') exp -> '(' exp ')' . 
(rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } _default_: reduce, using production 12: exp -> '(' exp ')' bisonc++-4.13.01/documentation/manual/examples/exercises.yo0000644000175000017500000000211212633316117022555 0ustar frankfrankHere are some suggestions for you to consider to improve tt(mfcalc)'s implementation and operating mode: itemization( it() Add some additional functions from `tt(cmath)' to the tt(Parser::s_functions); it() Define a class tt(Symbol) in which the symbol type, and an appropriate value for the symbol is stored. Define only one map tt(d_symbols) in the Parser, and provide the tt(Symbol) class with means to obtain the appropriate values for the various token types. it() Remove the tt(%union) directive, and change it into tt(%stype Symbol). Hint: use the tt(%preinclude-header) directive to make tt(Symbol) known to the parser's base class. it() Define a token tt(CONST) for numerical constants (like tt(PI), (E)), and pre-define some numerical constants; it() Make the program report an error if the user refers to an uninitialized variable in any way except to store a value in it. Hints: use a tt(get()) and tt(set()) member pair in tt(Symbol), and use the appropriate member in the appropriate tt(expr) rule; use tt(ERROR()) to initiate error recovery. ) bisonc++-4.13.01/documentation/manual/examples/rpn.yo0000644000175000017500000000055612633316117021374 0ustar frankfrankThe first example is that of a simple double-precision reverse polish notation calculator (a calculator using postfix operators). This example provides a good starting point, since operator precedence is not an issue. The second example illustrates how operator precedence is handled. All sources for this calculator are found in the lurl(examples/rpn/) directory. bisonc++-4.13.01/documentation/manual/examples/rpngram.yo0000644000175000017500000000214612633316117022240 0ustar frankfrankHere are the grammar rules for the reverse polish notation calculator. verbinsert(//RULES)(rpn/parser/grammar) The groupings of the tt(rpn) `language' defined here are the expression (given the name tt(exp)), the line of input (tt(line)), and the complete input transcript (tt(input)). Each of these nonterminal symbols has several alternate rules, joined by the `tt(|)' punctuator which is read as the logical em(or). The following sections explain what these rules mean. The semantics of the language is determined by the actions taken when a grouping is recognized. The actions are the bf(C++) code that appears inside braces. See section ref(ACTIONS). You must specify these actions in bf(C++), but b() provides the means for passing semantic values between the rules. In each action, the pseudo-variable tt($$) represents the semantic value for the grouping that the rule is going to construct. Assigning a value to tt($$) is the main job of most actions. The semantic values of the components of the rule are referred to as tt($1) (the first component of a rule), tt($2) (the second component), and so on. bisonc++-4.13.01/documentation/manual/examples/mfcalc/0000755000175000017500000000000012633316117021443 5ustar frankfrankbisonc++-4.13.01/documentation/manual/examples/mfcalc/build0000755000175000017500000000061412633316117022471 0ustar frankfrank#!/bin/bash case $1 in (clean) rm -f parser/[pP]arse* mfcalc cp parser/internalheader.hh parser/Parser.ih cp parser/parser.header parser/Parser.h ;; (mfcalc) cd parser bisonc++ -V -l grammar cd .. 
g++ -Wall -o mfcalc *.cc */*.cc ;; (*) echo "$0 [clean|mfcalc] to clean or build the mfcalc program" ;; esac bisonc++-4.13.01/documentation/manual/examples/mfcalc/mfcalc.h0000644000175000017500000000020712633316117023040 0ustar frankfrank#ifndef _INCLUDED_MFCALC_H_ #define _INCLUDED_MFCALC_H_ #include #include "parser/Parser.h" using namespace std; #endif bisonc++-4.13.01/documentation/manual/examples/mfcalc/parser/0000755000175000017500000000000012633316117022737 5ustar frankfrankbisonc++-4.13.01/documentation/manual/examples/mfcalc/parser/parser.header0000644000175000017500000000163512633316117025412 0ustar frankfrank#ifndef Parser_h_included #define Parser_h_included // for error()'s inline implementation #include // for mfcalc's memory #include #include // $insert baseclass #include "Parserbase.h" #undef Parser class Parser: public ParserBase { typedef std::pair FunctionPair; std::map d_symbols; static std::map s_functions; static FunctionPair s_funTab[]; public: int parse(); private: void error(char const *msg) { std::cerr << msg << '\n'; } // $insert lex int lex(); void print() // d_token, d_loc {} // support functions for parse(): void executeAction(int d_production); size_t errorRecovery(); int lookup(int token); int nextToken(); }; #endif bisonc++-4.13.01/documentation/manual/examples/mfcalc/parser/Parser.ih0000644000175000017500000000057512633316117024524 0ustar frankfrank // Include this file in the sources of the class Parser. // $insert class.h #include "Parser.h" // Add below here any includes etc. that are only // required for the compilation of Parser's sources. #include // UN-comment the next using-declaration if you want to use // symbols from the namespace std without specifying std:: using namespace std; bisonc++-4.13.01/documentation/manual/examples/mfcalc/parser/Parserbase.h0000644000175000017500000000371212633316117025202 0ustar frankfrank#ifndef ParserBase_h_included #define ParserBase_h_included #include #include namespace // anonymous { struct PI; } class ParserBase { public: // $insert tokens // Symbolic tokens: enum Tokens { NUM = 257, VAR, FNCT, NEG, }; // $insert STYPE struct STYPE { double u_val; double *u_symbol; double (*u_fun)(double); }; private: int d_stackIdx; std::vector d_stateStack; std::vector d_valueStack; protected: enum Return { PARSE_ACCEPT = 0, // values used as parse()'s return values PARSE_ABORT = 1 }; enum ErrorRecovery { DEFAULT_RECOVERY_MODE, UNEXPECTED_TOKEN, }; bool d_debug; size_t d_nErrors; int d_token; int d_nextToken; size_t d_state; STYPE *d_vsp; STYPE d_val; ParserBase(); void ABORT() const throw(Return); void ACCEPT() const throw(Return); void ERROR() const throw(ErrorRecovery); void checkEOF() const; void clearin(); bool debug() const; void pop(size_t count = 1); void push(size_t nextState); void reduce(PI const &productionInfo); size_t top() const; public: void setDebug(bool mode); }; inline bool ParserBase::debug() const { return d_debug; } inline void ParserBase::setDebug(bool mode) { d_debug = mode; } inline void ParserBase::ABORT() const throw(Return) { throw PARSE_ABORT; } inline void ParserBase::ACCEPT() const throw(Return) { throw PARSE_ACCEPT; } inline void ParserBase::ERROR() const throw(ErrorRecovery) { throw UNEXPECTED_TOKEN; } // As a convenience, when including ParserBase.h its symbols are available as // symbols in the class Parser, too. 
#define Parser ParserBase #endif bisonc++-4.13.01/documentation/manual/examples/mfcalc/parser/internalheader.ih0000644000175000017500000000057512633316117026255 0ustar frankfrank // Include this file in the sources of the class Parser. // $insert class.h #include "Parser.h" // Add below here any includes etc. that are only // required for the compilation of Parser's sources. #include // UN-comment the next using-declaration if you want to use // symbols from the namespace std without specifying std:: using namespace std; bisonc++-4.13.01/documentation/manual/examples/mfcalc/parser/grammar0000644000175000017500000000256512633316117024320 0ustar frankfrank%union { double u_val; double *u_symbol; double (*u_fun)(double); } %token NUM // Simple double precision number %token VAR // Variable %token FNCT // Function %type exp %right '=' %left '-' '+' %left '*' '/' %left NEG // negation--unary minus %right '^' // exponentiation %% //GRAMMAR input: // empty | input line ; //LINE line: '\n' | exp '\n' { cout << "\t" << $1 << endl; } | error '\n' ; //= exp: NUM | VAR { $$ = *$1; } | VAR '=' exp { $$ = *$1 = $3; } | FNCT '(' exp ')' { $$ = (*$1)($3); } | exp '+' exp { $$ = $1 + $3; } | exp '-' exp { $$ = $1 - $3; } | exp '*' exp { $$ = $1 * $3; } | exp '/' exp { $$ = $1 / $3; } | '-' exp %prec NEG { $$ = -$2; } | // Exponentiation: exp '^' exp { $$ = pow($1, $3); } | '(' exp ')' { $$ = $2; } ; //= bisonc++-4.13.01/documentation/manual/examples/mfcalc/parser/data.cc0000644000175000017500000000064412633316117024163 0ustar frankfrank#include "Parser.ih" Parser::FunctionPair Parser::s_funTab[] = { FunctionPair("sin", sin), FunctionPair("cos", cos), FunctionPair("atan", atan), FunctionPair("ln", log), FunctionPair("exp", exp), FunctionPair("sqrt", sqrt), }; map Parser::s_functions ( Parser::s_funTab, Parser::s_funTab + sizeof(Parser::s_funTab) / sizeof(Parser::FunctionPair) ); bisonc++-4.13.01/documentation/manual/examples/mfcalc/parser/grammar.rules0000644000175000017500000000172112633316117025442 0ustar frankfrankinput: // empty | input line ; line: '\n' | exp '\n' { cout << "\t" << $1 << endl; } | error '\n' ; exp: NUM | VAR { $$ = *$1; } | VAR '=' exp { $$ = *$1 = $3; } | FNCT '(' exp ')' { $$ = (*$1)($3); } | exp '+' exp { $$ = $1 + $3; } | exp '-' exp { $$ = $1 - $3; } | exp '*' exp { $$ = $1 * $3; } | exp '/' exp { $$ = $1 / $3; } | '-' exp %prec NEG { $$ = -$2; } | // Exponentiation: exp '^' exp { $$ = pow($1, $3); } | '(' exp ')' { $$ = $2; } ; bisonc++-4.13.01/documentation/manual/examples/mfcalc/parser/Parser.h0000644000175000017500000000211612633316117024344 0ustar frankfrank#ifndef Parser_h_included #define Parser_h_included // added for mfcalc's memory #include #include // $insert baseclass #include "Parserbase.h" #undef Parser class Parser: public ParserBase { typedef std::pair FunctionPair; std::map d_symbols; static std::map s_functions; static FunctionPair s_funTab[]; public: int parse(); private: void error(char const *msg); // called on (syntax) errors int lex(); // returns the next token from the // lexical scanner. 
void print(); // use, e.g., d_token, d_loc // support functions for parse(): void executeAction(int ruleNr); void errorRecovery(); int lookup(bool recovery); void nextToken(); }; inline void Parser::error(char const *msg) { std::cerr << msg << '\n'; } // $insert lex inline void Parser::print() // use d_token, d_loc {} #endif bisonc++-4.13.01/documentation/manual/examples/mfcalc/parser/lex.cc0000644000175000017500000000413012633316117024034 0ustar frankfrank#include "Parser.ih" /* Lexical scanner returns a double floating point number on the stack and the token NUM, or the ASCII character read if not a number. Skips all blanks and tabs, returns 0 for EOF. */ int Parser::lex() { char c; // get the next non-ws character while (cin.get(c) && c == ' ' || c == '\t') ; if (!cin) // no characters were obtained return 0; // indicate End Of Input if (c == '.' || isdigit (c)) // if a digit char was found { cin.putback(c); // return the character cin >> d_val.u_val; // extract a number return NUM; // return the NUM token } if (!isalpha(c)) // c doesn't start an identifier: return c; // return a single character token. // in all other cases, an ident is entered. Recognize a var or function string word; // store the name in a string object while (true) // process all alphanumerics: { word += c; // add 'm to `word' if (!cin.get(c)) // no more chars? then done here break; if (!isalnum(c)) // not an alphanumeric: put it back and done. { cin.putback(c); break; } } // Now lookup the name as a function's name map::iterator function = s_functions.find(word); // Got it, so return FPTR if (function != s_functions.end()) { d_val.u_fun = function->second; return FNCT; } // no function, so return a VAR. Set // u_symbol to the symbol's address in the // d_symbol map. The map will add the // symbol if not yet there. d_val.u_symbol = &d_symbols[word]; return VAR; } bisonc++-4.13.01/documentation/manual/examples/mfcalc/parser/parse.output0000644000175000017500000004321412633316117025337 0ustar frankfrank Production Rules: 1: input -> 2: input -> input line 3: line -> '\n' 4: line -> exp '\n' 5: line -> error '\n' 6: exp -> NUM 7: exp -> VAR 8: exp -> VAR '=' exp 9: exp -> FNCT '(' exp ')' 10: exp -> exp '+' exp 11: exp -> exp '-' exp 12: exp -> exp '*' exp 13: exp -> exp '/' exp 14: exp -> '-' exp 15: exp -> exp '^' exp 16: exp -> '(' exp ')' 17: input_$ -> input GRAMMAR STATES: State 0 input_$ -> . input (rule 17) Lookahead set { } All production rules (using dot == 0) of: input Lookahead set { _error_ NUM VAR FNCT '-' '\n' '(' } on input: shift, and go to state 1 _default_: reduce, using production 1: input -> State 1 input_$ -> input . (rule 17) Lookahead set { } input -> input . line (rule 2) Lookahead set { _error_ NUM VAR FNCT '-' '\n' '(' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' '\n' } line Lookahead set { _error_ NUM VAR FNCT '-' '\n' '(' } on _error_: shift, and go to state 2 on NUM: shift, and go to state 3 on VAR: shift, and go to state 4 on FNCT: shift, and go to state 5 on '-': shift, and go to state 7 on '\n': shift, and go to state 9 on '(': shift, and go to state 10 on exp: shift, and go to state 6 on line: shift, and go to state 8 State 2 (inherited terminal: _error_) line -> _error_ . '\n' (rule 5) Lookahead set { _error_ NUM VAR FNCT '-' '\n' '(' } on '\n': shift, and go to state 11 State 3 (inherited terminal: NUM) exp -> NUM . 
(rule 6) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } _default_: reduce, using production 6: exp -> NUM State 4 (inherited terminal: VAR) exp -> VAR . (rule 7) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } exp -> VAR . '=' exp (rule 8) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } on '=': shift, and go to state 12 _default_: reduce, using production 7: exp -> VAR State 5 (inherited terminal: FNCT) exp -> FNCT . '(' exp ')' (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } on '(': shift, and go to state 13 State 6 exp -> exp . '+' exp (rule 10) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 11) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '*' exp (rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 13) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '^' exp (rule 15) Lookahead set { '-' '+' '*' '/' '^' '\n' } line -> exp . '\n' (rule 4) Lookahead set { _error_ NUM VAR FNCT '-' '\n' '(' } on '-': shift, and go to state 14 on '+': shift, and go to state 15 on '*': shift, and go to state 16 on '/': shift, and go to state 17 on '^': shift, and go to state 18 on '\n': shift, and go to state 19 State 7 (inherited terminal: '-') exp -> '-' . exp (rule 14) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' '\n' } on NUM: shift, and go to state 3 on VAR: shift, and go to state 4 on FNCT: shift, and go to state 5 on '-': shift, and go to state 7 on '(': shift, and go to state 10 on exp: shift, and go to state 20 State 8 input -> input line . (rule 2) Lookahead set { _error_ NUM VAR FNCT '-' '\n' '(' } _default_: reduce, using production 2: input -> input line State 9 (inherited terminal: '\n') line -> '\n' . (rule 3) Lookahead set { _error_ NUM VAR FNCT '-' '\n' '(' } _default_: reduce, using production 3: line -> '\n' State 10 (inherited terminal: '(') exp -> '(' . exp ')' (rule 16) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' ')' } on NUM: shift, and go to state 3 on VAR: shift, and go to state 4 on FNCT: shift, and go to state 5 on '-': shift, and go to state 7 on '(': shift, and go to state 10 on exp: shift, and go to state 21 State 11 (inherited terminal: '\n') line -> _error_ '\n' . (rule 5) Lookahead set { _error_ NUM VAR FNCT '-' '\n' '(' } _default_: reduce, using production 5: line -> error '\n' State 12 (inherited terminal: '=') exp -> VAR '=' . exp (rule 8) Lookahead set { '-' '+' '*' '/' '^' '\n' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' '\n' } on NUM: shift, and go to state 3 on VAR: shift, and go to state 4 on FNCT: shift, and go to state 5 on '-': shift, and go to state 7 on '(': shift, and go to state 10 on exp: shift, and go to state 22 State 13 (inherited terminal: '(') exp -> FNCT '(' . exp ')' (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' ')' } on NUM: shift, and go to state 3 on VAR: shift, and go to state 4 on FNCT: shift, and go to state 5 on '-': shift, and go to state 7 on '(': shift, and go to state 10 on exp: shift, and go to state 23 State 14 (inherited terminal: '-') exp -> exp '-' . 
exp (rule 11) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' '\n' } on NUM: shift, and go to state 3 on VAR: shift, and go to state 4 on FNCT: shift, and go to state 5 on '-': shift, and go to state 7 on '(': shift, and go to state 10 on exp: shift, and go to state 24 State 15 (inherited terminal: '+') exp -> exp '+' . exp (rule 10) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' '\n' } on NUM: shift, and go to state 3 on VAR: shift, and go to state 4 on FNCT: shift, and go to state 5 on '-': shift, and go to state 7 on '(': shift, and go to state 10 on exp: shift, and go to state 25 State 16 (inherited terminal: '*') exp -> exp '*' . exp (rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' '\n' } on NUM: shift, and go to state 3 on VAR: shift, and go to state 4 on FNCT: shift, and go to state 5 on '-': shift, and go to state 7 on '(': shift, and go to state 10 on exp: shift, and go to state 26 State 17 (inherited terminal: '/') exp -> exp '/' . exp (rule 13) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' '\n' } on NUM: shift, and go to state 3 on VAR: shift, and go to state 4 on FNCT: shift, and go to state 5 on '-': shift, and go to state 7 on '(': shift, and go to state 10 on exp: shift, and go to state 27 State 18 (inherited terminal: '^') exp -> exp '^' . exp (rule 15) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' '\n' } on NUM: shift, and go to state 3 on VAR: shift, and go to state 4 on FNCT: shift, and go to state 5 on '-': shift, and go to state 7 on '(': shift, and go to state 10 on exp: shift, and go to state 28 State 19 (inherited terminal: '\n') line -> exp '\n' . (rule 4) Lookahead set { _error_ NUM VAR FNCT '-' '\n' '(' } _default_: reduce, using production 4: line -> exp '\n' State 20 (inherited terminal: '-') exp -> '-' exp . (rule 14) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '+' exp (rule 10) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 11) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '*' exp (rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 13) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '^' exp (rule 15) Lookahead set { '-' '+' '*' '/' '^' '\n' } on '^': shift, and go to state 18 _default_: reduce, using production 14: exp -> '-' exp Actions suppressed by the default conflict resolution procedures: [on '^': reduce, using production 14: exp -> '-' exp] State 21 (inherited terminal: '(') exp -> '(' exp . ')' (rule 16) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } exp -> exp . '+' exp (rule 10) Lookahead set { '-' '+' '*' '/' '^' ')' } exp -> exp . '-' exp (rule 11) Lookahead set { '-' '+' '*' '/' '^' ')' } exp -> exp . '*' exp (rule 12) Lookahead set { '-' '+' '*' '/' '^' ')' } exp -> exp . '/' exp (rule 13) Lookahead set { '-' '+' '*' '/' '^' ')' } exp -> exp . 
'^' exp (rule 15) Lookahead set { '-' '+' '*' '/' '^' ')' } on '-': shift, and go to state 14 on '+': shift, and go to state 15 on '*': shift, and go to state 16 on '/': shift, and go to state 17 on '^': shift, and go to state 18 on ')': shift, and go to state 29 State 22 (inherited terminal: '=') exp -> VAR '=' exp . (rule 8) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '+' exp (rule 10) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 11) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '*' exp (rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 13) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '^' exp (rule 15) Lookahead set { '-' '+' '*' '/' '^' '\n' } on '-': shift, and go to state 14 on '+': shift, and go to state 15 on '*': shift, and go to state 16 on '/': shift, and go to state 17 on '^': shift, and go to state 18 _default_: reduce, using production 8: exp -> VAR '=' exp Actions suppressed by the default conflict resolution procedures: [on '-': reduce, using production 8: exp -> VAR '=' exp] [on '+': reduce, using production 8: exp -> VAR '=' exp] [on '*': reduce, using production 8: exp -> VAR '=' exp] [on '/': reduce, using production 8: exp -> VAR '=' exp] [on '^': reduce, using production 8: exp -> VAR '=' exp] State 23 (inherited terminal: '(') exp -> FNCT '(' exp . ')' (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '+' exp (rule 10) Lookahead set { '-' '+' '*' '/' '^' ')' } exp -> exp . '-' exp (rule 11) Lookahead set { '-' '+' '*' '/' '^' ')' } exp -> exp . '*' exp (rule 12) Lookahead set { '-' '+' '*' '/' '^' ')' } exp -> exp . '/' exp (rule 13) Lookahead set { '-' '+' '*' '/' '^' ')' } exp -> exp . '^' exp (rule 15) Lookahead set { '-' '+' '*' '/' '^' ')' } on '-': shift, and go to state 14 on '+': shift, and go to state 15 on '*': shift, and go to state 16 on '/': shift, and go to state 17 on '^': shift, and go to state 18 on ')': shift, and go to state 30 State 24 (inherited terminal: '-') exp -> exp '-' exp . (rule 11) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '+' exp (rule 10) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 11) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '*' exp (rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 13) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '^' exp (rule 15) Lookahead set { '-' '+' '*' '/' '^' '\n' } on '*': shift, and go to state 16 on '/': shift, and go to state 17 on '^': shift, and go to state 18 _default_: reduce, using production 11: exp -> exp '-' exp Actions suppressed by the default conflict resolution procedures: [on '*': reduce, using production 11: exp -> exp '-' exp] [on '/': reduce, using production 11: exp -> exp '-' exp] [on '^': reduce, using production 11: exp -> exp '-' exp] State 25 (inherited terminal: '+') exp -> exp '+' exp . (rule 10) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '+' exp (rule 10) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 11) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '*' exp (rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 13) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . 
'^' exp (rule 15) Lookahead set { '-' '+' '*' '/' '^' '\n' } on '*': shift, and go to state 16 on '/': shift, and go to state 17 on '^': shift, and go to state 18 _default_: reduce, using production 10: exp -> exp '+' exp Actions suppressed by the default conflict resolution procedures: [on '*': reduce, using production 10: exp -> exp '+' exp] [on '/': reduce, using production 10: exp -> exp '+' exp] [on '^': reduce, using production 10: exp -> exp '+' exp] State 26 (inherited terminal: '*') exp -> exp '*' exp . (rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '+' exp (rule 10) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 11) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '*' exp (rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 13) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '^' exp (rule 15) Lookahead set { '-' '+' '*' '/' '^' '\n' } on '^': shift, and go to state 18 _default_: reduce, using production 12: exp -> exp '*' exp Actions suppressed by the default conflict resolution procedures: [on '^': reduce, using production 12: exp -> exp '*' exp] State 27 (inherited terminal: '/') exp -> exp '/' exp . (rule 13) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '+' exp (rule 10) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 11) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '*' exp (rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 13) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '^' exp (rule 15) Lookahead set { '-' '+' '*' '/' '^' '\n' } on '^': shift, and go to state 18 _default_: reduce, using production 13: exp -> exp '/' exp Actions suppressed by the default conflict resolution procedures: [on '^': reduce, using production 13: exp -> exp '/' exp] State 28 (inherited terminal: '^') exp -> exp '^' exp . (rule 15) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } exp -> exp . '+' exp (rule 10) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 11) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '*' exp (rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 13) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '^' exp (rule 15) Lookahead set { '-' '+' '*' '/' '^' '\n' } on '^': shift, and go to state 18 _default_: reduce, using production 15: exp -> exp '^' exp Actions suppressed by the default conflict resolution procedures: [on '^': reduce, using production 15: exp -> exp '^' exp] State 29 (inherited terminal: ')') exp -> '(' exp ')' . (rule 16) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } _default_: reduce, using production 16: exp -> '(' exp ')' State 30 (inherited terminal: ')') exp -> FNCT '(' exp ')' . (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' } _default_: reduce, using production 9: exp -> FNCT '(' exp ')' bisonc++-4.13.01/documentation/manual/examples/mfcalc/mfcalc.cc0000644000175000017500000000021112633316117023171 0ustar frankfrank/* mfcalc.cc */ #include "mfcalc.h" int main() { Parser parser; parser.parse(); return 0; } bisonc++-4.13.01/documentation/manual/examples/rpnmain.yo0000644000175000017500000000042712633316117022236 0ustar frankfrankIn keeping with the spirit of this example, the controlling function tt(main()) is kept to the bare minimum. 
The only requirement is that it constructs a parser object and then calls its parsing function tt(parse()) to start the process of parsing:
    verbinclude(rpn/rpn.cc)
bisonc++-4.13.01/documentation/manual/examples/rpnerror.yo0000644000175000017500000000150712633316117022443 0ustar frankfrankWhen tt(parse()) detects a em(syntax error), it calls the error reporting member function tt(error()) to print an error message (usually, but not always, em(parse error)). It is up to the programmer to supply an implementation, but a very bland and simple in-line implementation is provided by b() in the class header file (see chapter ref(INTERFACE)). This default implementation is acceptable for tt(rpn). Once tt(error()) returns, the b() parser may recover from the error and continue parsing if the grammar contains a suitable error rule (see chapter ref(RECOVERY)). Otherwise, the parsing function tt(parse()) returns nonzero. No error rules were included in this example, so any invalid input causes the calculator program to exit. This is not clean behavior for a real calculator, but it is adequate for this first example.
bisonc++-4.13.01/documentation/manual/examples/mfdecl.yo0000644000175000017500000000507312633316117022026 0ustar frankfrankThe grammar specification file for the tt(mfcalc) calculator allows us to introduce several new features. Here is the b() directive section for the tt(mfcalc) multi-function calculator (line numbers were added for reference; they are not part of the declaration section as used in the actual grammar file):
        verb(
     1  %union
     2  {
     3      double u_val;
     4      double *u_symbol;
     5      double (*u_fun)(double);
     6  }
     7
     8  %token <u_val>      NUM     // Simple double precision number
     9  %token <u_symbol>   VAR     // Variable
    10  %token <u_fun>      FNCT    // Function
    11  %type  <u_val>      exp
    12
    13  %right '='
    14  %left '-' '+'
    15  %left '*' '/'
    16  %left NEG                   // negation--unary minus
    17  %right '^'                  // exponentiation
        )
The above directives introduce only two new features of the Bison language. These features allow semantic values to have various data types. The tt(%union) directive given in lines 1 through 6 declares those data types (see section ref(MORETYPES)). The tt(%union) directive is used instead of tt(%stype), and defines the type tt(Parser::STYPE__) as the indicated union: all semantic values now have this tt(Parser::STYPE__) type. As defined here, the allowable types are:
    itemization(
    it() tt(double) (for tt(exp) and tt(NUM));
    it() a em(pointer) to a tt(double), being a pointer to entries in tt(mfcalc)'s symbol table, used with tt(VAR) tokens (see section ref(UNION));
    it() a em(pointer to a function) expecting a tt(double) argument and returning a tt(double) value, used with tt(FNCT) tokens.
    )
Since values can now have various types, it is necessary to associate a type with each grammar symbol whose semantic value is used. These symbols are tt(NUM), tt(VAR), tt(FNCT), and tt(exp). Their declarations are augmented with information about their data type (placed between angle brackets). The Bison construct tt(%type) (line 11) is used for declaring nonterminal symbols, just as tt(%token) is used for declaring token types. We have not used tt(%type) before because nonterminal symbols are normally declared implicitly by the rules that define them. But tt(exp) must be declared explicitly so we can specify its value type. See also section ref(TYPE).
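To illustrate how these type associations work out in the rules' actions, here is a small fragment taken from tt(mfcalc)'s grammar file (tt(mfcalc/parser/grammar)): in these actions tt($1) of a tt(VAR) token is a tt(double *) pointing into the symbol table, tt($1) of a tt(FNCT) token is a pointer to a function, and tt($$) and tt($3) are plain tt(double) values:
        verb(
    exp:
        VAR '=' exp
        {
            $$ = *$1 = $3;      // $1: double *, $3 and $$: double
        }
    |
        FNCT '(' exp ')'
        {
            $$ = (*$1)($3);     // $1: double (*)(double)
        }
    ;
        )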
Finally note the em(right associative) operator `tt(=)', defined in line 13: by making the assignment operator right-associative we can allow em(sequential assignments) of the form tt(a = b = c = expression).
bisonc++-4.13.01/documentation/manual/examples/mftables.yo0000644000175000017500000000355212633316117022371 0ustar frankfrankThe multi-function calculator requires a symbol table to keep track of the names and meanings of variables and functions. This doesn't affect the grammar rules (except for the actions) or the b() directives, but it requires some additional bf(C++) types for support as well as several additional data members for the parser class. The symbol table itself varies in size and contents once tt(mfcalc) is used, and if a program uses multiple parser objects (well...) each parser will require its own symbol table. Therefore it is defined as a em(data member) tt(d_symbols) in the Parser's header file. In contrast, the em(function table) has a em(fixed) size and contents. Because of this, multiple parser objects (if defined) could share the same function table, and so the function table is defined as a em(static) data member. Both tables profitably use the tt(std::map) container data type that is available in bf(C++): their keys are tt(std::string) objects, their values, respectively, tt(double)s and tt(double (*)(double))s. Here is the declaration of tt(d_symbols) and tt(s_functions) as used in tt(mfcalc)'s parser:
        verb(
    std::map<std::string, double> d_symbols;
    static std::map<std::string, double (*)(double)> s_functions;
        )
As tt(s_functions) is a static member, it can be initialized at em(compile time) from an em(array of pairs). To ease the definition of such an array, a tt(private typedef)
        verb(
    typedef std::pair<char const *, double (*)(double)> FunctionPair;
        )
is added to the parser class, as well as a private array
        verb(
    static FunctionPair s_funTab[];
        )
These definitions allow us to initialize tt(s_functions) in a separate source file (tt(data.cc)):
    verbinclude(mfcalc/parser/data.cc)
By simply editing the definition of tt(s_funTab), additional functions can be added to the calculator.
bisonc++-4.13.01/documentation/manual/examples/rpnconstruct.yo0000644000175000017500000000162412633316117023336 0ustar frankfrankHere is how to compile and run the parser file:
        verb(
    # List files (recursively) in the (current) examples/rpn directory.
    % ls -R
    .:
    build  parser  rpn.cc  rpn.h

    ./parser:
    grammar  lex.cc

    # Create `rpn' using the `build' script:
    % ./build rpn

    # List files again, ./rpn is the constructed program
    % ls -R
    .:
    build  parser  rpn  rpn.cc  rpn.h

    ./parser:
    Parser.h  Parser.ih  Parserbase.h  grammar  lex.cc  parse.cc  parse.output
        )
Here is an example session using tt(rpn):
        verb(
    % rpn
    4 9 +
    13
    3 7 + 3 4 5 *+-
    -13
    3 7 + 3 4 5 * + - n         Note the unary minus, `n'
    13
    5 6 / 4 n +
    -3.16667
    3 4 ^                       Exponentiation
    81
    ^D                          End-of-file indicator
    %
        )
bisonc++-4.13.01/documentation/manual/examples/rpndecl.yo0000644000175000017500000000156112633316117022221 0ustar frankfrankHere are the bf(C++) and b() directives for the reverse polish notation calculator. As in bf(C++), end-of-line comments may be used. verbinsert(//DECL)(rpn/parser/grammar) The directive section provides information to b() about the token types (see section The b() Declarations Section). Each terminal symbol that is not a single-character literal must be declared here (Single-character literals normally don't need to be declared).
In this example, all the arithmetic operators are designated by single-character literals, so the only terminal symbol that needs to be declared is tt(NUM), the token type for numeric constants. Since b() uses the type tt(int) as the default semantic value type, one additional directive is required to inform b() about the fact that we are using tt(double) values. The tt(%stype) (semantic value type) directive is used to do so. bisonc++-4.13.01/documentation/manual/examples/rpnline.yo0000644000175000017500000000204212633316117022234 0ustar frankfrankNow consider the definition of line: verb( line: '\n' | exp '\n' { cout << "\t" << $1 << endl; } ; ) The first alternative is a token which is a newline character; this means that tt(rpn) accepts a blank line (and ignores it, since there is no action). The second alternative is an expression followed by a newline. This is the alternative that makes tt(rpn) useful. The semantic value of the tt(exp) grouping is the value of tt($1) because the tt(exp) in question is the first symbol in the rule's alternative. The action prints this value, which is the result of the computation the user asked for. This action is unusual because it does not assign a value to tt($$). As a consequence, the semantic value associated with the line is not initialized (so its value will be unpredictable). This would be a bug if that value were ever used, but we don't use it: once tt(rpn) has printed the value of the user's input line, that value is no longer needed. bisonc++-4.13.01/documentation/manual/examples/errorcalc/0000755000175000017500000000000012633316117022172 5ustar frankfrankbisonc++-4.13.01/documentation/manual/examples/errorcalc/calc.h0000644000175000017500000000020312633316117023240 0ustar frankfrank#ifndef _INCLUDED_CALC_H_ #define _INCLUDED_CALC_H_ #include #include "parser/Parser.h" using namespace std; #endif bisonc++-4.13.01/documentation/manual/examples/errorcalc/build0000755000175000017500000000043212633316117023216 0ustar frankfrank#!/bin/bash case $1 in (clean) rm parser/[pP]arse* calc ;; (calc) cd parser bisonc++ -V -l grammar cd .. g++ -Wall -o calc *.cc */*.cc ;; (*) echo "$0 [clean|calc] to clean or build the calc program" ;; esac bisonc++-4.13.01/documentation/manual/examples/errorcalc/calc.cc0000644000175000017500000000020512633316117023400 0ustar frankfrank/* calc.cc */ #include "calc.h" int main() { Parser parser; parser.parse(); return 0; } bisonc++-4.13.01/documentation/manual/examples/errorcalc/parser/0000755000175000017500000000000012633316117023466 5ustar frankfrankbisonc++-4.13.01/documentation/manual/examples/errorcalc/parser/Parser.ih0000644000175000017500000000056012633316117025245 0ustar frankfrank // Include this file in the sources of the class Parser. // $insert class.h #include "Parser.h" // Add below here any includes etc. that are only // required for the compilation of Parser's sources. 
// UN-comment the next using-declaration if you want to use // symbols from the namespace std without specifying std:: //using namespace std; bisonc++-4.13.01/documentation/manual/examples/errorcalc/parser/Parserbase.h0000644000175000017500000000300612633316117025725 0ustar frankfrank#ifndef ParserBase_h_included #define ParserBase_h_included #include namespace // anonymous { struct PI; } class ParserBase { public: // $insert tokens // Symbolic tokens: enum Tokens { NUM = 260, NEG, }; // $insert STYPE typedef double STYPE; private: int d_stackIdx; std::vector d_stateStack; std::vector d_valueStack; protected: enum Return { PARSE_ACCEPT = 0, // values used as parse()'s return values PARSE_ABORT = 1 }; enum ErrorRecovery { DEFAULT_RECOVERY_MODE, UNEXPECTED_TOKEN, }; bool d_debug; size_t d_nErrors; int d_token; size_t d_state; STYPE *d_vsp; STYPE d_val; ParserBase(); void ABORT() const throw(Return); void ACCEPT() const throw(Return); void ERROR() const throw(ErrorRecovery); void clearin(); bool debug() const { return d_debug; } void pop(size_t count = 1); void push(size_t nextState); size_t reduce(PI const &productionInfo); void setDebug(bool mode) { d_debug = mode; } size_t top() const; // class ParserBase ends }; // As a convenience, when including ParserBase.h its symbols are available as // symbols in the class Parser, too. #define Parser ParserBase #endif bisonc++-4.13.01/documentation/manual/examples/errorcalc/parser/grammar0000644000175000017500000000160312633316117025037 0ustar frankfrank%stype double %token NUM %left '-' '+' %left '*' '/' %left NEG // negation--unary minus %right '^' // exponentiation %% input: // empty | input line ; //LINE line: '\n' | exp '\n' { std::cout << "\t" << $1 << '\n'; } | error '\n' ; //= exp: NUM | exp '+' exp { $$ = $1 + $3; } | exp '-' exp { $$ = $1 - $3; } | exp '*' exp { $$ = $1 * $3; } | exp '/' exp { $$ = $1 / $3; } | '-' exp %prec NEG { $$ = -$2; } | // Exponentiation: exp '^' exp { $$ = pow($1, $3); } | '(' exp ')' { $$ = $2; } ; bisonc++-4.13.01/documentation/manual/examples/errorcalc/parser/Parser.h0000644000175000017500000000120612633316117025072 0ustar frankfrank#ifndef Parser_h_included #define Parser_h_included // for error()'s inline implementation #include // $insert baseclass #include "Parserbase.h" #undef Parser class Parser: public ParserBase { public: int parse(); private: void error(char const *msg) { std::cerr << msg << std::endl; } // $insert lex int lex(); void print() // d_token, d_loc {} // support functions for parse(): void executeAction(int d_production); size_t errorRecovery(); int lookup(int token); int nextToken(); }; #endif bisonc++-4.13.01/documentation/manual/examples/errorcalc/parser/lex.cc0000644000175000017500000000154612633316117024573 0ustar frankfrank#include "Parser.ih" /* Lexical scanner returns a double floating point number on the stack and the token NUM, or the ASCII character read if not a number. Skips all blanks and tabs, returns 0 for EOF. */ int Parser::lex() { char c; // get the next non-ws character while (std::cin.get(c) && c == ' ' || c == '\t') ; if (!std::cin) // no characters were obtained return 0; // indicate End Of Input if (c == '.' || isdigit (c)) // if a digit char was found { std::cin.putback(c); // return the character std::cin >> d_val; // extract a number return NUM; // return the NUM token } return c; // otherwise return the extracted char. 
} bisonc++-4.13.01/documentation/manual/examples/errorcalc/parser/parse.output0000644000175000017500000003160212633316117026064 0ustar frankfrank Production Rules: 1: input -> 2: input -> input line 3: line -> '\n' 4: line -> exp '\n' 5: line -> error '\n' 6: exp -> NUM 7: exp -> exp '+' exp 8: exp -> exp '-' exp 9: exp -> exp '*' exp 10: exp -> exp '/' exp 11: exp -> '-' exp 12: exp -> exp '^' exp 13: exp -> '(' exp ')' 14: input_$ -> input GRAMMAR STATES: State 0 input_$ -> . input (rule 14) Lookahead set { } All production rules (using dot == 0) of: input Lookahead set { _error_ NUM '-' '\n' '(' } on input: shift, and go to state 1 _default_: reduce, using production 1: input -> State 1 input_$ -> input . (rule 14) Lookahead set { } input -> input . line (rule 2) Lookahead set { _error_ NUM '-' '\n' '(' } All production rules (using dot == 0) of: line Lookahead set { _error_ NUM '-' '\n' '(' } exp Lookahead set { '-' '+' '*' '/' '^' '\n' } on _error_: shift, and go to state 2 on NUM: shift, and go to state 3 on '-': shift, and go to state 4 on '\n': shift, and go to state 6 on '(': shift, and go to state 8 on line: shift, and go to state 5 on exp: shift, and go to state 7 State 2 (inherited terminal: _error_) line -> _error_ . '\n' (rule 5) Lookahead set { _error_ NUM '-' '\n' '(' } on '\n': shift, and go to state 9 State 3 (inherited terminal: NUM) exp -> NUM . (rule 6) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } _default_: reduce, using production 6: exp -> NUM State 4 (inherited terminal: '-') exp -> '-' . exp (rule 11) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' '\n' } on NUM: shift, and go to state 3 on '-': shift, and go to state 4 on '(': shift, and go to state 8 on exp: shift, and go to state 10 State 5 input -> input line . (rule 2) Lookahead set { _error_ NUM '-' '\n' '(' } _default_: reduce, using production 2: input -> input line State 6 (inherited terminal: '\n') line -> '\n' . (rule 3) Lookahead set { _error_ NUM '-' '\n' '(' } _default_: reduce, using production 3: line -> '\n' State 7 line -> exp . '\n' (rule 4) Lookahead set { _error_ NUM '-' '\n' '(' } exp -> exp . '+' exp (rule 7) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 8) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '*' exp (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 10) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '^' exp (rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' } on '-': shift, and go to state 11 on '+': shift, and go to state 12 on '*': shift, and go to state 13 on '/': shift, and go to state 14 on '^': shift, and go to state 15 on '\n': shift, and go to state 16 State 8 (inherited terminal: '(') exp -> '(' . exp ')' (rule 13) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' ')' } on NUM: shift, and go to state 3 on '-': shift, and go to state 4 on '(': shift, and go to state 8 on exp: shift, and go to state 17 State 9 (inherited terminal: '\n') line -> _error_ '\n' . (rule 5) Lookahead set { _error_ NUM '-' '\n' '(' } _default_: reduce, using production 5: line -> error '\n' State 10 (inherited terminal: '-') exp -> '-' exp . (rule 11) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '+' exp (rule 7) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 8) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . 
'*' exp (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 10) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '^' exp (rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' } on '^': shift, and go to state 15 _default_: reduce, using production 11: exp -> '-' exp Actions suppressed by the default conflict resolution procedures: [on '^': reduce, using production 11: exp -> '-' exp] State 11 (inherited terminal: '-') exp -> exp '-' . exp (rule 8) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' '\n' } on NUM: shift, and go to state 3 on '-': shift, and go to state 4 on '(': shift, and go to state 8 on exp: shift, and go to state 18 State 12 (inherited terminal: '+') exp -> exp '+' . exp (rule 7) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' '\n' } on NUM: shift, and go to state 3 on '-': shift, and go to state 4 on '(': shift, and go to state 8 on exp: shift, and go to state 19 State 13 (inherited terminal: '*') exp -> exp '*' . exp (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' '\n' } on NUM: shift, and go to state 3 on '-': shift, and go to state 4 on '(': shift, and go to state 8 on exp: shift, and go to state 20 State 14 (inherited terminal: '/') exp -> exp '/' . exp (rule 10) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' '\n' } on NUM: shift, and go to state 3 on '-': shift, and go to state 4 on '(': shift, and go to state 8 on exp: shift, and go to state 21 State 15 (inherited terminal: '^') exp -> exp '^' . exp (rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } All production rules (using dot == 0) of: exp Lookahead set { '-' '+' '*' '/' '^' '\n' } on NUM: shift, and go to state 3 on '-': shift, and go to state 4 on '(': shift, and go to state 8 on exp: shift, and go to state 22 State 16 (inherited terminal: '\n') line -> exp '\n' . (rule 4) Lookahead set { _error_ NUM '-' '\n' '(' } _default_: reduce, using production 4: line -> exp '\n' State 17 (inherited terminal: '(') exp -> '(' exp . ')' (rule 13) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } exp -> exp . '+' exp (rule 7) Lookahead set { '-' '+' '*' '/' '^' ')' } exp -> exp . '-' exp (rule 8) Lookahead set { '-' '+' '*' '/' '^' ')' } exp -> exp . '*' exp (rule 9) Lookahead set { '-' '+' '*' '/' '^' ')' } exp -> exp . '/' exp (rule 10) Lookahead set { '-' '+' '*' '/' '^' ')' } exp -> exp . '^' exp (rule 12) Lookahead set { '-' '+' '*' '/' '^' ')' } on '-': shift, and go to state 11 on '+': shift, and go to state 12 on '*': shift, and go to state 13 on '/': shift, and go to state 14 on '^': shift, and go to state 15 on ')': shift, and go to state 23 State 18 (inherited terminal: '-') exp -> exp '-' exp . (rule 8) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '+' exp (rule 7) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 8) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '*' exp (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 10) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . 
'^' exp (rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' } on '*': shift, and go to state 13 on '/': shift, and go to state 14 on '^': shift, and go to state 15 _default_: reduce, using production 8: exp -> exp '-' exp Actions suppressed by the default conflict resolution procedures: [on '*': reduce, using production 8: exp -> exp '-' exp] [on '/': reduce, using production 8: exp -> exp '-' exp] [on '^': reduce, using production 8: exp -> exp '-' exp] State 19 (inherited terminal: '+') exp -> exp '+' exp . (rule 7) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '+' exp (rule 7) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 8) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '*' exp (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 10) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '^' exp (rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' } on '*': shift, and go to state 13 on '/': shift, and go to state 14 on '^': shift, and go to state 15 _default_: reduce, using production 7: exp -> exp '+' exp Actions suppressed by the default conflict resolution procedures: [on '*': reduce, using production 7: exp -> exp '+' exp] [on '/': reduce, using production 7: exp -> exp '+' exp] [on '^': reduce, using production 7: exp -> exp '+' exp] State 20 (inherited terminal: '*') exp -> exp '*' exp . (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '+' exp (rule 7) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 8) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '*' exp (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 10) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '^' exp (rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' } on '^': shift, and go to state 15 _default_: reduce, using production 9: exp -> exp '*' exp Actions suppressed by the default conflict resolution procedures: [on '^': reduce, using production 9: exp -> exp '*' exp] State 21 (inherited terminal: '/') exp -> exp '/' exp . (rule 10) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '+' exp (rule 7) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 8) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '*' exp (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 10) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '^' exp (rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' } on '^': shift, and go to state 15 _default_: reduce, using production 10: exp -> exp '/' exp Actions suppressed by the default conflict resolution procedures: [on '^': reduce, using production 10: exp -> exp '/' exp] State 22 (inherited terminal: '^') exp -> exp '^' exp . (rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } exp -> exp . '+' exp (rule 7) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '-' exp (rule 8) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '*' exp (rule 9) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '/' exp (rule 10) Lookahead set { '-' '+' '*' '/' '^' '\n' } exp -> exp . '^' exp (rule 12) Lookahead set { '-' '+' '*' '/' '^' '\n' } on '^': shift, and go to state 15 _default_: reduce, using production 12: exp -> exp '^' exp Actions suppressed by the default conflict resolution procedures: [on '^': reduce, using production 12: exp -> exp '^' exp] State 23 (inherited terminal: ')') exp -> '(' exp ')' . 
(rule 13) Lookahead set { '-' '+' '*' '/' '^' '\n' ')' } _default_: reduce, using production 13: exp -> '(' exp ')' bisonc++-4.13.01/documentation/manual/examples/errors.yo0000644000175000017500000000367012633316117022111 0ustar frankfrankUp to this point, this manual has not addressed the issue of error recovery, i.e., how to continue parsing after the parser detects a syntax error. All that's been handled so far is error reporting using the tt(error()) member function. Recall that by default tt(parse()) returns after calling tt(error()). This means that an erroneous input line causes the calculator program to exit. Now we show how to rectify this deficiency. The b() language itself includes the reserved word tt(error), which may be included in the grammar rules. In the example below it has been added as one more alternative for line: verbinsert(//LINE)(errorcalc/parser/grammar) This addition to the grammar allows for simple error recovery in the event of a parse error. If an expression that cannot be evaluated is read, the error is recognized by the third rule for line, and parsing continues (the tt(error()) member function is still called upon to print its message as well). Different from the implementation of tt(error) in Bison, b() proceeds on the assumption that whenever tt(error) is used in a rule it is the grammar constructor's intention to have the parser continue parsing. Therefore, a statement like `tt(yyerrok;)' seen in Bison grammars is superfluous in b() grammars. The reserved keyword tt(error) itself causes the parsing function to skip all subsequent input until a possible token following tt(error) is seen. In the above implementation that token would be the newline character `tt(\n)' (see chapter ref(RECOVERY)). This form of error recovery deals with syntax errors. There are other kinds of errors; for example, division by zero, which raises an exception signal that is normally fatal. A real calculator program must handle this signal and use whatever it takes to discard the rest of the current line of input and resume parsing thereafter. This extensive error handling is not discussed here, as it is not specific to b() programs. bisonc++-4.13.01/documentation/manual/examples/rpnexpr.yo0000644000175000017500000000357012633316117022272 0ustar frankfrankThe tt(exp) grouping has several rules, one for each kind of expression. The first rule handles the simplest expressions: those that are just numbers. The second handles an addition-expression, which looks like two expressions followed by a plus-sign. The third handles subtraction, and so on. verb( exp: NUM | exp exp '+' { $$ = $1 + $2; } | exp exp '-' { $$ = $1 - $2; } ... ) It is customary to use `tt(|)' to join all the rules for exp, but the rules could equally well have been written separately: verb( exp: NUM ; exp: exp exp '+' { $$ = $1 + $2; } ; exp: exp exp '-' { $$ = $1 - $2; } ; ... ) Most of the rules have actions that compute the value of the expression in terms of the values of its parts. For example, in the rule for addition, tt($1) refers to the first component tt(exp) and tt($2) refers to the second one. The third component, 'tt(+)', has no meaningful associated semantic value, but if it had one you could refer to it as tt($3). When the parser's parsing function tt(parse()) recognizes a sum expression using this rule, the sum of the two subexpressions' values is produced as the value of the entire expression. See section ref(ACTIONS). You don't have to give an action for every rule.
When a rule has no action, Bison by default copies the value of tt($1) into tt($$). This is what happens in the first rule (the one that uses tt(NUM)). The formatting shown here is the recommended convention, but Bison does not require it. You can add or change whitespace as much as you wish. bisonc++-4.13.01/documentation/manual/algorithm.yo0000644000175000017500000000262712633316117020746 0ustar frankfrankincludefile(algorithm/intro.yo) subsect(The FIRST Sets) includefile(algorithm/first.yo) subsect(The States) includefile(algorithm/states.yo) lsubsect(LOOKAHEAD)(The Look-ahead Sets) includefile(algorithm/lookaheads.yo) subsubsect(The look-ahead token) includefile(algorithm/lookahead.yo) subsubsect(How look-ahead sets are determined) includefile(algorithm/determine.yo) subsect(The Final Transition Tables) subsubsect(Preamble) includefile(algorithm/pstates.yo) includefile(algorithm/transition.yo) lsubsect(PARSING)(Processing Input) includefile(algorithm/input.yo) lsect(SHIFTREDUCE)(Shift/Reduce Conflicts) includefile(algorithm/conflicts.yo) sect(Operator Precedence) includefile(algorithm/precedence.yo) subsect(When Precedence is Needed) includefile(algorithm/whenprec.yo) subsect(Specifying Operator Precedence) includefile(algorithm/specifying.yo) subsect(Precedence Examples) includefile(algorithm/precdemos.yo) subsect(How Precedence Works) includefile(algorithm/howprec.yo) subsect(Rule precedence) includefile(algorithm/ruleprec.yo) lsect(CONDEP)(Context-Dependent Precedence) includefile(algorithm/condep.yo) sect(Reduce/Reduce Conflicts) includefile(algorithm/reduce.yo) lsect(MYSTERIOUS)(Mysterious Reduce/Reduce Conflicts) includefile(algorithm/mysterious.yo) bisonc++-4.13.01/documentation/manual/grammar.yo0000644000175000017500000001454012633316117020403 0ustar frankfrankincludefile(grammar/intro.yo) lsect(OUTLINE)(Outline of a Bisonc++ Grammar File) includefile(grammar/outline.yo) lsect(SYMBOLS)(Symbols, Terminal and Nonterminal Symbols) includefile(grammar/symbols.yo) lsect(RULES)(Syntax of Grammar Rules) includefile(grammar/syntax.yo) lsect(RECURSIVE)(Writing recursive rules) includefile(grammar/recursive.yo) lsect(DIRECTIVES)(Bisonc++ Directives) includefile(directives/intro.yo) subsect(%baseclass-preinclude: specifying a header included by the baseclass) includefile(directives/preinclude.yo) lsubsect(PARSERCLASS) (%class-name: defining the name of the parser class) includefile(directives/parserclass.yo) subsect(%debug: adding debugging code to the `parse()' member) includefile(directives/debug.yo) subsect(%error-verbose: dumping the parser's state stack) includefile(directives/errorverbose.yo) lsubsect(EXPECT)(%expect: suppressing conflict warnings) includefile(directives/expect.yo) subsect(%flex: using the traditional `flex++' interface) includefile(directives/flex.yo) lsubsect(INCLUDE)(%include: splitting the input file) includefile(directives/include.yo) lsubsect(PRECEDENCE)(%left, %right, %nonassoc: defining operator precedence) includefile(directives/precedence.yo) lsubsect(LOCSTRUCT) (%locationstruct: specifying a dedicated location struct) includefile(directives/locstruct.yo) lsubsect(LSPNEEDED)(%lsp-needed: using the default location type) includefile(directives/lneeded.yo) lsubsect(LTYPE)(%ltype: using an existing location type) includefile(directives/ltype.yo) subsect(%namespace: using a namespace) includefile(directives/namespace.yo) subsect(%negative-dollar-indices: using constructions like $-1) includefile(directives/negative.yo) subsect(%no-lines: suppressing `#line' 
directives) includefile(directives/lines.yo) subsect(%prec: overruling default precedences) includefile(directives/prec.yo) subsect(%polymorphic: using polymorphism to define multiple semantic values) includefile(directives/polymorphic.yo) subsect(%print-tokens: displaying tokens and matched text) includefile(directives/print.yo) subsect(%required-tokens: defining the minimum number of tokens between error reports) includefile(directives/required.yo) lsubsect(SCANNER)(%scanner: using a standard scanner interface) includefile(directives/scanner.yo) subsect(%scanner-matched-text-function: define the name of the scanner's member returning the matched texttoken) includefile(directives/scannermatchedtextfunction.yo) subsect(%scanner-token-function: define the name of the scanner's token function) includefile(directives/scannertokenfunction.yo) subsect(%start: defining the start rule) includefile(directives/start.yo) subsect(%stype: specifying the semantic stack type) includefile(directives/stype.yo) lsubsect(TOKTYPENAMES)(%token: defining token names) includefile(directives/tokens.yo) lsubsubsect(IMPROPER)(Improper token names) includefile(directives/improper.yo) lsubsect(TYPE)(%type: associating semantic values to (non)terminals) includefile(directives/nonterms.yo) lsubsect(UNION)(%union: using a 'union' to define multiple semantic values) includefile(directives/union.yo) subsect(%weak-tags: %polymorphic declaring 'enum Tag__') includefile(directives/weaktags.yo) subsect(Directives controlling the names of generated files) includefile(directives/output.yo) lsubsubsect(BCHEADER) (%baseclass-header: defining the parser's base class header) includefile(directives/baseclass.yo) lsubsubsect(CHEADER) (%class-header: defining the parser's class header) includefile(directives/classhdr.yo) lsubsubsect(FILES) (%filenames: specifying a generic filename) includefile(directives/filenames.yo) lsubsubsect(IHEADER) (%implementation-header: defining the implementation header) includefile(directives/imphdr.yo) lsubsubsect(PARSESOURCE) (%parsefun-source: defining the parse() function's sourcefile) includefile(directives/parse.yo) subsubsect(%target-directory: defining the directory where files must be written) includefile(directives/targetdir.yo) lsect(DEFSEM)(Defining Language Semantics) includefile(grammar/semantics.yo) lsubsect(SEMANTICTYPES)(Data Types of Semantic Values) includefile(grammar/datatypes.yo) lsubsect(MORETYPES)(More Than One Value Type) includefile(grammar/union.yo) lsubsect(POLYMORPHIC)(Polymorphism and multiple semantic values: `%polymorphic') includefile(grammar/polymorphic.yo) subsubsect(The %polymorphic directive) includefile(grammar/polymorphicdirective.yo) lsubsubsect(POLYTYPE) (The %polymorphic and %type: associating semantic values with (non-)terminals) includefile(grammar/polymorphictype.yo) subsubsect(Code generated by %polymorphic) includefile(grammar/code.yo) subsubsect(A parser using a polymorphic semantic value type) includefile(grammar/parser.yo) subsubsect(A scanner using a polymorphic semantic value type) includefile(grammar/scanner.yo) lsubsect(ACTIONS)(Actions) includefile(grammar/actions.yo) lsubsect(ACTIONTYPES)(Data Types of Values in Actions) includefile(grammar/actiontypes.yo) lsubsect(MIDACTIONS)(Actions in Mid-Rule) includefile(grammar/midrule.yo) sect(Basic Grammatical Constructions) includefile(grammar/gramcons.yo) subsect(Plain Alternatives) includefile(grammar/alternatives.yo) subsect(One Or More Alternatives, No Separators) includefile(grammar/series.yo) 
lsubsect(OPTSERIES)(Zero Or More Alternatives, No Separators) includefile(grammar/optseries.yo) subsect(One Or More Alternatives, Using Separators) includefile(grammar/delimseries.yo) subsect(Zero Or More Alternatives, Using Separators) includefile(grammar/optdelim.yo) subsect(Nested Blocks) includefile(grammar/nested.yo) sect(Multiple Parsers in the Same Program) includefile(grammar/multiple.yo) bisonc++-4.13.01/documentation/manual/algorithm/0000755000175000017500000000000012633316117020366 5ustar frankfrankbisonc++-4.13.01/documentation/manual/algorithm/conflicts.yo0000644000175000017500000001001412633316117022719 0ustar frankfrankSuppose we are parsing a language which has tt(if) and tt(if-else) statements, with a pair of rules like this: verb( if_stmt: IF '(' expr ')' stmt | IF '(' expr ')' stmt ELSE stmt ; ) Here we assume that tt(IF) and tt(ELSE) are terminal symbols for specific keywords, and that tt(expr) and tt(stmt) are defined non-terminals. When the tt(ELSE) token is read and becomes the look-ahead token, the contents of the stack (assuming the input is valid) are just right for em(reduction) by the first rule. But it is also legitimate to em(shift) the tt(ELSE), because that would lead to eventual reduction by the second rule. This situation, where either a shift or a reduction would be valid, is called a tt(shift/reduce) conflict. B() is designed to resolve these conflicts by em(implementing) a shift, unless otherwise directed by operator precedence declarations. To see the reason for this, let's contrast it with the other alternative. Since the parser prefers to shift the tt(ELSE), the result is to attach the em(else-clause) to the innermost if-statement, making these two inputs equivalent: verb( if (x) if (y) then win(); else lose(); if (x) { if (y) then win(); else lose(); } ) But if the parser performed a em(reduction) whenever possible rather than a em(shift), the result would be to attach the em(else-clause) to the outermost if-statement, making these two inputs equivalent: verb( if (x) if (y) then win(); else lose(); if (x) { if (y) win(); } else lose(); ) The conflict exists because the grammar as written is em(ambiguous): em(either) parsing of the simple nested if-statement is legitimate. The established convention is that these ambiguities are resolved by attaching the else-clause to the innermost if-statement; this is what b() accomplishes by implementing a shift rather than a reduce. This particular ambiguity was first encountered in the specifications of Algol 60 and is called the em(dangling else) ambiguity. To avoid warnings from b() about predictable, legitimate shift/reduce conflicts, use the tt(%expect n) directive. There will be no warning as long as the number of shift/reduce conflicts is exactly tt(n). See section ref(EXPECT). The definition of tt(if_stmt) above is solely to blame for the conflict, but the plain tt(stmt) rule, consisting of two recursive alternatives, will of course never be able to match actual input, since there's no way for the grammar to eventually derive a sentence this way. Adding one non-recursive alternative is enough to convert the grammar into one that em(does) derive sentences. Here is a complete b() input file that actually shows the conflict: verbinclude(examples/dangling) Looking again at the dangling else problem, note that there are multiple ways to handle tt(stmt) productions.
Depending on the particular input that is provided, it could either be reduced to a tt(stmt) or the parser could continue to consume input by processing an tt(ELSE) token, eventually resulting in the recognition of tt(IF '(' VAR ')' stmt ELSE stmt) as a tt(stmt). There is little we can do but resort to tt(%expect) to handle the dangling else problem. The default handling is what most people intuitively expect, and so in this case using tt(%expect 1) is an easy way to prevent b() from reporting a shift/reduce conflict. But shift/reduce conflicts are most often solved by specifying disambiguating rules defining precedence or associativity, usually in the context of arithmetic expressions, as discussed in the next sections. However, shift/reduce conflicts can also be observed in grammars where a state contains items that could be reduced to a certain non-terminal, as well as items of a production rule of a completely different non-terminal in which a shift is possible. Here is an example of such a grammar: verbinclude(examples/peculiar) Why these grammars show shift/reduce conflicts and how they are solved is discussed in the next section. bisonc++-4.13.01/documentation/manual/algorithm/condep.yo0000644000175000017500000000306212633316117022210 0ustar frankfrankOften the precedence of an operator depends on the context. This sounds outlandish at first, but it is really very common. For example, a minus sign typically has a very high precedence as a unary operator, and a somewhat lower precedence (lower than multiplication) as a binary operator. The b() precedence directives, %left, %right and %nonassoc, can only be used once for a given token; so a token has only one precedence declared in this way. For context-dependent precedence, you need to use an additional mechanism: the %prec modifier for rules. The %prec modifier declares the precedence of a particular rule by specifying a terminal symbol whose precedence should be used for that rule. It's not necessary for that symbol to appear otherwise in the rule. The modifier's syntax is: verb(%prec terminal-symbol) and it is written after the components of the rule. Its effect is to assign the rule the precedence of terminal-symbol, overriding the precedence that would be deduced for it in the ordinary way. The altered rule precedence then affects how conflicts involving that rule are resolved (see section Operator Precedence). Here is how %prec solves the problem of unary minus. First, declare a precedence for a fictitious terminal symbol named UMINUS. There are no tokens of this type, but the symbol serves to stand for its precedence: verb( ... %left '+' '-' %left '*' %left UMINUS ) Now the precedence of UMINUS can be used in specific rules: verb( exp: ... | exp '-' exp ... | '-' exp %prec UMINUS ) bisonc++-4.13.01/documentation/manual/algorithm/specifying.yo0000644000175000017500000000163112633316117023100 0ustar frankfrankB() allows you to specify these choices with the operator precedence directives tt(%left) and tt(%right). Each such directive contains a list of tokens, which are operators whose precedence and associativity is being declared. The tt(%left) directive makes all those operators left-associative and the tt(%right) directive makes them right-associative. A third alternative is tt(%nonassoc), which declares that it is a syntax error to find the same operator twice `in a row'. Actually, violating tt(%nonassoc) is currently (0.98.004) not punished that way by b(). Instead, tt(%nonassoc) and tt(%left) are handled identically.
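To illustrate the intended difference (this fragment is merely an illustration; it is not one of the distributed example grammars), consider a grammar declaring tt('<') as non-associative and tt('+') as left-associative:
verb(
    %token NAME
    %nonassoc '<'
    %left '+'
    %%
    exp:
        exp '<' exp     // once %nonassoc is fully enforced, input like
                        // `a < b < c' is rejected as a syntax error
    |
        exp '+' exp     // %left: `a + b + c' is parsed as `(a + b) + c'
    |
        NAME
    ;
)
Given the current handling described above, b() for now simply treats the tt('<') alternative as left-associative as well.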
The relative precedence of different operators is controlled by the order in which they are declared. The first tt(%left) or tt(%right) directive in the file declares the operators whose precedence is lowest, the next such directive declares the operators whose precedence is a little higher, and so on. bisonc++-4.13.01/documentation/manual/algorithm/determine.yo0000644000175000017500000002006212633316117022713 0ustar frankfrankOnce the items of all the grammar's states have been determined the LA sets for the states' items are computed. Starting from the LA set of the kernel item of state 0 (representing the augmented grammar's production rule tt(S_$: . S), where tt(S) is the grammar's start rule) the LA sets of all items of all of the grammar's states are determined. By definition, the LA set of state 0's kernel item equals tt($), representing end-of-file. Starting from the function tt(State::determineLAsets), which is called for state 0, the LA sets of all items of all states are computed. For each state, the LA sets of its items are computed first. Once they have been computed, the LA sets of items from where transitions to other states are possible are then propagated to the matching kernel items of those destination states. When the LA sets of kernel items of those destination states are enlarged then their state indices are added to a set tt(todo). LA sets of the items of states whose indices are stored in the tt(todo) set are (re)computed (by calling tt(determineLAsets) for those states) until tt(todo) is empty, at which point all LA sets have been computed. Initially tt(todo) only contains 0, the index of the initial state, representing the augmented grammar's production rule. To compute the LA sets of a state's items the LA set of each of its kernel items is distributed (by the member tt(State::distributeLAsetOf)) over the items which are implied by the item being considered. E.g., for item tt(X: a . Y z), where tt(a) and tt(z) are any sequence of grammar symbols and tt(X) and tt(Y) are non-terminal symbols, all of tt(Y's) production rules are added as new items to the current state. Then the member tt(distributeLAsetOfItem(idx)) matches the item's rule specification with the specification tt(a.Bc), where tt(a) and tt(c) are (possibly empty) sequences of grammatical symbols, and tt(B) is a (possibly empty) non-terminal symbol appearing immediately to the right of the item's dot position. if tt(B) is empty then there are no additional production rules and tt(distributeLAsetOf) may return. Otherwise, the set tt(b = FIRST(c)) is computed. This set holds all symbols which may follow tt(B). If tt(b) contains epsilon() (i.e., the element representing the empty set), then the currently defined LA set of the item can also be observed. In that case epsilon() is removed, and the currently defined LA set is added to tt(b). Finally, the LA sets of all items representing a production rule for tt(B) are inspected: if tt(b) contains unique elements compared to the LA sets of those items, then the unique elements of tt(b) are added to the LA sets of those items. Finally, tt(distributeLAsetOfItem) is recursively called for those items whose LA sets were enlarged. Once the LA sets of the items of a state have thus been computed, tt(inspectTransitions) is called to propagate the LA sets of items from where transitions to other states are possible to the affected (kernel) items of those other (destination) states. 
The member tt(inspectTransitions) inspects all tt(Next) objects of the current state's tt(d_nextVector). Next objects provide itemization( it() the state index of a state to transfer to from the current state; it() a size_t vector of item transitions. Each element is the index of an item in the current state (the source-item), its index is the index of a (kernel) item of the state to transfer to (the destination index). ) If the LA set of a destination item can be enlarged from the LA set of the source item then the LA sets of the destination state's items must be recomputed. This is realized by inserting the destation state's index into the `todo' set. To illustrate an LA-set computation we will now compute the LA sets of (some of) the items of the states of the grammar introduced at the beginning of this chapter. Its augmented grammar consists of the following production rules: verb( 1. start: start expr 2. start: // empty 3. expr: NR 4. expr: expr '+' expr 5. start_$: start ) When analyzing this grammer, we found the following five states, consisting of several items and transitions (kernel items are marked with K following their item indices). Next to the items, where applicable, the goto-table is shown: the state to go to when the mentioned grammatical symbol has been recognized: verb( Goto table ----------- State 0: start 0K: start_$ -> . start 1 1: start -> . start expr 1 2: start -> . State 1: expr NR 0K: start_$ -> start . 1K: start -> start . expr 2 2: expr -> . NR 3 3: expr -> . expr '+' expr State 2: '+' 0K: start -> start expr . 1K: expr -> expr . '+' expr 4 State 3: 0K: expr -> NR . State 4: expr NR 0K: expr -> expr '+' . expr 5 1: expr -> . NR 3 2: expr -> . expr '+' expr 5 State 5: '+' 0K: expr -> expr '+' expr . 1K: expr -> expr . '+' expr 4 ) Item 0 of state 0 by definition has LA symbol $, and LA computation therefore always starts at item 0 of state 0. The interesting part of the LA set computation is encountered in the recursive member tt(distributeLAsets): verb( distributeLAsetsOfItem(0) start_$ -> . start: LA: {$}, B: start, c: {}, so b: {$} items 1 and 2 refer to production rules of B (start) and are inspected: 1: LA(1): {}: b contains unique elements. Therefore: LA(1) = {$} distributeLAsetsOfItem(1): start -> . start expr: LA: {$}, B: start, c: {expr}, so b: {NR} inspect items 1 and 2 as they refer to production rules of B (start): 1: LA(1): {}: b contains unique elements. Therefore: LA(1) = {$,NR} distributeLAsetsOfItem(1) start -> . start expr: LA: {$,NR}, B: start, c: {expr}, so b: {NR} inspect items 1 and 2 as they refer to prod. rules of B (start): 1: LA(1): {$,NR}, so b does not contain unique elements: done 2: LA(2): {}, b contains unique elements LA(2) = {NR} distributeLAsetsOfItem(2) start -> .: LA: {NR}, B: -, c: {}, so b: {NR} inspect items 1 and 2 as they refer to prod. rules of B (start): 1: LA(1): {$,NR}, b does not contain unique elements: done 2: LA(2): {NR}, so b does not contain unique elements: done 2: LA(2): {NR}, so b does not contain unique elements: done 2: LA(2): {NR}: b contains unique elements. Therefore: LA(2) = {$,NR} distributeLAsetsOfItem(2) start -> .: LA: {$,NR}, B: -, c: {} B empty, so return. ) So, item 0 has LA set tt({$}), items 1 and 2 have LA sets tt({$,NR}). The next step involves propagating the LA sets to kernel items of the states to where transitions are possible: itemization( it() Item 0, state 0 transits to item 0 state 1. 
Item 0 of state 1's current LA set is empty, so it receives LA set tt({$}), and 1 (state 1's index) is inserted into the tt(todo) set. it() Item 1, state 0 transits to item 1 state 1. Item 1 of state 1's current LA set is empty, so it receives LA set tt({$,NR}), and 1 (state 1's index) is inserted into the tt(todo) set. ) Following this LA set propagation the LA sets of all items of state 1 are computed, which in turn is followed by LA propagation to other states (states 2 and 3), etc. etc. In this grammar there are no transitions to the current state (i.e., transitions from state x to state x). If such transitions are encountered then they can be ignored by tt(inspectTransitions) as the LA sets of the items of a state have already be computed by the time tt(inspectTransitions) is called. bisonc++-4.13.01/documentation/manual/algorithm/mysterious.yo0000644000175000017500000000614312633316117023166 0ustar frankfrankSometimes reduce/reduce conflicts occur that are puzzling at first sight. Here is an example: verb( %token ID %% def: param_spec return_spec ',' ; param_spec: type | name_list ':' type ; return_spec: type | name ':' type ; type: ID ; name: ID ; name_list: name | name ',' name_list ; ) It would seem that this grammar can be parsed with only a single look-ahead token: when a param_spec is being read, an tt(ID) is a tt(name) if a comma or colon follows, or a tt(type) if another tt(ID) follows. In other words, this grammar is LR(1). However, b(), like most parser generators, cannot actually handle all LR(1) grammars. In this grammar two contexts, one after an tt(ID) at the beginning of a tt(param_spec) and another one at the beginning of a tt(return_spec), are similar enough for b() to assume that they are identical. They appear similar because the same set of rules would be active--the rule for reducing to a name and that for reducing to a type. B() is unable to determine at that stage of processing that the rules would require different look-ahead tokens in the two contexts, so it makes a single parser state for them both. Combining the two contexts causes a conflict later. In parser terminology, this occurrence means that the grammar is not LALR(1). In general, it is better to fix deficiencies than to document them. But this particular deficiency is intrinsically hard to fix; parser generators that can handle LR(1) grammars are hard to write and tend to produce parsers that are very large. In practice, b() is more useful the way it's currently operating. When the problem arises, you can often fix it by identifying the two parser states that are being confused, and adding something to make them look distinct. In the above example, adding one rule to tt(return_spec) as follows makes the problem go away: verb( %token BOGUS ... %% ... return_spec: type | name ':' type | ID BOGUS // This rule is never used. ; ) This corrects the problem because it introduces the possibility of an additional active rule in the context after the tt(ID) at the beginning of tt(return_spec). This rule is not active in the corresponding context in a tt(param_spec), so the two contexts receive distinct parser states. As long as the token tt(BOGUS) is never generated by the parser's member function tt(lex()), the added rule cannot alter the way actual input is parsed. In this particular example, there is another way to solve the problem: rewrite the rule for tt(return_spec) to use tt(ID) directly instead of via name. 
This also causes the two confusing contexts to have different sets of active rules, because the one for tt(return_spec) activates the altered rule for tt(return_spec) rather than the one for name. verb( param_spec: type | name_list ':' type ; return_spec: type | ID ':' type ; ) bisonc++-4.13.01/documentation/manual/algorithm/howprec.yo0000644000175000017500000000170412633316117022410 0ustar frankfrankThe first effect of the precedence directives is to assign precedence levels to the terminal symbols declared. The second effect is to assign precedence levels to certain rules: each rule gets its precedence from the last terminal symbol mentioned in the components. (You can also specify explicitly the precedence of a rule. See section ref(CONDEP)). Finally, the resolution of conflicts works by comparing the precedence of the rule being considered with that of the look-ahead token. If the token's precedence is higher, the choice is to shift. If the rule's precedence is higher, the choice is to reduce. If they have equal precedence, the choice is made based on the associativity of that precedence level. The verbose output file made by `tt(-V)' (see section ref(INVOKING)) shows how each conflict was resolved. Not all rules and not all tokens have precedence. If either the rule or the look-ahead token has no precedence, then the default is to shift. bisonc++-4.13.01/documentation/manual/algorithm/precedence.yo0000644000175000017500000000045512633316117023040 0ustar frankfrankShift/reduce conflicts are frequently encountered in grammars specifying rules of arithmetic expressions. Here shifting is not always the preferred resolution; the b() directives for operator precedence allow you to specify when to shift and when to reduce. How and when to do so is discussed next. bisonc++-4.13.01/documentation/manual/algorithm/precdemos.yo0000644000175000017500000000111712633316117022720 0ustar frankfrankIn our example, we would want the following declarations: verb( %left '<' %left '-' %left '*' ) In a more complete example, which supports other operators as well, we would declare them in groups of equal precedence. For example, 'tt(+)' is declared with 'tt(-)': verb( %left '<' '>' '=' NE LE GE %left '+' '-' %left '*' '/' ) (Here tt(NE) and so on stand for the operators for `not equal' and so on. We assume that these tokens are more than one character long and therefore are represented by names, not character literals.) bisonc++-4.13.01/documentation/manual/algorithm/lookaheads.yo0000644000175000017500000000401512633316117023051 0ustar frankfrankIn the previous section a grammer was discussed whose fifth state contained two items: one resulting in a shift-action, the other resulting in a reduce-action. This state contained these two items: itemization( it() item 0: tt(expr -> expr '+' expr .) it() item 1: tt(expr -> expr . '+' expr) ) Although this state in theory defines two different actions, in practice only one is used. This is a direct consequence of the tt(%left '+') specification, which is explained in this and the next section. When analyzing a grammar all states that can be reached from the augmented start rule are determined. In the current grammar's fifth state b() must decide which action to take: should it shift on tt('+') or should it reduce according to the item `tt(expr -> expr '+' expr .)'? What choice will b() make? Here the fact that b() implements a parser for a em(Look Ahead Left to Right (1)) (LALR(1)) grammar becomes relevant. 
B() computes em(look-ahead sets) to determine which alternative to select when confronted with a choice. The look-ahead set can be used to favor one action over another when generating tables for the parsing function. Sometimes the look-ahead sets allow b() simply to remove one action from the set of possible actions. When b() is called to process the example grammar while specifying the tt(--construction) option state five em(only) shows the reduction and em(not) the shifting action, as b() has removed that latter action from the action set. In state five the choice is between shifting a tt('+') token on the stack, or reducing the stack according to the rule verb( expr -> expr '+' expr ) Here, as we will shortly see, the tt('+') is em(also) an element of the em(look-ahead set) of the reducible item, creating a conflict: what to do on tt('+')? In this case the grammar designer has provided b() with a way out: the tt(%left) directive tells b() to favor a reduction over a shift, and so it removed tt(expr -> expr . '+' expr) from its set of actions in state five. bisonc++-4.13.01/documentation/manual/algorithm/intro.yo0000644000175000017500000001022212633316117022067 0ustar frankfrankThis chapter describes the algorithm that is used by b(). Generating parsers of course begins with a grammar to be analyzed. The analysis consists of these steps: itemization( it() First, a set of tt(FIRST) tokens is determined. The tt(FIRST) set of a nonterminal defines all terminal tokens that can be encountered when beginning to recognize that nonterminal. it() Having determined the tt(FIRST) set, the grammar itself is analyzed. Starting from the start rule all possible syntactically correct derivations of the grammar are determined. it() At each state actions are defined that are eventually used by the parser to determine what to do in a particular state. These actions are based on the next available token and may either involve a transition to another state (called a em(shift), since it involves processing a token by `shifting' over the next part of the input) or it may involve a reduction (a em(reduce) action), in which the parser stack's size is somewhat reduced. In the latter case, if an em(action) is associated with a particular rule, the action is executed as a side effect of the reduction. it() Sometimes the analysis takes the parser generator to a state in which a choice between a em(shift) or a em(reduce) action must be made. This is called a em(shift-reduce) conflict. Sometimes the parser generator is able to solve these conflicts itself, by looking at the next available token (the em(look-ahead) token). If the next token is not a possible continuation for either a shift or a reduce action that particular continuation can be discarded. Likewise, two reductions may be encountered in a state (a em(reduce-reduce) conflict). Here the same reasoning can be applied, maybe resulting in discarding one of the possible reductions. In all other cases (i.e., if the look-ahead token is possible for both continuations) a true conflict is observed which somehow is solved by the grammar designer. If the designer doesn't select an particular action, the parser generator reports a conflict and selects a default action. By default a em(shift) is used, rather than a em(reduce). With a em(reduce-reduce) conflict the default action is to reduce by the rule listed first in the grammar. it() In cases where input is not structured according to the rules of the grammar, a em(syntactic error) is observed. 
Error recovery may be attempted, to allow the parser to continue parsing. This is a desirable characteristic since it provides the user of a program with a full syntactic error report, rather than one error at a time. it() Following the analysis of the grammar, code is generated and the parsing algorithm (implemented in the parser's tt(parse()) function) processes input according to the tables generated by the parser generator. ) All the above phases are illustrated and discussed in the next sections. Additional details of the parsing process can be found in various books about compiler construction, e.g., in Aho, Sethi and Ullman's (2003) book bf(Compilers) (Addison-Wesley). In the sections below, the following grammar is used to illustrate the various phases: verb( %token NR %left '+' %% start: start expr | // empty ; expr: NR | expr '+' expr ; ) The grammar is interesting as it has a rule containing an empty alternative and because it harbors a shift-reduce conflict. The shift-reduce conflict is solved by explictly assigning a priority and association to the tt('+') token. The analysis starts by defining an additional rule, which is recognized (reduced) at end of input. This rule and the rules specified in the grammar together define what is known as the em(augmented grammar). In the coming sections the symbol tt($) is used to indicate `end of input'. From the above grammar the following augmented grammar is derived: verb( 1. start: start expr 2. start: // empty 3. expr: NR 4. expr: expr '+' expr 5. start_$: start (input ends here) ) b() itself produces an extensive analysis of any grammar it is offered when the option tt(--construction) is provided. bisonc++-4.13.01/documentation/manual/algorithm/pstates.yo0000644000175000017500000000326612633316117022431 0ustar frankfrankThe member function tt(parse()) is implemented using a finite-state machine. The values pushed on the parser stack are not simply token type codes; they represent the entire sequence of terminal and nonterminal symbols at or near the top of the stack. The current state collects all the information about previous input which is relevant to deciding what to do next. Each time a look-ahead token is read, the current parser state together with the current (not yet processed) token are looked up in a table. This table entry can say em(Shift the token). This also specifies a new parser state, which is then pushed onto the top of the parser stack. Or it can say em(Reduce using rule number n). This means that a certain number of tokens or groupings are taken off the top of the stack, and that the rule's grouping becomes the `next token' to be considered. That `next token' is then used in combination with the state then at the stack's top, to determine the next state to consider. This (next) state is then again pushed on the stack, and a new token is requested from the lexical scanner, and the process repeats itself. There are two special situations the parsing algorithm must consider: itemization( it() First, the lexical scanner may reach em(end-of-input). If the current state on top of the parser's stack is the start-state, then the reduction (which is called for in this situation) is in fact the (successful) end of the parsing process, and tt(parse()) returns the value 0, indicating a successful parsing. it() There is one other alternative: the table can say that the token is erroneous in the current state. This causes error processing to begin (see chapter ref(RECOVERY)). 
) bisonc++-4.13.01/documentation/manual/algorithm/examples/0000755000175000017500000000000012633316117022204 5ustar frankfrankbisonc++-4.13.01/documentation/manual/algorithm/examples/mandayam0000644000175000017500000000041212633316117023713 0ustar frankfrank%debug %token ID %left '+' '-' %left '*' '/' %right UNARY %% expr: expr '+' term | expr '-' term | term ; term: term '*' primary | term '/' primary | primary ; primary: '-' expr %prec UNARY | '+' expr %prec UNARY | ID ; bisonc++-4.13.01/documentation/manual/algorithm/examples/Parser.ih0000644000175000017500000000056012633316117023763 0ustar frankfrank // Include this file in the sources of the class Parser. // $insert class.h #include "Parser.h" // Add below here any includes etc. that are only // required for the compilation of Parser's sources. // UN-comment the next using-declaration if you want to use // symbols from the namespace std without specifying std:: //using namespace std; bisonc++-4.13.01/documentation/manual/algorithm/examples/rr10000644000175000017500000000054312633316117022635 0ustar frankfrank%stype char * %token WORD %% sequence: // empty { cout << "empty sequence\n"; } | maybeword | sequence WORD { cout << "added word " << $2 << endl; } ; maybeword: // empty { cout << "empty maybeword\n"; } | WORD { cout << "single word " << $1 << endl; } ; bisonc++-4.13.01/documentation/manual/algorithm/examples/Parserbase.h0000644000175000017500000000545312633316117024453 0ustar frankfrank// Generated by Bisonc++ V2.7.1 on Sat, 07 Aug 2010 21:39:39 +0200 #ifndef ParserBase_h_included #define ParserBase_h_included #include #include // $insert debugincludes #include #include #include #include #include namespace // anonymous { struct PI__; } class ParserBase { public: // $insert tokens // Symbolic tokens: enum Tokens__ { ID = 257, UNARY, }; // $insert STYPE typedef int STYPE__; private: int d_stackIdx__; std::vector d_stateStack__; std::vector d_valueStack__; protected: enum Return__ { PARSE_ACCEPT__ = 0, // values used as parse()'s return values PARSE_ABORT__ = 1 }; enum ErrorRecovery__ { DEFAULT_RECOVERY_MODE__, UNEXPECTED_TOKEN__, }; bool d_debug__; size_t d_nErrors__; size_t d_requiredTokens__; size_t d_acceptedTokens__; int d_token__; int d_nextToken__; size_t d_state__; STYPE__ *d_vsp__; STYPE__ d_val__; STYPE__ d_nextVal__; ParserBase(); // $insert debugdecl static std::ostringstream s_out__; std::string symbol__(int value) const; std::string stype__(char const *pre, STYPE__ const &semVal, char const *post = "") const; static std::ostream &dflush__(std::ostream &out); void ABORT() const; void ACCEPT() const; void ERROR() const; void clearin(); bool debug() const; void pop__(size_t count = 1); void push__(size_t nextState); void popToken__(); void pushToken__(int token); void reduce__(PI__ const &productionInfo); void errorVerbose__(); size_t top__() const; public: void setDebug(bool mode); }; inline bool ParserBase::debug() const { return d_debug__; } inline void ParserBase::setDebug(bool mode) { d_debug__ = mode; } inline void ParserBase::ABORT() const { // $insert debug if (d_debug__) s_out__ << "ABORT(): Parsing unsuccessful" << "\n" << dflush__; throw PARSE_ABORT__; } inline void ParserBase::ACCEPT() const { // $insert debug if (d_debug__) s_out__ << "ACCEPT(): Parsing successful" << "\n" << dflush__; throw PARSE_ACCEPT__; } inline void ParserBase::ERROR() const { // $insert debug if (d_debug__) s_out__ << "ERROR(): Forced error condition" << "\n" << dflush__; throw UNEXPECTED_TOKEN__; } // As a convenience, when including 
ParserBase.h its symbols are available as // symbols in the class Parser, too. #define Parser ParserBase #endif bisonc++-4.13.01/documentation/manual/algorithm/examples/notpeculiar0000644000175000017500000000031312633316117024451 0ustar frankfrank %token ID %left '-' %left '*' %right UNARY %% expr: expr '-' expr | expr '*' expr | '-' expr %prec UNARY | ID ; bisonc++-4.13.01/documentation/manual/algorithm/examples/dangling.tab.c0000644000175000017500000007027512633316117024713 0ustar frankfrank/* A Bison parser, made from dangling, by GNU bison 1.75. */ /* Skeleton parser for Yacc-like parsing with Bison, Copyright (C) 1984, 1989, 1990, 2000, 2001, 2002 Free Software Foundation, Inc. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ /* As a special exception, when this file is copied by Bison into a Bison output file, you may use that output file without restriction. This special exception was added by the Free Software Foundation in version 1.24 of Bison. */ /* Written by Richard Stallman by simplifying the original so called ``semantic'' parser. */ /* All symbols defined below should begin with yy or YY, to avoid infringing on user name space. This should be done even for local variables, as they might otherwise be expanded by user macros. There are some unavoidable exceptions within include files to define necessary library symbols; they are noted "INFRINGES ON USER NAME SPACE" below. */ /* Identify Bison output. */ #define YYBISON 1 /* Pure parsers. */ #define YYPURE 0 /* Using locations. */ #define YYLSP_NEEDED 0 /* Tokens. */ #ifndef YYTOKENTYPE # define YYTOKENTYPE /* Put the tokens into the symbol table, so that GDB and other debuggers know about them. */ enum yytokentype { IF = 258, ELSE = 259, VAR = 260 }; #endif #define IF 258 #define ELSE 259 #define VAR 260 /* Copy the first part of user declarations. */ /* Enabling traces. */ #ifndef YYDEBUG # define YYDEBUG 0 #endif /* Enabling verbose error messages. */ #ifdef YYERROR_VERBOSE # undef YYERROR_VERBOSE # define YYERROR_VERBOSE 1 #else # define YYERROR_VERBOSE 0 #endif #ifndef YYSTYPE typedef int yystype; # define YYSTYPE yystype # define YYSTYPE_IS_TRIVIAL 1 #endif #ifndef YYLTYPE typedef struct yyltype { int first_line; int first_column; int last_line; int last_column; } yyltype; # define YYLTYPE yyltype # define YYLTYPE_IS_TRIVIAL 1 #endif /* Copy the second part of user declarations. */ /* Line 213 of /usr/share/bison/yacc.c. */ #line 104 "dangling.tab.c" #if ! defined (yyoverflow) || YYERROR_VERBOSE /* The parser invokes alloca or malloc; define the necessary symbols. */ # if YYSTACK_USE_ALLOCA # define YYSTACK_ALLOC alloca # else # ifndef YYSTACK_USE_ALLOCA # if defined (alloca) || defined (_ALLOCA_H) # define YYSTACK_ALLOC alloca # else # ifdef __GNUC__ # define YYSTACK_ALLOC __builtin_alloca # endif # endif # endif # endif # ifdef YYSTACK_ALLOC /* Pacify GCC's `empty if-body' warning. 
*/ # define YYSTACK_FREE(Ptr) do { /* empty */; } while (0) # else # if defined (__STDC__) || defined (__cplusplus) # include /* INFRINGES ON USER NAME SPACE */ # define YYSIZE_T size_t # endif # define YYSTACK_ALLOC malloc # define YYSTACK_FREE free # endif #endif /* ! defined (yyoverflow) || YYERROR_VERBOSE */ #if (! defined (yyoverflow) \ && (! defined (__cplusplus) \ || (YYLTYPE_IS_TRIVIAL && YYSTYPE_IS_TRIVIAL))) /* A type that is properly aligned for any stack member. */ union yyalloc { short yyss; YYSTYPE yyvs; }; /* The size of the maximum gap between one aligned stack and the next. */ # define YYSTACK_GAP_MAX (sizeof (union yyalloc) - 1) /* The size of an array large to enough to hold all stacks, each with N elements. */ # define YYSTACK_BYTES(N) \ ((N) * (sizeof (short) + sizeof (YYSTYPE)) \ + YYSTACK_GAP_MAX) /* Copy COUNT objects from FROM to TO. The source and destination do not overlap. */ # ifndef YYCOPY # if 1 < __GNUC__ # define YYCOPY(To, From, Count) \ __builtin_memcpy (To, From, (Count) * sizeof (*(From))) # else # define YYCOPY(To, From, Count) \ do \ { \ register YYSIZE_T yyi; \ for (yyi = 0; yyi < (Count); yyi++) \ (To)[yyi] = (From)[yyi]; \ } \ while (0) # endif # endif /* Relocate STACK from its old location to the new one. The local variables YYSIZE and YYSTACKSIZE give the old and new number of elements in the stack, and YYPTR gives the new location of the stack. Advance YYPTR to a properly aligned location for the next stack. */ # define YYSTACK_RELOCATE(Stack) \ do \ { \ YYSIZE_T yynewbytes; \ YYCOPY (&yyptr->Stack, Stack, yysize); \ Stack = &yyptr->Stack; \ yynewbytes = yystacksize * sizeof (*Stack) + YYSTACK_GAP_MAX; \ yyptr += yynewbytes / sizeof (*yyptr); \ } \ while (0) #endif #if defined (__STDC__) || defined (__cplusplus) typedef signed char yysigned_char; #else typedef short yysigned_char; #endif /* YYFINAL -- State number of the termination state. */ #define YYFINAL 6 #define YYLAST 10 /* YYNTOKENS -- Number of terminals. */ #define YYNTOKENS 9 /* YYNNTS -- Number of nonterminals. */ #define YYNNTS 2 /* YYNRULES -- Number of rules. */ #define YYNRULES 4 /* YYNRULES -- Number of states. */ #define YYNSTATES 12 /* YYTRANSLATE(YYLEX) -- Bison symbol number corresponding to YYLEX. */ #define YYUNDEFTOK 2 #define YYMAXUTOK 260 #define YYTRANSLATE(X) \ ((size_t)(X) <= YYMAXUTOK ? yytranslate[X] : YYUNDEFTOK) /* YYTRANSLATE[YYLEX] -- Bison symbol number corresponding to YYLEX. */ static const size_t char yytranslate[] = { 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 7, 8, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 6, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 3, 4, 5 }; #if YYDEBUG /* YYPRHS[YYN] -- Index of the first RHS symbol of rule number YYN in YYRHS. */ static const size_t char yyprhs[] = { 0, 0, 3, 6, 12 }; /* YYRHS -- A `-1'-separated list of the rules' RHS. 
*/ static const yysigned_char yyrhs[] = { 10, 0, -1, 5, 6, -1, 3, 7, 5, 8, 10, -1, 3, 7, 5, 8, 10, 4, 10, -1 }; /* YYRLINE[YYN] -- source line where rule number YYN was defined. */ static const size_t char yyrline[] = { 0, 5, 5, 7, 9 }; #endif #if YYDEBUG || YYERROR_VERBOSE /* YYTNME[SYMBOL-NUM] -- String name of the symbol SYMBOL-NUM. First, the terminals, then, starting at YYNTOKENS, nonterminals. */ static const char *const yytname[] = { "$end", "error", "$undefined", "IF", "ELSE", "VAR", "';'", "'('", "')'", "$accept", "stmt", 0 }; #endif # ifdef YYPRINT /* YYTOKNUM[YYLEX-NUM] -- Internal token number corresponding to token YYLEX-NUM. */ static const size_t short yytoknum[] = { 0, 256, 257, 258, 259, 260, 59, 40, 41 }; # endif /* YYR1[YYN] -- Symbol number of symbol that rule YYN derives. */ static const size_t char yyr1[] = { 0, 9, 10, 10, 10 }; /* YYR2[YYN] -- Number of symbols composing right hand side of rule YYN. */ static const size_t char yyr2[] = { 0, 2, 2, 5, 7 }; /* YYDEFACT[STATE-NAME] -- Default rule to reduce with in state STATE-NUM when YYTABLE doesn't specify something else to do. Zero means the default is an error. */ static const size_t char yydefact[] = { 0, 0, 0, 0, 0, 2, 1, 0, 0, 3, 0, 4 }; /* YYDEFGOTO[NTERM-NUM]. */ static const yysigned_char yydefgoto[] = { -1, 3 }; /* YYPACT[STATE-NUM] -- Index in YYTABLE of the portion describing STATE-NUM. */ #define YYPACT_NINF -8 static const yysigned_char yypact[] = { -3, -2, 0, 4, 2, -8, -8, 1, -3, 6, -3, -8 }; /* YYPGOTO[NTERM-NUM]. */ static const yysigned_char yypgoto[] = { -8, -7 }; /* YYTABLE[YYPACT[STATE-NUM]]. What to do in state STATE-NUM. If positive, shift that token. If negative, reduce the rule which number is the opposite. If zero, do what YYDEFACT says. If YYTABLE_NINF, parse error. */ #define YYTABLE_NINF -1 static const size_t char yytable[] = { 1, 9, 2, 11, 6, 4, 5, 7, 0, 8, 10 }; static const yysigned_char yycheck[] = { 3, 8, 5, 10, 0, 7, 6, 5, -1, 8, 4 }; /* YYSTOS[STATE-NUM] -- The (internal number of the) accessing symbol of state STATE-NUM. */ static const size_t char yystos[] = { 0, 3, 5, 10, 7, 6, 0, 5, 8, 10, 4, 10 }; #if ! defined (YYSIZE_T) && defined (__SIZE_TYPE__) # define YYSIZE_T __SIZE_TYPE__ #endif #if ! defined (YYSIZE_T) && defined (size_t) # define YYSIZE_T size_t #endif #if ! defined (YYSIZE_T) # if defined (__STDC__) || defined (__cplusplus) # include /* INFRINGES ON USER NAME SPACE */ # define YYSIZE_T size_t # endif #endif #if ! defined (YYSIZE_T) # define YYSIZE_T size_t int #endif #define yyerrok (yyerrstatus = 0) #define yyclearin (yychar = YYEMPTY) #define YYEMPTY -2 #define YYEOF 0 #define YYACCEPT goto yyacceptlab #define YYABORT goto yyabortlab #define YYERROR goto yyerrlab1 /* Like YYERROR except do call yyerror. This remains here temporarily to ease the transition to the new meaning of YYERROR, for GCC. Once GCC version 2 has supplanted version 1, this can go. */ #define YYFAIL goto yyerrlab #define YYRECOVERING() (!!yyerrstatus) #define YYBACKUP(Token, Value) \ do \ if (yychar == YYEMPTY && yylen == 1) \ { \ yychar = (Token); \ yylval = (Value); \ yychar1 = YYTRANSLATE (yychar); \ YYPOPSTACK; \ goto yybackup; \ } \ else \ { \ yyerror ("syntax error: cannot back up"); \ YYERROR; \ } \ while (0) #define YYTERROR 1 #define YYERRCODE 256 /* YYLLOC_DEFAULT -- Compute the default location (before the actions are run). 
*/ #ifndef YYLLOC_DEFAULT # define YYLLOC_DEFAULT(Current, Rhs, N) \ Current.first_line = Rhs[1].first_line; \ Current.first_column = Rhs[1].first_column; \ Current.last_line = Rhs[N].last_line; \ Current.last_column = Rhs[N].last_column; #endif /* YYLEX -- calling `yylex' with the right arguments. */ #define YYLEX yylex () /* Enable debugging if requested. */ #if YYDEBUG # ifndef YYFPRINTF # include /* INFRINGES ON USER NAME SPACE */ # define YYFPRINTF fprintf # endif # define YYDPRINTF(Args) \ do { \ if (yydebug) \ YYFPRINTF Args; \ } while (0) # define YYDSYMPRINT(Args) \ do { \ if (yydebug) \ yysymprint Args; \ } while (0) /* Nonzero means print parse trace. It is left uninitialized so that multiple parsers can coexist. */ int yydebug; #else /* !YYDEBUG */ # define YYDPRINTF(Args) # define YYDSYMPRINT(Args) #endif /* !YYDEBUG */ /* YYINITDEPTH -- initial size of the parser's stacks. */ #ifndef YYINITDEPTH # define YYINITDEPTH 200 #endif /* YYMAXDEPTH -- maximum size the stacks can grow to (effective only if the built-in stack extension method is used). Do not make this value too large; the results are undefined if SIZE_MAX < YYSTACK_BYTES (YYMAXDEPTH) evaluated with infinite-precision integer arithmetic. */ #if YYMAXDEPTH == 0 # undef YYMAXDEPTH #endif #ifndef YYMAXDEPTH # define YYMAXDEPTH 10000 #endif #if YYERROR_VERBOSE # ifndef yystrlen # if defined (__GLIBC__) && defined (_STRING_H) # define yystrlen strlen # else /* Return the length of YYSTR. */ static YYSIZE_T # if defined (__STDC__) || defined (__cplusplus) yystrlen (const char *yystr) # else yystrlen (yystr) const char *yystr; # endif { register const char *yys = yystr; while (*yys++ != '\0') continue; return yys - yystr - 1; } # endif # endif # ifndef yystpcpy # if defined (__GLIBC__) && defined (_STRING_H) && defined (_GNU_SOURCE) # define yystpcpy stpcpy # else /* Copy YYSRC to YYDEST, returning the address of the terminating '\0' in YYDEST. */ static char * # if defined (__STDC__) || defined (__cplusplus) yystpcpy (char *yydest, const char *yysrc) # else yystpcpy (yydest, yysrc) char *yydest; const char *yysrc; # endif { register char *yyd = yydest; register const char *yys = yysrc; while ((*yyd++ = *yys++) != '\0') continue; return yyd - 1; } # endif # endif #endif /* !YYERROR_VERBOSE */ #if YYDEBUG /*-----------------------------. | Print this symbol on YYOUT. | `-----------------------------*/ static void #if defined (__STDC__) || defined (__cplusplus) yysymprint (FILE* yyout, int yytype, YYSTYPE yyvalue) #else yysymprint (yyout, yytype, yyvalue) FILE* yyout; int yytype; YYSTYPE yyvalue; #endif { /* Pacify ``unused variable'' warnings. */ (void) yyvalue; if (yytype < YYNTOKENS) { YYFPRINTF (yyout, "token %s (", yytname[yytype]); # ifdef YYPRINT YYPRINT (yyout, yytoknum[yytype], yyvalue); # endif } else YYFPRINTF (yyout, "nterm %s (", yytname[yytype]); switch (yytype) { default: break; } YYFPRINTF (yyout, ")"); } #endif /* YYDEBUG. */ /*-----------------------------------------------. | Release the memory associated to this symbol. | `-----------------------------------------------*/ static void #if defined (__STDC__) || defined (__cplusplus) yydestruct (int yytype, YYSTYPE yyvalue) #else yydestruct (yytype, yyvalue) int yytype; YYSTYPE yyvalue; #endif { /* Pacify ``unused variable'' warnings. */ (void) yyvalue; switch (yytype) { default: break; } } /* The user can define YYPARSE_PARAM as the name of an argument to be passed into yyparse. The argument should have type void *. It should actually point to an object. 
Grammar actions can access the variable by casting it to the proper pointer type. */ #ifdef YYPARSE_PARAM # if defined (__STDC__) || defined (__cplusplus) # define YYPARSE_PARAM_ARG void *YYPARSE_PARAM # define YYPARSE_PARAM_DECL # else # define YYPARSE_PARAM_ARG YYPARSE_PARAM # define YYPARSE_PARAM_DECL void *YYPARSE_PARAM; # endif #else /* !YYPARSE_PARAM */ # define YYPARSE_PARAM_ARG # define YYPARSE_PARAM_DECL #endif /* !YYPARSE_PARAM */ /* Prevent warning if -Wstrict-prototypes. */ #ifdef __GNUC__ # ifdef YYPARSE_PARAM int yyparse (void *); # else int yyparse (void); # endif #endif /* The lookahead symbol. */ int yychar; /* The semantic value of the lookahead symbol. */ YYSTYPE yylval; /* Number of parse errors so far. */ int yynerrs; int yyparse (YYPARSE_PARAM_ARG) YYPARSE_PARAM_DECL { register int yystate; register int yyn; int yyresult; /* Number of tokens to shift before error messages enabled. */ int yyerrstatus; /* Lookahead token as an internal (translated) token number. */ int yychar1 = 0; /* Three stacks and their tools: `yyss': related to states, `yyvs': related to semantic values, `yyls': related to locations. Refer to the stacks thru separate pointers, to allow yyoverflow to reallocate them elsewhere. */ /* The state stack. */ short yyssa[YYINITDEPTH]; short *yyss = yyssa; register short *yyssp; /* The semantic value stack. */ YYSTYPE yyvsa[YYINITDEPTH]; YYSTYPE *yyvs = yyvsa; register YYSTYPE *yyvsp; #define YYPOPSTACK (yyvsp--, yyssp--) YYSIZE_T yystacksize = YYINITDEPTH; /* The variables used to return semantic value and location from the action routines. */ YYSTYPE yyval; /* When reducing, the number of symbols on the RHS of the reduced rule. */ int yylen; YYDPRINTF ((stderr, "Starting parse\n")); yystate = 0; yyerrstatus = 0; yynerrs = 0; yychar = YYEMPTY; /* Cause a token to be read. */ /* Initialize stack pointers. Waste one element of value and location stack so that they stay on the same level as the state stack. The wasted elements are never initialized. */ yyssp = yyss; yyvsp = yyvs; goto yysetstate; /*------------------------------------------------------------. | yynewstate -- Push a new state, which is found in yystate. | `------------------------------------------------------------*/ yynewstate: /* In all cases, when you get here, the value and location stacks have just been pushed. so pushing a state here evens the stacks. */ yyssp++; yysetstate: *yyssp = yystate; if (yyssp >= yyss + yystacksize - 1) { /* Get the current used size of the three stacks, in elements. */ YYSIZE_T yysize = yyssp - yyss + 1; #ifdef yyoverflow { /* Give user a chance to reallocate the stack. Use copies of these so that the &'s don't force the real ones into memory. */ YYSTYPE *yyvs1 = yyvs; short *yyss1 = yyss; /* Each stack pointer address is followed by the size of the data in use in that stack, in bytes. This used to be a conditional around just the two extra args, but that might be undefined if yyoverflow is a macro. */ yyoverflow ("parser stack overflow", &yyss1, yysize * sizeof (*yyssp), &yyvs1, yysize * sizeof (*yyvsp), &yystacksize); yyss = yyss1; yyvs = yyvs1; } #else /* no yyoverflow */ # ifndef YYSTACK_RELOCATE goto yyoverflowlab; # else /* Extend the stack our own way. */ if (yystacksize >= YYMAXDEPTH) goto yyoverflowlab; yystacksize *= 2; if (yystacksize > YYMAXDEPTH) yystacksize = YYMAXDEPTH; { short *yyss1 = yyss; union yyalloc *yyptr = (union yyalloc *) YYSTACK_ALLOC (YYSTACK_BYTES (yystacksize)); if (! 
yyptr) goto yyoverflowlab; YYSTACK_RELOCATE (yyss); YYSTACK_RELOCATE (yyvs); # undef YYSTACK_RELOCATE if (yyss1 != yyssa) YYSTACK_FREE (yyss1); } # endif #endif /* no yyoverflow */ yyssp = yyss + yysize - 1; yyvsp = yyvs + yysize - 1; YYDPRINTF ((stderr, "Stack size increased to %lu\n", (size_t long int) yystacksize)); if (yyssp >= yyss + yystacksize - 1) YYABORT; } YYDPRINTF ((stderr, "Entering state %d\n", yystate)); goto yybackup; /*-----------. | yybackup. | `-----------*/ yybackup: /* Do appropriate processing given the current state. */ /* Read a lookahead token if we need one and don't already have one. */ /* yyresume: */ /* First try to decide what to do without reference to lookahead token. */ yyn = yypact[yystate]; if (yyn == YYPACT_NINF) goto yydefault; /* Not known => get a lookahead token if don't already have one. */ /* yychar is either YYEMPTY or YYEOF or a valid token in external form. */ if (yychar == YYEMPTY) { YYDPRINTF ((stderr, "Reading a token: ")); yychar = YYLEX; } /* Convert token to internal form (in yychar1) for indexing tables with. */ if (yychar <= 0) /* This means end of input. */ { yychar1 = 0; yychar = YYEOF; /* Don't call YYLEX any more. */ YYDPRINTF ((stderr, "Now at end of input.\n")); } else { yychar1 = YYTRANSLATE (yychar); /* We have to keep this `#if YYDEBUG', since we use variables which are defined only if `YYDEBUG' is set. */ YYDPRINTF ((stderr, "Next token is ")); YYDSYMPRINT ((stderr, yychar1, yylval)); YYDPRINTF ((stderr, "\n")); } /* If the proper action on seeing token YYCHAR1 is to reduce or to detect an error, take that action. */ yyn += yychar1; if (yyn < 0 || YYLAST < yyn || yycheck[yyn] != yychar1) goto yydefault; yyn = yytable[yyn]; if (yyn <= 0) { if (yyn == 0 || yyn == YYTABLE_NINF) goto yyerrlab; yyn = -yyn; goto yyreduce; } if (yyn == YYFINAL) YYACCEPT; /* Shift the lookahead token. */ YYDPRINTF ((stderr, "Shifting token %d (%s), ", yychar, yytname[yychar1])); /* Discard the token being shifted unless it is eof. */ if (yychar != YYEOF) yychar = YYEMPTY; *++yyvsp = yylval; /* Count tokens shifted since error; after three, turn off error status. */ if (yyerrstatus) yyerrstatus--; yystate = yyn; goto yynewstate; /*-----------------------------------------------------------. | yydefault -- do the default action for the current state. | `-----------------------------------------------------------*/ yydefault: yyn = yydefact[yystate]; if (yyn == 0) goto yyerrlab; goto yyreduce; /*-----------------------------. | yyreduce -- Do a reduction. | `-----------------------------*/ yyreduce: /* yyn is the number of a rule to reduce with. */ yylen = yyr2[yyn]; /* If YYLEN is nonzero, implement the default value of the action: `$$ = $1'. Otherwise, the following line sets YYVAL to garbage. This behavior is undocumented and Bison users should not rely upon it. Assigning to YYVAL unconditionally makes the parser a bit smaller, and it avoids a GCC warning that YYVAL may be used uninitialized. */ yyval = yyvsp[1-yylen]; #if YYDEBUG /* We have to keep this `#if YYDEBUG', since we use variables which are defined only if `YYDEBUG' is set. */ if (yydebug) { int yyi; YYFPRINTF (stderr, "Reducing via rule %d (line %d), ", yyn - 1, yyrline[yyn]); /* Print the symbols being reduced, and their result. */ for (yyi = yyprhs[yyn]; yyrhs[yyi] >= 0; yyi++) YYFPRINTF (stderr, "%s ", yytname[yyrhs[yyi]]); YYFPRINTF (stderr, " -> %s\n", yytname[yyr1[yyn]]); } #endif switch (yyn) { } /* Line 1016 of /usr/share/bison/yacc.c. 
*/ #line 911 "dangling.tab.c" yyvsp -= yylen; yyssp -= yylen; #if YYDEBUG if (yydebug) { short *yyssp1 = yyss - 1; YYFPRINTF (stderr, "state stack now"); while (yyssp1 != yyssp) YYFPRINTF (stderr, " %d", *++yyssp1); YYFPRINTF (stderr, "\n"); } #endif *++yyvsp = yyval; /* Now `shift' the result of the reduction. Determine what state that goes to, based on the state we popped back to and the rule number reduced by. */ yyn = yyr1[yyn]; yystate = yypgoto[yyn - YYNTOKENS] + *yyssp; if (0 <= yystate && yystate <= YYLAST && yycheck[yystate] == *yyssp) yystate = yytable[yystate]; else yystate = yydefgoto[yyn - YYNTOKENS]; goto yynewstate; /*------------------------------------. | yyerrlab -- here on detecting error | `------------------------------------*/ yyerrlab: /* If not already recovering from an error, report this error. */ if (!yyerrstatus) { ++yynerrs; #if YYERROR_VERBOSE yyn = yypact[yystate]; if (YYPACT_NINF < yyn && yyn < YYLAST) { YYSIZE_T yysize = 0; int yytype = YYTRANSLATE (yychar); char *yymsg; int yyx, yycount; yycount = 0; /* Start YYX at -YYN if negative to avoid negative indexes in YYCHECK. */ for (yyx = yyn < 0 ? -yyn : 0; yyx < (int) (sizeof (yytname) / sizeof (char *)); yyx++) if (yycheck[yyx + yyn] == yyx && yyx != YYTERROR) yysize += yystrlen (yytname[yyx]) + 15, yycount++; yysize += yystrlen ("parse error, unexpected ") + 1; yysize += yystrlen (yytname[yytype]); yymsg = (char *) YYSTACK_ALLOC (yysize); if (yymsg != 0) { char *yyp = yystpcpy (yymsg, "parse error, unexpected "); yyp = yystpcpy (yyp, yytname[yytype]); if (yycount < 5) { yycount = 0; for (yyx = yyn < 0 ? -yyn : 0; yyx < (int) (sizeof (yytname) / sizeof (char *)); yyx++) if (yycheck[yyx + yyn] == yyx && yyx != YYTERROR) { const char *yyq = ! yycount ? ", expecting " : " or "; yyp = yystpcpy (yyp, yyq); yyp = yystpcpy (yyp, yytname[yyx]); yycount++; } } yyerror (yymsg); YYSTACK_FREE (yymsg); } else yyerror ("parse error; also virtual memory exhausted"); } else #endif /* YYERROR_VERBOSE */ yyerror ("parse error"); } goto yyerrlab1; /*----------------------------------------------------. | yyerrlab1 -- error raised explicitly by an action. | `----------------------------------------------------*/ yyerrlab1: if (yyerrstatus == 3) { /* If just tried and failed to reuse lookahead token after an error, discard it. */ /* Return failure if at end of input. */ if (yychar == YYEOF) { /* Pop the error token. */ YYPOPSTACK; /* Pop the rest of the stack. */ while (yyssp > yyss) { YYDPRINTF ((stderr, "Error: popping ")); YYDSYMPRINT ((stderr, yystos[*yyssp], *yyvsp)); YYDPRINTF ((stderr, "\n")); yydestruct (yystos[*yyssp], *yyvsp); YYPOPSTACK; } YYABORT; } YYDPRINTF ((stderr, "Discarding token %d (%s).\n", yychar, yytname[yychar1])); yydestruct (yychar1, yylval); yychar = YYEMPTY; } /* Else will try to reuse lookahead token after shifting the error token. */ yyerrstatus = 3; /* Each real token shifted decrements this. */ for (;;) { yyn = yypact[yystate]; if (yyn != YYPACT_NINF) { yyn += YYTERROR; if (0 <= yyn && yyn <= YYLAST && yycheck[yyn] == YYTERROR) { yyn = yytable[yyn]; if (0 < yyn) break; } } /* Pop the current state because it cannot handle the error token. 
*/ if (yyssp == yyss) YYABORT; YYDPRINTF ((stderr, "Error: popping ")); YYDSYMPRINT ((stderr, yystos[*yyssp], *yyvsp)); YYDPRINTF ((stderr, "\n")); yydestruct (yystos[yystate], *yyvsp); yyvsp--; yystate = *--yyssp; #if YYDEBUG if (yydebug) { short *yyssp1 = yyss - 1; YYFPRINTF (stderr, "Error: state stack now"); while (yyssp1 != yyssp) YYFPRINTF (stderr, " %d", *++yyssp1); YYFPRINTF (stderr, "\n"); } #endif } if (yyn == YYFINAL) YYACCEPT; YYDPRINTF ((stderr, "Shifting error token, ")); *++yyvsp = yylval; yystate = yyn; goto yynewstate; /*-------------------------------------. | yyacceptlab -- YYACCEPT comes here. | `-------------------------------------*/ yyacceptlab: yyresult = 0; goto yyreturn; /*-----------------------------------. | yyabortlab -- YYABORT comes here. | `-----------------------------------*/ yyabortlab: yyresult = 1; goto yyreturn; #ifndef yyoverflow /*----------------------------------------------. | yyoverflowlab -- parser overflow comes here. | `----------------------------------------------*/ yyoverflowlab: yyerror ("parser stack overflow"); yyresult = 2; /* Fall through. */ #endif yyreturn: #ifndef yyoverflow if (yyss != yyssa) YYSTACK_FREE (yyss); #endif return yyresult; } #line 5 "dangling" bisonc++-4.13.01/documentation/manual/algorithm/examples/rr1.tab.c0000644000175000017500000007034112633316117023626 0ustar frankfrank/* A Bison parser, made from rr1, by GNU bison 1.75. */ /* Skeleton parser for Yacc-like parsing with Bison, Copyright (C) 1984, 1989, 1990, 2000, 2001, 2002 Free Software Foundation, Inc. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ /* As a special exception, when this file is copied by Bison into a Bison output file, you may use that output file without restriction. This special exception was added by the Free Software Foundation in version 1.24 of Bison. */ /* Written by Richard Stallman by simplifying the original so called ``semantic'' parser. */ /* All symbols defined below should begin with yy or YY, to avoid infringing on user name space. This should be done even for local variables, as they might otherwise be expanded by user macros. There are some unavoidable exceptions within include files to define necessary library symbols; they are noted "INFRINGES ON USER NAME SPACE" below. */ /* Identify Bison output. */ #define YYBISON 1 /* Pure parsers. */ #define YYPURE 0 /* Using locations. */ #define YYLSP_NEEDED 0 /* Tokens. */ #ifndef YYTOKENTYPE # define YYTOKENTYPE /* Put the tokens into the symbol table, so that GDB and other debuggers know about them. */ enum yytokentype { WORD = 258 }; #endif #define WORD 258 /* Copy the first part of user declarations. */ #line 1 "rr1" #define YYSSTYPE char * /* Enabling traces. */ #ifndef YYDEBUG # define YYDEBUG 0 #endif /* Enabling verbose error messages. 
*/ #ifdef YYERROR_VERBOSE # undef YYERROR_VERBOSE # define YYERROR_VERBOSE 1 #else # define YYERROR_VERBOSE 0 #endif #ifndef YYSTYPE typedef int yystype; # define YYSTYPE yystype # define YYSTYPE_IS_TRIVIAL 1 #endif #ifndef YYLTYPE typedef struct yyltype { int first_line; int first_column; int last_line; int last_column; } yyltype; # define YYLTYPE yyltype # define YYLTYPE_IS_TRIVIAL 1 #endif /* Copy the second part of user declarations. */ /* Line 213 of /usr/share/bison/yacc.c. */ #line 103 "rr1.tab.c" #if ! defined (yyoverflow) || YYERROR_VERBOSE /* The parser invokes alloca or malloc; define the necessary symbols. */ # if YYSTACK_USE_ALLOCA # define YYSTACK_ALLOC alloca # else # ifndef YYSTACK_USE_ALLOCA # if defined (alloca) || defined (_ALLOCA_H) # define YYSTACK_ALLOC alloca # else # ifdef __GNUC__ # define YYSTACK_ALLOC __builtin_alloca # endif # endif # endif # endif # ifdef YYSTACK_ALLOC /* Pacify GCC's `empty if-body' warning. */ # define YYSTACK_FREE(Ptr) do { /* empty */; } while (0) # else # if defined (__STDC__) || defined (__cplusplus) # include /* INFRINGES ON USER NAME SPACE */ # define YYSIZE_T size_t # endif # define YYSTACK_ALLOC malloc # define YYSTACK_FREE free # endif #endif /* ! defined (yyoverflow) || YYERROR_VERBOSE */ #if (! defined (yyoverflow) \ && (! defined (__cplusplus) \ || (YYLTYPE_IS_TRIVIAL && YYSTYPE_IS_TRIVIAL))) /* A type that is properly aligned for any stack member. */ union yyalloc { short yyss; YYSTYPE yyvs; }; /* The size of the maximum gap between one aligned stack and the next. */ # define YYSTACK_GAP_MAX (sizeof (union yyalloc) - 1) /* The size of an array large to enough to hold all stacks, each with N elements. */ # define YYSTACK_BYTES(N) \ ((N) * (sizeof (short) + sizeof (YYSTYPE)) \ + YYSTACK_GAP_MAX) /* Copy COUNT objects from FROM to TO. The source and destination do not overlap. */ # ifndef YYCOPY # if 1 < __GNUC__ # define YYCOPY(To, From, Count) \ __builtin_memcpy (To, From, (Count) * sizeof (*(From))) # else # define YYCOPY(To, From, Count) \ do \ { \ register YYSIZE_T yyi; \ for (yyi = 0; yyi < (Count); yyi++) \ (To)[yyi] = (From)[yyi]; \ } \ while (0) # endif # endif /* Relocate STACK from its old location to the new one. The local variables YYSIZE and YYSTACKSIZE give the old and new number of elements in the stack, and YYPTR gives the new location of the stack. Advance YYPTR to a properly aligned location for the next stack. */ # define YYSTACK_RELOCATE(Stack) \ do \ { \ YYSIZE_T yynewbytes; \ YYCOPY (&yyptr->Stack, Stack, yysize); \ Stack = &yyptr->Stack; \ yynewbytes = yystacksize * sizeof (*Stack) + YYSTACK_GAP_MAX; \ yyptr += yynewbytes / sizeof (*yyptr); \ } \ while (0) #endif #if defined (__STDC__) || defined (__cplusplus) typedef signed char yysigned_char; #else typedef short yysigned_char; #endif /* YYFINAL -- State number of the termination state. */ #define YYFINAL 4 #define YYLAST 3 /* YYNTOKENS -- Number of terminals. */ #define YYNTOKENS 4 /* YYNNTS -- Number of nonterminals. */ #define YYNNTS 3 /* YYNRULES -- Number of rules. */ #define YYNRULES 6 /* YYNRULES -- Number of states. */ #define YYNSTATES 6 /* YYTRANSLATE(YYLEX) -- Bison symbol number corresponding to YYLEX. */ #define YYUNDEFTOK 2 #define YYMAXUTOK 258 #define YYTRANSLATE(X) \ ((size_t)(X) <= YYMAXUTOK ? yytranslate[X] : YYUNDEFTOK) /* YYTRANSLATE[YYLEX] -- Bison symbol number corresponding to YYLEX. 
*/ static const size_t char yytranslate[] = { 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 3 }; #if YYDEBUG /* YYPRHS[YYN] -- Index of the first RHS symbol of rule number YYN in YYRHS. */ static const size_t char yyprhs[] = { 0, 0, 3, 4, 6, 9, 10 }; /* YYRHS -- A `-1'-separated list of the rules' RHS. */ static const yysigned_char yyrhs[] = { 5, 0, -1, -1, 6, -1, 5, 3, -1, -1, 3, -1 }; /* YYRLINE[YYN] -- source line where rule number YYN was defined. */ static const size_t char yyrline[] = { 0, 9, 9, 14, 16, 23, 28 }; #endif #if YYDEBUG || YYERROR_VERBOSE /* YYTNME[SYMBOL-NUM] -- String name of the symbol SYMBOL-NUM. First, the terminals, then, starting at YYNTOKENS, nonterminals. */ static const char *const yytname[] = { "$end", "error", "$undefined", "WORD", "$accept", "sequence", "maybeword", 0 }; #endif # ifdef YYPRINT /* YYTOKNUM[YYLEX-NUM] -- Internal token number corresponding to token YYLEX-NUM. */ static const size_t short yytoknum[] = { 0, 256, 257, 258 }; # endif /* YYR1[YYN] -- Symbol number of symbol that rule YYN derives. */ static const size_t char yyr1[] = { 0, 4, 5, 5, 5, 6, 6 }; /* YYR2[YYN] -- Number of symbols composing right hand side of rule YYN. */ static const size_t char yyr2[] = { 0, 2, 0, 1, 2, 0, 1 }; /* YYDEFACT[STATE-NAME] -- Default rule to reduce with in state STATE-NUM when YYTABLE doesn't specify something else to do. Zero means the default is an error. */ static const size_t char yydefact[] = { 2, 6, 0, 3, 1, 4 }; /* YYDEFGOTO[NTERM-NUM]. */ static const yysigned_char yydefgoto[] = { -1, 2, 3 }; /* YYPACT[STATE-NUM] -- Index in YYTABLE of the portion describing STATE-NUM. */ #define YYPACT_NINF -3 static const yysigned_char yypact[] = { -2, -3, 0, -3, -3, -3 }; /* YYPGOTO[NTERM-NUM]. */ static const yysigned_char yypgoto[] = { -3, -3, -3 }; /* YYTABLE[YYPACT[STATE-NUM]]. What to do in state STATE-NUM. If positive, shift that token. If negative, reduce the rule which number is the opposite. If zero, do what YYDEFACT says. If YYTABLE_NINF, parse error. */ #define YYTABLE_NINF -1 static const size_t char yytable[] = { 4, 1, 0, 5 }; static const yysigned_char yycheck[] = { 0, 3, -1, 3 }; /* YYSTOS[STATE-NUM] -- The (internal number of the) accessing symbol of state STATE-NUM. */ static const size_t char yystos[] = { 0, 3, 5, 6, 0, 3 }; #if ! defined (YYSIZE_T) && defined (__SIZE_TYPE__) # define YYSIZE_T __SIZE_TYPE__ #endif #if ! defined (YYSIZE_T) && defined (size_t) # define YYSIZE_T size_t #endif #if ! defined (YYSIZE_T) # if defined (__STDC__) || defined (__cplusplus) # include /* INFRINGES ON USER NAME SPACE */ # define YYSIZE_T size_t # endif #endif #if ! 
defined (YYSIZE_T) # define YYSIZE_T size_t int #endif #define yyerrok (yyerrstatus = 0) #define yyclearin (yychar = YYEMPTY) #define YYEMPTY -2 #define YYEOF 0 #define YYACCEPT goto yyacceptlab #define YYABORT goto yyabortlab #define YYERROR goto yyerrlab1 /* Like YYERROR except do call yyerror. This remains here temporarily to ease the transition to the new meaning of YYERROR, for GCC. Once GCC version 2 has supplanted version 1, this can go. */ #define YYFAIL goto yyerrlab #define YYRECOVERING() (!!yyerrstatus) #define YYBACKUP(Token, Value) \ do \ if (yychar == YYEMPTY && yylen == 1) \ { \ yychar = (Token); \ yylval = (Value); \ yychar1 = YYTRANSLATE (yychar); \ YYPOPSTACK; \ goto yybackup; \ } \ else \ { \ yyerror ("syntax error: cannot back up"); \ YYERROR; \ } \ while (0) #define YYTERROR 1 #define YYERRCODE 256 /* YYLLOC_DEFAULT -- Compute the default location (before the actions are run). */ #ifndef YYLLOC_DEFAULT # define YYLLOC_DEFAULT(Current, Rhs, N) \ Current.first_line = Rhs[1].first_line; \ Current.first_column = Rhs[1].first_column; \ Current.last_line = Rhs[N].last_line; \ Current.last_column = Rhs[N].last_column; #endif /* YYLEX -- calling `yylex' with the right arguments. */ #define YYLEX yylex () /* Enable debugging if requested. */ #if YYDEBUG # ifndef YYFPRINTF # include /* INFRINGES ON USER NAME SPACE */ # define YYFPRINTF fprintf # endif # define YYDPRINTF(Args) \ do { \ if (yydebug) \ YYFPRINTF Args; \ } while (0) # define YYDSYMPRINT(Args) \ do { \ if (yydebug) \ yysymprint Args; \ } while (0) /* Nonzero means print parse trace. It is left uninitialized so that multiple parsers can coexist. */ int yydebug; #else /* !YYDEBUG */ # define YYDPRINTF(Args) # define YYDSYMPRINT(Args) #endif /* !YYDEBUG */ /* YYINITDEPTH -- initial size of the parser's stacks. */ #ifndef YYINITDEPTH # define YYINITDEPTH 200 #endif /* YYMAXDEPTH -- maximum size the stacks can grow to (effective only if the built-in stack extension method is used). Do not make this value too large; the results are undefined if SIZE_MAX < YYSTACK_BYTES (YYMAXDEPTH) evaluated with infinite-precision integer arithmetic. */ #if YYMAXDEPTH == 0 # undef YYMAXDEPTH #endif #ifndef YYMAXDEPTH # define YYMAXDEPTH 10000 #endif #if YYERROR_VERBOSE # ifndef yystrlen # if defined (__GLIBC__) && defined (_STRING_H) # define yystrlen strlen # else /* Return the length of YYSTR. */ static YYSIZE_T # if defined (__STDC__) || defined (__cplusplus) yystrlen (const char *yystr) # else yystrlen (yystr) const char *yystr; # endif { register const char *yys = yystr; while (*yys++ != '\0') continue; return yys - yystr - 1; } # endif # endif # ifndef yystpcpy # if defined (__GLIBC__) && defined (_STRING_H) && defined (_GNU_SOURCE) # define yystpcpy stpcpy # else /* Copy YYSRC to YYDEST, returning the address of the terminating '\0' in YYDEST. */ static char * # if defined (__STDC__) || defined (__cplusplus) yystpcpy (char *yydest, const char *yysrc) # else yystpcpy (yydest, yysrc) char *yydest; const char *yysrc; # endif { register char *yyd = yydest; register const char *yys = yysrc; while ((*yyd++ = *yys++) != '\0') continue; return yyd - 1; } # endif # endif #endif /* !YYERROR_VERBOSE */ #if YYDEBUG /*-----------------------------. | Print this symbol on YYOUT. 
| `-----------------------------*/ static void #if defined (__STDC__) || defined (__cplusplus) yysymprint (FILE* yyout, int yytype, YYSTYPE yyvalue) #else yysymprint (yyout, yytype, yyvalue) FILE* yyout; int yytype; YYSTYPE yyvalue; #endif { /* Pacify ``unused variable'' warnings. */ (void) yyvalue; if (yytype < YYNTOKENS) { YYFPRINTF (yyout, "token %s (", yytname[yytype]); # ifdef YYPRINT YYPRINT (yyout, yytoknum[yytype], yyvalue); # endif } else YYFPRINTF (yyout, "nterm %s (", yytname[yytype]); switch (yytype) { default: break; } YYFPRINTF (yyout, ")"); } #endif /* YYDEBUG. */ /*-----------------------------------------------. | Release the memory associated to this symbol. | `-----------------------------------------------*/ static void #if defined (__STDC__) || defined (__cplusplus) yydestruct (int yytype, YYSTYPE yyvalue) #else yydestruct (yytype, yyvalue) int yytype; YYSTYPE yyvalue; #endif { /* Pacify ``unused variable'' warnings. */ (void) yyvalue; switch (yytype) { default: break; } } /* The user can define YYPARSE_PARAM as the name of an argument to be passed into yyparse. The argument should have type void *. It should actually point to an object. Grammar actions can access the variable by casting it to the proper pointer type. */ #ifdef YYPARSE_PARAM # if defined (__STDC__) || defined (__cplusplus) # define YYPARSE_PARAM_ARG void *YYPARSE_PARAM # define YYPARSE_PARAM_DECL # else # define YYPARSE_PARAM_ARG YYPARSE_PARAM # define YYPARSE_PARAM_DECL void *YYPARSE_PARAM; # endif #else /* !YYPARSE_PARAM */ # define YYPARSE_PARAM_ARG # define YYPARSE_PARAM_DECL #endif /* !YYPARSE_PARAM */ /* Prevent warning if -Wstrict-prototypes. */ #ifdef __GNUC__ # ifdef YYPARSE_PARAM int yyparse (void *); # else int yyparse (void); # endif #endif /* The lookahead symbol. */ int yychar; /* The semantic value of the lookahead symbol. */ YYSTYPE yylval; /* Number of parse errors so far. */ int yynerrs; int yyparse (YYPARSE_PARAM_ARG) YYPARSE_PARAM_DECL { register int yystate; register int yyn; int yyresult; /* Number of tokens to shift before error messages enabled. */ int yyerrstatus; /* Lookahead token as an internal (translated) token number. */ int yychar1 = 0; /* Three stacks and their tools: `yyss': related to states, `yyvs': related to semantic values, `yyls': related to locations. Refer to the stacks thru separate pointers, to allow yyoverflow to reallocate them elsewhere. */ /* The state stack. */ short yyssa[YYINITDEPTH]; short *yyss = yyssa; register short *yyssp; /* The semantic value stack. */ YYSTYPE yyvsa[YYINITDEPTH]; YYSTYPE *yyvs = yyvsa; register YYSTYPE *yyvsp; #define YYPOPSTACK (yyvsp--, yyssp--) YYSIZE_T yystacksize = YYINITDEPTH; /* The variables used to return semantic value and location from the action routines. */ YYSTYPE yyval; /* When reducing, the number of symbols on the RHS of the reduced rule. */ int yylen; YYDPRINTF ((stderr, "Starting parse\n")); yystate = 0; yyerrstatus = 0; yynerrs = 0; yychar = YYEMPTY; /* Cause a token to be read. */ /* Initialize stack pointers. Waste one element of value and location stack so that they stay on the same level as the state stack. The wasted elements are never initialized. */ yyssp = yyss; yyvsp = yyvs; goto yysetstate; /*------------------------------------------------------------. | yynewstate -- Push a new state, which is found in yystate. | `------------------------------------------------------------*/ yynewstate: /* In all cases, when you get here, the value and location stacks have just been pushed. 
so pushing a state here evens the stacks. */ yyssp++; yysetstate: *yyssp = yystate; if (yyssp >= yyss + yystacksize - 1) { /* Get the current used size of the three stacks, in elements. */ YYSIZE_T yysize = yyssp - yyss + 1; #ifdef yyoverflow { /* Give user a chance to reallocate the stack. Use copies of these so that the &'s don't force the real ones into memory. */ YYSTYPE *yyvs1 = yyvs; short *yyss1 = yyss; /* Each stack pointer address is followed by the size of the data in use in that stack, in bytes. This used to be a conditional around just the two extra args, but that might be undefined if yyoverflow is a macro. */ yyoverflow ("parser stack overflow", &yyss1, yysize * sizeof (*yyssp), &yyvs1, yysize * sizeof (*yyvsp), &yystacksize); yyss = yyss1; yyvs = yyvs1; } #else /* no yyoverflow */ # ifndef YYSTACK_RELOCATE goto yyoverflowlab; # else /* Extend the stack our own way. */ if (yystacksize >= YYMAXDEPTH) goto yyoverflowlab; yystacksize *= 2; if (yystacksize > YYMAXDEPTH) yystacksize = YYMAXDEPTH; { short *yyss1 = yyss; union yyalloc *yyptr = (union yyalloc *) YYSTACK_ALLOC (YYSTACK_BYTES (yystacksize)); if (! yyptr) goto yyoverflowlab; YYSTACK_RELOCATE (yyss); YYSTACK_RELOCATE (yyvs); # undef YYSTACK_RELOCATE if (yyss1 != yyssa) YYSTACK_FREE (yyss1); } # endif #endif /* no yyoverflow */ yyssp = yyss + yysize - 1; yyvsp = yyvs + yysize - 1; YYDPRINTF ((stderr, "Stack size increased to %lu\n", (size_t long int) yystacksize)); if (yyssp >= yyss + yystacksize - 1) YYABORT; } YYDPRINTF ((stderr, "Entering state %d\n", yystate)); goto yybackup; /*-----------. | yybackup. | `-----------*/ yybackup: /* Do appropriate processing given the current state. */ /* Read a lookahead token if we need one and don't already have one. */ /* yyresume: */ /* First try to decide what to do without reference to lookahead token. */ yyn = yypact[yystate]; if (yyn == YYPACT_NINF) goto yydefault; /* Not known => get a lookahead token if don't already have one. */ /* yychar is either YYEMPTY or YYEOF or a valid token in external form. */ if (yychar == YYEMPTY) { YYDPRINTF ((stderr, "Reading a token: ")); yychar = YYLEX; } /* Convert token to internal form (in yychar1) for indexing tables with. */ if (yychar <= 0) /* This means end of input. */ { yychar1 = 0; yychar = YYEOF; /* Don't call YYLEX any more. */ YYDPRINTF ((stderr, "Now at end of input.\n")); } else { yychar1 = YYTRANSLATE (yychar); /* We have to keep this `#if YYDEBUG', since we use variables which are defined only if `YYDEBUG' is set. */ YYDPRINTF ((stderr, "Next token is ")); YYDSYMPRINT ((stderr, yychar1, yylval)); YYDPRINTF ((stderr, "\n")); } /* If the proper action on seeing token YYCHAR1 is to reduce or to detect an error, take that action. */ yyn += yychar1; if (yyn < 0 || YYLAST < yyn || yycheck[yyn] != yychar1) goto yydefault; yyn = yytable[yyn]; if (yyn <= 0) { if (yyn == 0 || yyn == YYTABLE_NINF) goto yyerrlab; yyn = -yyn; goto yyreduce; } if (yyn == YYFINAL) YYACCEPT; /* Shift the lookahead token. */ YYDPRINTF ((stderr, "Shifting token %d (%s), ", yychar, yytname[yychar1])); /* Discard the token being shifted unless it is eof. */ if (yychar != YYEOF) yychar = YYEMPTY; *++yyvsp = yylval; /* Count tokens shifted since error; after three, turn off error status. */ if (yyerrstatus) yyerrstatus--; yystate = yyn; goto yynewstate; /*-----------------------------------------------------------. | yydefault -- do the default action for the current state. 
| `-----------------------------------------------------------*/ yydefault: yyn = yydefact[yystate]; if (yyn == 0) goto yyerrlab; goto yyreduce; /*-----------------------------. | yyreduce -- Do a reduction. | `-----------------------------*/ yyreduce: /* yyn is the number of a rule to reduce with. */ yylen = yyr2[yyn]; /* If YYLEN is nonzero, implement the default value of the action: `$$ = $1'. Otherwise, the following line sets YYVAL to garbage. This behavior is undocumented and Bison users should not rely upon it. Assigning to YYVAL unconditionally makes the parser a bit smaller, and it avoids a GCC warning that YYVAL may be used uninitialized. */ yyval = yyvsp[1-yylen]; #if YYDEBUG /* We have to keep this `#if YYDEBUG', since we use variables which are defined only if `YYDEBUG' is set. */ if (yydebug) { int yyi; YYFPRINTF (stderr, "Reducing via rule %d (line %d), ", yyn - 1, yyrline[yyn]); /* Print the symbols being reduced, and their result. */ for (yyi = yyprhs[yyn]; yyrhs[yyi] >= 0; yyi++) YYFPRINTF (stderr, "%s ", yytname[yyrhs[yyi]]); YYFPRINTF (stderr, " -> %s\n", yytname[yyr1[yyn]]); } #endif switch (yyn) { case 2: #line 11 "rr1" { cout << "empty sequence\n"; } break; case 4: #line 18 "rr1" { cout << "added word " << yyvsp[0] << endl; } break; case 5: #line 25 "rr1" { cout << "empty maybeword\n"; } break; case 6: #line 30 "rr1" { cout << "single word " << yyvsp[0] << endl; } break; } /* Line 1016 of /usr/share/bison/yacc.c. */ #line 931 "rr1.tab.c" yyvsp -= yylen; yyssp -= yylen; #if YYDEBUG if (yydebug) { short *yyssp1 = yyss - 1; YYFPRINTF (stderr, "state stack now"); while (yyssp1 != yyssp) YYFPRINTF (stderr, " %d", *++yyssp1); YYFPRINTF (stderr, "\n"); } #endif *++yyvsp = yyval; /* Now `shift' the result of the reduction. Determine what state that goes to, based on the state we popped back to and the rule number reduced by. */ yyn = yyr1[yyn]; yystate = yypgoto[yyn - YYNTOKENS] + *yyssp; if (0 <= yystate && yystate <= YYLAST && yycheck[yystate] == *yyssp) yystate = yytable[yystate]; else yystate = yydefgoto[yyn - YYNTOKENS]; goto yynewstate; /*------------------------------------. | yyerrlab -- here on detecting error | `------------------------------------*/ yyerrlab: /* If not already recovering from an error, report this error. */ if (!yyerrstatus) { ++yynerrs; #if YYERROR_VERBOSE yyn = yypact[yystate]; if (YYPACT_NINF < yyn && yyn < YYLAST) { YYSIZE_T yysize = 0; int yytype = YYTRANSLATE (yychar); char *yymsg; int yyx, yycount; yycount = 0; /* Start YYX at -YYN if negative to avoid negative indexes in YYCHECK. */ for (yyx = yyn < 0 ? -yyn : 0; yyx < (int) (sizeof (yytname) / sizeof (char *)); yyx++) if (yycheck[yyx + yyn] == yyx && yyx != YYTERROR) yysize += yystrlen (yytname[yyx]) + 15, yycount++; yysize += yystrlen ("parse error, unexpected ") + 1; yysize += yystrlen (yytname[yytype]); yymsg = (char *) YYSTACK_ALLOC (yysize); if (yymsg != 0) { char *yyp = yystpcpy (yymsg, "parse error, unexpected "); yyp = yystpcpy (yyp, yytname[yytype]); if (yycount < 5) { yycount = 0; for (yyx = yyn < 0 ? -yyn : 0; yyx < (int) (sizeof (yytname) / sizeof (char *)); yyx++) if (yycheck[yyx + yyn] == yyx && yyx != YYTERROR) { const char *yyq = ! yycount ? 
", expecting " : " or "; yyp = yystpcpy (yyp, yyq); yyp = yystpcpy (yyp, yytname[yyx]); yycount++; } } yyerror (yymsg); YYSTACK_FREE (yymsg); } else yyerror ("parse error; also virtual memory exhausted"); } else #endif /* YYERROR_VERBOSE */ yyerror ("parse error"); } goto yyerrlab1; /*----------------------------------------------------. | yyerrlab1 -- error raised explicitly by an action. | `----------------------------------------------------*/ yyerrlab1: if (yyerrstatus == 3) { /* If just tried and failed to reuse lookahead token after an error, discard it. */ /* Return failure if at end of input. */ if (yychar == YYEOF) { /* Pop the error token. */ YYPOPSTACK; /* Pop the rest of the stack. */ while (yyssp > yyss) { YYDPRINTF ((stderr, "Error: popping ")); YYDSYMPRINT ((stderr, yystos[*yyssp], *yyvsp)); YYDPRINTF ((stderr, "\n")); yydestruct (yystos[*yyssp], *yyvsp); YYPOPSTACK; } YYABORT; } YYDPRINTF ((stderr, "Discarding token %d (%s).\n", yychar, yytname[yychar1])); yydestruct (yychar1, yylval); yychar = YYEMPTY; } /* Else will try to reuse lookahead token after shifting the error token. */ yyerrstatus = 3; /* Each real token shifted decrements this. */ for (;;) { yyn = yypact[yystate]; if (yyn != YYPACT_NINF) { yyn += YYTERROR; if (0 <= yyn && yyn <= YYLAST && yycheck[yyn] == YYTERROR) { yyn = yytable[yyn]; if (0 < yyn) break; } } /* Pop the current state because it cannot handle the error token. */ if (yyssp == yyss) YYABORT; YYDPRINTF ((stderr, "Error: popping ")); YYDSYMPRINT ((stderr, yystos[*yyssp], *yyvsp)); YYDPRINTF ((stderr, "\n")); yydestruct (yystos[yystate], *yyvsp); yyvsp--; yystate = *--yyssp; #if YYDEBUG if (yydebug) { short *yyssp1 = yyss - 1; YYFPRINTF (stderr, "Error: state stack now"); while (yyssp1 != yyssp) YYFPRINTF (stderr, " %d", *++yyssp1); YYFPRINTF (stderr, "\n"); } #endif } if (yyn == YYFINAL) YYACCEPT; YYDPRINTF ((stderr, "Shifting error token, ")); *++yyvsp = yylval; yystate = yyn; goto yynewstate; /*-------------------------------------. | yyacceptlab -- YYACCEPT comes here. | `-------------------------------------*/ yyacceptlab: yyresult = 0; goto yyreturn; /*-----------------------------------. | yyabortlab -- YYABORT comes here. | `-----------------------------------*/ yyabortlab: yyresult = 1; goto yyreturn; #ifndef yyoverflow /*----------------------------------------------. | yyoverflowlab -- parser overflow comes here. | `----------------------------------------------*/ yyoverflowlab: yyerror ("parser stack overflow"); yyresult = 2; /* Fall through. 
*/ #endif yyreturn: #ifndef yyoverflow if (yyss != yyssa) YYSTACK_FREE (yyss); #endif return yyresult; } #line 9 "rr1" bisonc++-4.13.01/documentation/manual/algorithm/examples/dangling0000644000175000017500000000016112633316117023710 0ustar frankfrank%token IF ELSE VAR %% stmt: VAR ';' | IF '(' VAR ')' stmt | IF '(' VAR ')' stmt ELSE stmt ; bisonc++-4.13.01/documentation/manual/algorithm/examples/Parser.h0000644000175000017500000000120112633316117023603 0ustar frankfrank#ifndef Parser_h_included #define Parser_h_included // for error()'s inline implementation #include // $insert baseclass #include "Parserbase.h" #undef Parser class Parser: public ParserBase { public: int parse(); private: void error(char const *msg) { std::cerr << msg << '\n'; } // $insert lex int lex(); void print() // d_token, d_loc {} // support functions for parse(): void executeAction(int d_production); size_t errorRecovery(); int lookup(int token); int nextToken(); }; #endif bisonc++-4.13.01/documentation/manual/algorithm/examples/peculiar0000644000175000017500000000042212633316117023731 0ustar frankfrank %token ID %left '-' %left '*' %right UNARY %% expr: expr '-' term | term ; term: term '*' factor | factor ; factor: '-' expr %prec UNARY | ID ; bisonc++-4.13.01/documentation/manual/algorithm/examples/noshiftreduce0000644000175000017500000000017112633316117024770 0ustar frankfrank%token ID %left '*' %right '-' %% term: term '*' term | ID | '-' term ; bisonc++-4.13.01/documentation/manual/algorithm/ruleprec.yo0000644000175000017500000000467612633316117022575 0ustar frankfrankConsider the following (somewhat peculiar) grammar: verbinclude(examples/peculiar) Even though operator precedence and association rules are used the grammar still displays a shift/reduce conflict. One of the grammar's states consists of the following two items: verb( 0: expr -> term . 1: term -> term . '*' factor ) and b() reduces to item 0, dropping item 1 rather than shifting a tt('*') and proceeding with item 0. When considering states where shift/reduce conflicts are encountered the `shiftable' items of these states shift when encountering terminal tokens that are also in the follow sets of the reducible items of these states. In the above example item 1 shifts when tt('*') is encountered, but tt('*') is also an element of the set of look-ahead tokens of item 0. B() must now decide what to do. In cases we've seen earlier b() could make the decision because the reducible item itself had a well known precedence. The precedence of a reducible item is defined as the precedence of the left-hand side non-terminal of the production rule to which the reducible item belongs. Item 0 in the above example is an item of the rule tt(expr -> term). The precedence of a production rule is defined as follows: itemization( it() If tt(%prec) is used then the precedence of the production rule is equal to the precedence of the terminal that is specified with the tt(%prec) directive; it() If tt(%prec) is not used then the production rule's precedence is equal to the precedence of the first terminal token that is used in the production rule; it() In all other cases the production rule's precedence is set to the maximum possible precedence. ) Since tt(expr -> term) does not contain a terminal token and does not use tt(%prec), its precedence is the maximum possible precedence. Consequently in the above state the shift/reduce conflict is solved by em(reducing) rather than shifting. Some final remark as to why the above grammar is peculiar. 
It is peculiar as it combines precedence and association specifying directives with auxiliary non-terminals that may be useful conceptually (or when implementing an expression parser `by hand') but which are not required when defining grammars for b(). The following grammar does not use tt(term) and tt(factor) but recognizes the same grammar as the above `peculiar' grammar without reporting any shift/reduce conflict: verbinclude(examples/notpeculiar) bisonc++-4.13.01/documentation/manual/algorithm/whenprec.yo0000644000175000017500000000344112633316117022554 0ustar frankfrankConsider the following ambiguous grammar fragment (ambiguous because the input `tt(1 - 2 * 3)' can be parsed in two different ways): verb( expr: expr '-' expr | expr '*' expr | expr '<' expr | '(' expr ')' ... ; ) Suppose the parser has seen the tokens `tt(1)', `tt(-') and `tt(2)'; should it reduce them via the rule for the addition operator? It depends on the next token. Of course, if the next token is `CLOSEPAR', we must reduce; shifting is invalid because no single rule can reduce the token sequence `tt(- 2) CLOSEPAR' or anything starting with that. But if the next token is `tt(*)' or `tt(<)', we have a choice: either shifting or reduction would allow the parse to complete, but with different results. To decide which one b() should do, we must consider the results. If the next operator token tt(op) is shifted, then it must be reduced first in order to permit another opportunity to reduce the sum. The result is (in effect) `tt(1 - (2 op 3))'. On the other hand, if the subtraction is reduced before shifting tt(op), the result is `tt((1 - 2) op 3)'. Clearly, then, the choice of shift or reduce should depend on the relative precedence of the operators `tt(-)' and tt(op): `tt(*)' should be shifted first, but not `tt(<)'. What about input such as `tt(1 - 2 - 5)'; should this be `tt((1 - 2) - 5)' or should it be `tt(1 - (2 - 5))'? For most operators we prefer the former, which is called em(left association). The latter alternative, em(right association), is desirable for, e.g., assignment operators. The choice of left or right association is a matter of whether the parser chooses to shift or reduce when the stack contains `tt(1 - 2)' and the look-ahead token is `tt(-)': shifting results in right-associativity. bisonc++-4.13.01/documentation/manual/algorithm/states.yo0000644000175000017500000001243512633316117022247 0ustar frankfrankHaving determined the tt(FIRST) set, b() determines the em(states) of the grammar. The analysis starts at the augmented grammar rule and proceeds until all possible states have been determined. In this analysis the concept of the em(dot) symbol is used. The em(dot) shows the position we are at when analyzing production rules defined by a grammar. Using the provided example grammar the analysis proceeds as follows: itemization( it() State 0: tt(start_$ -> . start)nl() At this point we haven't seen anything yet. The em(dot) is before the grammar's start symbol. The above is called an em(item) and the initial set of items of a state is called the set of em(kernel items). Except for the start rule, kernel items never have a dot before the very first symbol of a rule. In this particular state there's only one kernel item. Items are indexed, so this item receives index 0. Beyond, item indices are shown together with the items themselves. If a non-terminal follows immediately to the right of the dot, then all production rules of that non-terminal are added to the state as non-kernel items. 
Non-kernel items always have their dots at the very first position. Adding non-kernel items is a recursive process. If the rules of thus added items also show non-terminals to the right of the dot, then the production rules of those non-terminals are added too (unless they were already added). The above kernel item results in the addition of the following non-kernel items: itemization( it() item 1: tt(start -> . start expr) it() item 2: tt(start -> . ) ) From each of the items new states may be derived. New states are reached when the symbol to the right of the dot has been recognized. In that case a em(transition) (a em(goto)) to the next state takes place, where the dot has moved one postition to the right, defining a kernel item of the new state. Once the dot has reached the end of the rule, a em(reduction) may take place. Following a reduction a transition based on the em(Left Hand Side) (tt(LHS)) of the reduced production rule is performed. This procedure is discussed in more detail in section ref(PARSING). Looking at the current state's items, two actions are possible: itemization( it() On tt(start), to a state in which tt(start) has been seen (state 1) it() By default, a reduction by the rule tt(start -> . ) ) it() State 1: kernel items: itemization( it() item 0: tt(start_$ -> start .) it() item 1: tt(start -> start . expr) ) Since tt(expr) is a non-terminal to the right of the dot, we add all tt(expr) rules as this state's non-kernel items: itemization( it() item 2: tt(expr -> . NR) it() item 3: tt(expr -> . expr '+' expr) ) This state becomes the em(accepting) state: if EOF is reached in this state, the tt(start_$) rule has been recognized, and so the input was syntactically correct. But in this state transitions to other states are also possible: itemization( it() On tt(expr) to state 2 it() On NR to state 3 ) it() State 2: kernel items: itemization( it() item 0: tt(start -> start expr .) it() item 1: tt(expr -> expr . '+' expr) ) No non-terminal symbols appear to the right of the dots in these items, so no non-kernel items are added to this state. Transitions from this state are: itemization( it() On tt('+') to state 4 it() Or reduce to tt(start) according to its first item (removing two elements from the parser's stack). ) it() State 3: kernel items: itemization( it() item 0: tt(expr -> NR .) ) In this state only one action is possible: a reduction to tt(expr) (removing one element from the parser's stack). it() State 4: kernel item: itemization( it() item 0: tt(expr -> expr '+' . expr) ) Since tt(expr) is a non-terminal to the right of the dot, we add all tt(expr) rules as this state's non-kernel items: itemization( it() item 1: tt(expr -> . NR) it() item 2: tt(expr -> . expr '+' expr) ) In this state the following transitions are possible: itemization( it() On tt(expr) to state 5 it() On tt(NR) we reach the situation tt(expr -> NR .) which has already been encountered at state 3. That's OK, so on tt(NR) there is a transition to state 3. ) Note that in order to return to a previously defined state that state must have exactly the required kernel items. So, if state 3 would contain multiple kernel items, a new state would have been required, merely having the tt(expr -> NR .) kernel item. it() State 5: kernel items: itemization( it() item 0: tt(expr -> expr '+' expr .) it() item 1: tt(expr -> expr . 
'+' expr) ) In this state two actions are possible: itemization( it() On tt('+') to state 4 it() Or reduce to tt(expr) according to its first item (removing three elements from the parser's stack). ) As explained in the next section this state's first action is never selected: in this state only the reduction is selected. ) bisonc++-4.13.01/documentation/manual/algorithm/reduce.yo0000644000175000017500000000552612633316117022216 0ustar frankfrankA em(reduce/reduce conflict) occurs if there are two or more rules that apply to the same sequence of input. This usually indicates a serious error in the grammar. For example, here is an erroneous attempt to define a sequence of zero or more word groupings: verbinsert(-as4)(examples/rr1) The error is an ambiguity: there is more than one way to parse a single word into a sequence. It could be reduced to a maybeword and then into a sequence via the second rule. Alternatively, nothing-at-all could be reduced into a sequence via the first rule, and this could be combined with the word using the third rule for sequence. There is also more than one way to reduce nothing-at-all into a sequence. This can be done directly via the first rule, or indirectly via maybeword and then the second rule. You might think that this is a distinction without a difference, because it does not change whether any particular input is valid or not. But it does affect which actions are run. One parsing order runs the second rule's action; the other runs the first rule's action and the third rule's action. In this example, the output of the program changes. B() resolves a reduce/reduce conflict by choosing to use the rule that appears first in the grammar, but it is very risky to rely on this. Every reduce/reduce conflict must be studied and usually eliminated. Here is the proper way to define sequence: verb( sequence: // empty { printf ("empty sequence\n"); } | sequence word { printf ("added word %s\n", $2); } ; ) Here is another common error that yields a reduce/reduce conflict: verb( sequence: // empty | sequence words | sequence redirects ; words: // empty | words word ; redirects: // empty | redirects redirect ; ) The intention here is to define a sequence which can contain either word or redirect groupings. The individual definitions of sequence, words and redirects are error-free, but the three together make a subtle ambiguity: even an empty input can be parsed in infinitely many ways! Consider: nothing-at-all could be a words. Or it could be two words in a row, or three, or any number. It could equally well be a redirects, or two, or any number. Or it could be a words followed by three redirects and another words. And so on. Here are two ways to correct these rules. First, to make it a single level of sequence: verb( sequence: // empty | sequence word | sequence redirect ; ) Second, to prevent either a words or a redirects from being empty: verb( sequence: // empty | sequence words | sequence redirects ; words: word | words word ; redirects: redirect | redirects redirect ; ) bisonc++-4.13.01/documentation/manual/algorithm/transition.yo0000644000175000017500000000630312633316117023133 0ustar frankfrankOnce b() has successfully analyzed the grammar it generates the tables that are used by the parsing function to parse input according to the provided grammar. Each state results in a em(state transition table). For the example grammar used so far there are five states. Each table consists of rows having two elements. 
The meaning of the elements depends on their position in the table. itemization( it() For the em(first) row, itemization( it() the first element indicates the em(type) of the state. The following types are recognized: table(2)(ll)( row(cell(NORMAL)cell(Despite its name, it's not used)) row(cell(ERR_ITEM)cell(The state allows error recovery)) row(cell(REQ_TOKEN)cell(The state requires a token nl() (which may already be available))) row(cell(ERR_REQ)cell(combines ERR_ITEM and REQ_TOKEN)) row(cell(DEF_RED)cell(This state has a default reduction)) row(cell(ERR_DEF)cell(combines ERR_ITEM and DEF_RED)) row(cell(REQ_DEF)cell(combines REQ_TOKEN and DEF_RED)) row(cell(ERR_REQ_DEF)cell(combines ERR_ITEM, REQ_TOKEN and DEF_RED)) ) it() the second element indicates the index of the table's last element. ) it() For the em(last) row, itemization( it() the first element stores the current token (it is not used when the option tt(--thread-safe) was specified) it() the second element defines the action to perform. A positive value indicates a shift to the indicated state; a negative value a reduction according to the indicated rule number, disregarding its sign (note that it's rule em(number), rather than rule em(offset); zero indicates the input is accepted as correct according to the parser's grammar. ) it() For all intermediate remaining rows: the first element stores the value of a required token for the action specified in the second element, similar to the way an action is specified in the last row. Symbolic values (like tt(PARSE_ACCEPT) rather than 0) may be used as well. ) ) Here are the tables defining the five states of the example grammar as they are generated by b() in the file containing the parsing function: verb( SR__ s_0[] = { { { DEF_RED}, { 2} }, { { 258}, { 1} }, // start { { 0}, { -2} }, }; SR__ s_1[] = { { { REQ_TOKEN}, { 4} }, { { 259}, { 2} }, // expr { { 257}, { 3} }, // NR { { _EOF_}, { PARSE_ACCEPT} }, { { 0}, { 0} }, }; SR__ s_2[] = { { { REQ_DEF}, { 2} }, { { 43}, { 4} }, // '+' { { 0}, { -1} }, }; SR__ s_3[] = { { { DEF_RED}, { 1} }, { { 0}, { -3} }, }; SR__ s_4[] = { { { REQ_TOKEN}, { 3} }, { { 259}, { 5} }, // expr { { 257}, { 3} }, // NR { { 0}, { 0} }, }; SR__ s_5[] = { { { REQ_DEF}, { 1} }, { { 0}, { -4} }, }; ) bisonc++-4.13.01/documentation/manual/algorithm/first.yo0000644000175000017500000000407312633316117022072 0ustar frankfrankThe tt(FIRST) set defines all terminal tokens that can be encountered when beginning to recognize a grammatical symbol. For each grammatical symbol (terminal and nonterminal) a tt(FIRST) set can be determined as follows: itemization( it() The tt(FIRST) set of a terminal symbol is the symbol itself. it() The tt(FIRST) set of an empty alternative is the empty set. The empty set is indicated by epsilon() and is considered an actual element of the tt(FIRST) set (So, a tt(FIRST) set could contain two elements: tt('+') em(and) epsilon()). it() If X has a production rule tt(X: X1 X2 X3..., Xi, ...Xn), then initialize fst(X) to empty (i.e., not even holding epsilon()). Then, for each Xi (1..n): itemization( it() add fst(Xi) to fst(X) it() stop when fst(Xi) does not contain epsilon() ) If fst(Xn) does not contain epsilon() remove epsilon() from fst(X) (unless analyzing another production rule) epsilon() is already part of fst(X). ) When starting this algorithm, only the nonterminals need to be considered. Also, required tt(FIRST) sets may not yet be available. Therefore the above algorithm iterates over all nonterminals until no changes were observed. 
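The iteration described above can also be expressed in code. The following
program is merely an illustrative sketch: it is not part of b()'s sources,
and the tt(Rule) struct, the tt(isTerminal) helper and the tt((e)) epsilon
marker are assumptions made purely for this example. The sketch repeatedly
applies the above rules to the example grammar until none of the tt(FIRST)
sets changes anymore:
        verb(
    #include <iostream>
    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    struct Rule                     // one alternative of a production rule
    {
        std::string lhs;
        std::vector<std::string> rhs;   // empty vector: empty alternative
    };

    bool isTerminal(std::string const &symbol,
                    std::vector<Rule> const &rules)
    {
        for (auto const &rule: rules)   // nonterminals appear as an LHS
            if (rule.lhs == symbol)
                return false;
        return true;
    }

    int main()
    {
        std::vector<Rule> rules         // the example grammar
        {
            { "start_$",    { "start" } },
            { "start",      { "start", "expr" } },
            { "start",      { } },                  // empty alternative
            { "expr",       { "NR" } },
            { "expr",       { "expr", "'+'", "expr" } },
        };

        std::map<std::string, std::set<std::string>> first;

        bool changed = true;
        while (changed)                 // iterate until a fixed point
        {
            changed = false;
            for (auto const &rule: rules)
            {
                std::set<std::string> &fstLhs = first[rule.lhs];
                size_t before = fstLhs.size();

                bool allEpsilon = true; // remains true for empty RHSs
                for (auto const &sym: rule.rhs)
                {
                    if (isTerminal(sym, rules))
                    {
                        fstLhs.insert(sym);     // FIRST(terminal) is the
                        allEpsilon = false;     // terminal itself
                    }
                    else
                    {
                        for (auto const &token: first[sym])
                            if (token != "(e)")     // add FIRST(sym),
                                fstLhs.insert(token);   // except epsilon
                        allEpsilon = first[sym].count("(e)") != 0;
                    }
                    if (!allEpsilon)    // stop at the first symbol whose
                        break;          // FIRST set lacks epsilon
                }
                if (allEpsilon)         // all RHS symbols may produce
                    fstLhs.insert("(e)");   // empty: keep epsilon

                changed = changed || fstLhs.size() != before;
            }
        }

        for (auto const &entry: first)  // show the computed FIRST sets
        {
            std::cout << entry.first << ": { ";
            for (auto const &token: entry.second)
                std::cout << token << ' ';
            std::cout << "}\n";
        }
    }
        )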
In the algorithm tt($) is not considered. Applying the above algorithm to the rules of our grammar we get: table(4)(llll)( rowline() fstrow(nonterminal)(rule)(FIRST set) rowline() fstrow(tt(start_$)) (tt(start)) (not yet available) fstrow(tt(start)) (tt(start expr)) (not yet available) fstrow(tt(start)) (tt(// empty)) (epsilon()) fstrow(tt(expr)) (tt(NR)) (tt(NR)) fstrow(tt(expr)) (tt(expr '+' expr)) (tt(NR)) rowline() row(cell(changes in the next cycle:)) fstrow(tt(start)) (tt(start expr)) (tt(NR) epsilon()) fstrow(tt(start)) (tt(// empty)) (tt(NR) epsilon()) rowline() row(cell(changes in the next cycle:)) fstrow(tt(start_$)) (tt(start)) (tt(NR) epsilon()) rowline() row(cell(no further changes)) ) bisonc++-4.13.01/documentation/manual/algorithm/input.yo0000644000175000017500000001535212633316117022104 0ustar frankfrankB() implements the parsing function in the member function tt(parse()). This function obtains its tokens from the member tt(lex()) and processes all tokens until a syntactic error, a non-recoverable error, or the end of input is encountered. The algorithm used by tt(parse()) is the same, irrespective of the used grammar. In fact, the tt(parse()) member's behavior is completely determined by the tables generated by b(). The parsing algorithm is known as the em(shift-reduce) (S/R) algorithm, and it allows tt(parse()) to perform two actions while processing series of tokens: itemization( it() When a token is received in a state in which that token is required for a transition to another state (e.g., a tt(NR) token is observed in state 1 of the example's grammar) a transition to state 3 is performed. it() When a state is reached which calls for a (default) reduction (e.g., state 3 of the example's grammar) a reduction is performed. ) The parsing function maintains two stacks, which are manipulated by the above two actions: a state stack and a value stack. These stacks are not accessible to the parser: they are private data structures defined in the parser's base class. The parsing member tt(parse()) may use the following member functions to manipulate these stacks: itemization( itt(push__(stateIdx)) pushes tt(stateIdx) on the state stack and pushes the current semantic value (i.e., tt(LTYPE_ d_val__)) on the value stack; itt(pop__(size_t count = 1)) removes tt(count) elements from the two stacks; itt(top__()) returns the state currently on top of the state stack; ) Apart from the state- and semantic stacks, the S/R algorithm itself sometimes needs to push a token on a two-element stack. Rather than using a formal stack, two variables (tt(d_token__) and tt(d_nextToken__)) are used to implement this little token-stack. The member function tt(pushToken__()) pushes a new value on the token stack, the member tt(popToken__()) pops a previously pushed value from the token stack. At any time, tt(d_token__) contains the topmost element of the token stack. The member tt(nextToken()) determines the next token to be processed. If the token stack contains a value it is returned. Otherwise, tt(lex()) is called to obtain the next token to be pushed on the token stack. The member tt(lookup()) looks up the current token in the current state's tt(SR__) table. For this a simple linear search algorithm is used. If searching fails to find an action for the token an tt(UNEXPECTED_TOKEN__) exception is thrown, which starts the error recovery. If an action was found, it is returned. Rules may have actions associated with them. 
These actions are executed when a grammatical rule has been completely recognized. This is always at the end of a rule: mid-rule actions are converted by b() into pseudo nonterminals, replacing mid-rule action blocks by these pseudo nonterminals. The pseudo nonterminals show up in the verbose grammar output as rules having LHSs starting with tt(#). So, once a rule has been recognized its action (if defined) is executed. For this the member function tt(executeAction()) is available. Finally, the token stack can be cleared using the member tt(clearin()). Now that the relevant support functions have been introduced, the S/R algorithm itself turns out to be a fairly simple algorithm. First, the parser's stack is initialized with state 0 and the token stack is cleared. Then, in a never ending loop: itemization( it() If a state needs a token (i.e., tt(REQ_TOKEN) has been specified for that state), tt(nextToken()) is called to obtain the next token; it() From the token and the current state tt(lookup()) determines the next action; it() If a shifting action was called for the next state is pushed on the stack and the token is popped off the token stack. it() If a reduction was called for that rule's action block is executed followed by a reduction of the production rule (performed by tt(reduce__())): the semantic and state stacks are reduced by the number of elements found in that production rule, and the production rule's LHS is pushed on the token stack it() If the state/token combination indicates that the input is accepted (normally: when tt(EOF) is encountered in state 1) then the parsing function terminates, returning 0. ) The following table shows the S/R algorithm in action when the example grammar is given the input tt(3 + 4 + 5). The first column shows the (remaining) input, the second column the current token stack (with tt(-) indicating an empty token stack), the third column the state stack. The fourth column provides a short description. The leftmost elements of the stacks represent the tops of the stacks. The information shown below is also (in more elaborate form) shown when the tt(--debug) option is provided to B() when generating the parsing function. 
table(7)(rllrlll)( rowline() row(cell(remaining input)cell( )cell(token stack)cell( )cell(state stack) cell( )cell(description)) rowline() ttrow(3 + 4 + 5) (-) ( 0) (initialization) ttrow(3 + 4 + 5) (start) ( 0) (reduction by rule 2) ttrow(3 + 4 + 5) (-) ( 1 0) (shift `start') ttrow( + 4 + 5) (NR) ( 1 0) (obtain NR token) ttrow( + 4 + 5) (-) ( 3 1 0) (shift NR) ttrow( + 4 + 5) (expr) ( 1 0) (reduction by rule 3) ttrow( + 4 + 5) (-) ( 2 1 0) (shift `expr') ttrow( 4 + 5) (+) ( 2 1 0) (obtain `+' token) ttrow( 4 + 5) (-) ( 4 2 1 0) (shift `+') ttrow( + 5) (NR) ( 4 2 1 0) (obtain NR token) ttrow( + 5) (-) (3 4 2 1 0) (shift NR) ttrow( + 5) (expr) ( 4 3 1 0) (reduction by rule 3) ttrow( + 5) (-) (5 4 3 1 0) (shift `expr') ttrow( 5) (+) (5 4 3 1 0) (obtain `+' token) ttrow( 5) (expr +) ( 1 0) (reduction by rule 4) ttrow( 5) (+) ( 2 1 0) (shift `expr') ttrow( 5) (-) ( 4 2 1 0) (shift '+') ttrow( ) (NR) ( 4 2 1 0) (obtain NR token) ttrow( ) (-) (3 4 2 1 0) (shift NR) ttrow( ) (expr) ( 4 2 1 0) (reduction by rule 3) ttrow( ) (-) (5 4 2 1 0) (shift `expr') ttrow( ) (EOF) (5 4 2 1 0) (obtain EOF) ttrow( ) (expr EOF) ( 1 0) (reduction by rule 4) ttrow( ) (EOF) ( 2 1 0) (shift `expr') ttrow( ) (start EOF) ( 2 1 0) (reduction by rule 1) ttrow( ) (EOF) ( 1 0) (shift `start') ttrow( ) (EOF) ( 1 0) (ACCEPT) rowline() ) bisonc++-4.13.01/documentation/manual/algorithm/example/0000755000175000017500000000000012633316117022021 5ustar frankfrankbisonc++-4.13.01/documentation/manual/algorithm/example/grammar.output0000644000175000017500000000444012633316117024733 0ustar frankfrank Production Rules: 1: start -> start expr 2: start -> 3: expr -> NR 4: expr -> expr '+' expr 5: start_$ -> start Symbolic Terminal tokens: error EOF 257: NR 43: '+' FIRST sets: start: { NR } expr: { NR } start_$: { NR } FOLLOW sets: start: { NR } expr: { NR '+' } start_$: { } Grammar States: For each state information like the following is shown for its items: 0: [P1 1] S -> C . C { } 0 which should be read as follows: 0: The item's index [P1 1]: The rule (production) number and current dot-position S -> C . C: The item (lhs -> Recognized-symbols . symbols-to-recognize) { } The item's lookahead (LA) set 0 The next-element (shown below the items) describing the action associated with this item (-1 for reducible items) The Next tables show entries like: 0: On C to state 5 with (0 ) meaning: 0: The Next table's index On C to state 5: When C was recognized, continue at state 5 with (0 ) The item(s) whose dot is shifted at the next state Also, reduction item(s) may be listed State 0: 0: [P5 0] start_$ -> . start { } 0 1: [P1 0] start -> . start expr { NR } 0 2: [P2 0] start -> . { NR } 1, () -1 0: On start to state 1 with (0 1 ) Reduce item(s): 2 State 1: 0: [P5 1] start_$ -> start . { } -1 1: [P1 1] start -> start . expr { NR } 0 2: [P3 0] expr -> . NR { NR '+' } 1 3: [P4 0] expr -> . expr '+' expr { NR '+' } 0 0: On expr to state 2 with (1 3 ) 1: On NR to state 3 with (2 ) State 2: 0: [P1 2] start -> start expr . { NR } -1 1: [P4 1] expr -> expr . '+' expr { NR '+' } 0 0: On '+' to state 4 with (1 ) Reduce item(s): 0 State 3: 0: [P3 1] expr -> NR . { NR '+' } -1 Reduce item(s): 0 State 4: 0: [P4 2] expr -> expr '+' . expr { NR '+' } 0 1: [P3 0] expr -> . NR { NR '+' } 1 2: [P4 0] expr -> . expr '+' expr { NR '+' } 0 0: On expr to state 5 with (0 2 ) 1: On NR to state 3 with (1 ) State 5: 0: [P4 3] expr -> expr '+' expr . { NR '+' } -1 1: [P4 1] expr -> expr . 
'+' expr { NR '+' } 0 0: Reduce item(s): 0 bisonc++-4.13.01/documentation/manual/algorithm/example/grammar0000644000175000017500000000014512633316117023372 0ustar frankfrank%token NR %left '+' %% start: start expr | // empty ; expr: NR | expr '+' expr ; bisonc++-4.13.01/documentation/manual/algorithm/example/Parser.h0000644000175000017500000000202512633316117023425 0ustar frankfrank#ifndef Parser_h_included #define Parser_h_included // $insert baseclass #include "Parserbase.h" #include #include #undef Parser class Parser: public ParserBase { public: int parse(); private: void error(char const *msg); // called on (syntax) errors int lex(); // returns the next token from the // lexical scanner. void print(); // use, e.g., d_token, d_loc // support functions for parse(): void executeAction(int ruleNr); void errorRecovery(); int lookup(bool recovery); void nextToken(); }; inline void Parser::error(char const *msg) { std::cerr << msg << '\n'; } // $insert lex inline void Parser::print() // use d_token, d_loc {} inline int Parser::lex() { std::string word; std::cin >> word; if (std::cin.eof()) return 0; if (isdigit(word[0])) return NR; return word[0]; } #endif bisonc++-4.13.01/documentation/manual/algorithm/example/demo.cc0000644000175000017500000000011412633316117023250 0ustar frankfrank#include "Parser.h" int main() { Parser parser; parser.parse(); } bisonc++-4.13.01/documentation/manual/algorithm/lookahead.yo0000644000175000017500000000440412633316117022670 0ustar frankfrankThe b() parser does em(not) always perform a reduction when a state is reached where an item has its dot position beyond the last element of its production rule. For most languages such a simple strategy is incorrect. Instead, when a reduction is possible, the parser sometimes `looks ahead' to the next token to decide what to do. Whenever a token is read, it is not immediately shifted; first it becomes the em(look-ahead) token, which is not yet shifted on the stack. This allows the parser to perform one or more reductions, with the look-ahead token still waiting to be processed. Only when all available reductions have been performed the look-ahead token is shifted on the stack. The phrase `all em(available) reductions' does not necessarily mean all em(possible) reductions. Depending on the look-ahead token, a shift rather than a reduce may be performed in states in which both actions are possible. Here is a simple case where a look-ahead token is required. The production rules define expressions which may contain binary addition operators and postfix unary factorial operators (`tt(!)'), as well as parentheses for grouping expressions: verb( expr: term '+' expr | term ; term: '(' expr ')' | term '!' | NUMBER ; ) Suppose that the tokens `tt(1 + 2)' have been read and shifted; what should be done? If the following token is `CLOSEPAR', then the first three tokens must be reduced, forming an tt(expr). This is the only valid course, because shifting the `CLOSEPAR' would produce the sequence of symbols verb( term 'CLOSEPAR' ) which is not syntactically correct. But if the next token is `tt(!)', then that token must be shifted so that `tt(2 !)' can be reduced to recognize a tt(term). If in this case the parser would perform a reduction then `tt(1 + 2)' would become an tt(expr). In that case the `tt(!)' can't be shifted because doing so would result in the sequence verb( expr '!' ) which is also syntactically incorrect. 
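Summarizing these two cases (the tokens tt(1 + 2) having been read), the look-ahead token determines the parser's next step:
        verb(
    look-ahead token    resulting action
    ----------------    -------------------------------------------------
    CLOSEPAR            reduce: `1 + 2' is reduced to an expr
    '!'                 shift:  after the shift, `2 !' is reduced to a term
        )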
COMMENT( As a technical aside: the current look-ahead token is stored in the parser's private data member tt(d_token). This data member is not normally modified by user-defined member functions. See section ref(SPECIAL). END) bisonc++-4.13.01/documentation/manual/directives/0000755000175000017500000000000012633316117020541 5ustar frankfrankbisonc++-4.13.01/documentation/manual/directives/polymorphic.yo0000644000175000017500000000170612633316117023463 0ustar frankfrank Syntax: bf(%polymorphic) tt(polymorphic-specification(s)) The tt(%polymorphic) directive is used by bic() to define a polymorphic semantic value class, which can be used as a (preferred) alternative to (traditional) tt(union) types. Refer to section ref(POLYMORPHIC) for a detailed description of the specification, characteristics, and use of polymorphic semantic values as defined by bic(). As a quick reference: to define multiple semantic values using a polymorphic semantic value class offering either an tt(int), a tt(std::string) or a tt(std::vector) specify: verb( %polymorphic INT: int; STRING: std::string; VECT: std::vector ) and use tt(%type) specifications (cf. section ref(TYPE)) to associate (non-)terminals with specific semantic values. bisonc++-4.13.01/documentation/manual/directives/scanner.yo0000644000175000017500000000142212633316117022542 0ustar frankfrankSyntax: bf(%scanner) tt(header)nl() Use tt(header) as the pathname of a file to include in the parser's class header. See the description of the link(--scanner)(SCANOPT) option for details about this option. This directive also implies the automatic definition of a composed tt(Scanner d_scanner) data member into the generated parser, as well as a predefined bf(int lex()) member, returning tt(d_scanner.lex()). By specifying the tt(%flex) directive the function tt(d_scanner.YYText()) is called. The specfied tt(header) file will be surrounded by double quotes if no delimiters were provided. If pointed brackets (tt(<...>)) are used, they are kept. It is an error if this directive is used and an already existing parser-class header file does not include `tt(header)'. bisonc++-4.13.01/documentation/manual/directives/scannerclassname.yo0000644000175000017500000000100012633316117024421 0ustar frankfrankSyntax: bf(%scanner-class-name) tt(scannerClassName) nl() Defines the name of the scanner class, declared by the tt(pathname) header file that is specified at the tt(scanner) option or directive. By default the class name ttt(Scanner) is used. It is an error if this directive is used and either the tt(scanner) directive was not provided, or the parser class interface in an already existing parser class header file does not declare a scanner class tt(d_scanner) object. bisonc++-4.13.01/documentation/manual/directives/negative.yo0000644000175000017500000000112412633316117022712 0ustar frankfrankSyntax: bf(%negative-dollar-indices) nl() Accept (do not generate warnings) zero- or negative dollar-indices in the grammar's action blocks. Zero or negative dollar-indices are commonly used to implement inherited attributes and should normally be avoided. When used they can be specified like tt($-1), or like tt($-1), where tt(type) is empty; an tt(STYPE__) tag; or a tt(%union) field-name. See also the sections ref(ACTIONS) and ref(SPECIAL). In combination with the tt(%polymorphic) directive (see below) only the tt($-i) format can be used (see also section ref(POLYMORPHIC)). 
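As a purely hypothetical illustration, assuming plain tt(int) semantic values, a user-provided function tt(declare()), and a terminal token tt(IDENTIFIER): in the following rules the zero dollar-index refers to the semantic value of the tt(type) symbol which, when tt(vars) is reduced, is the stack element just below the elements of the tt(vars) rule:
        verb(
    decl:
        type vars
    ;
    vars:
        vars ',' IDENTIFIER
        {
            declare($3, $0);    // $0: the semantic value of `type'
        }
    |
        IDENTIFIER
        {
            declare($1, $0);    // idem
        }
    ;
        )
As stated above, such constructions are normally better avoided: they silently assume that tt(vars) is only ever used in contexts where a tt(type) element indeed precedes it on the parser's stack.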
bisonc++-4.13.01/documentation/manual/directives/scannermatchedtextfunction.yo0000644000175000017500000000144212633316117026545 0ustar frankfrankSyntax: bf(%scanner-matched-text-function) tt(function-call) nl() The tt(%scanner-matched-text-function) directive defines the scanner function returning the text matching the previously returned token. By default this is tt(d_scanner.matched()). A complete function call expression should be provided (including a scanner object, if used). This option overrules the tt(d_scanner.matched()) call used by default when the tt(%scanner) directive is specified. Example: verb( %scanner-matched-text-function myScanner.matchedText() ) If the function call expression contains white space then the tt(function-call) specification should be surrounded by double quotes (tt(")). This directive is overruled by the bf(--scanner-matched-text-function) command-line option. bisonc++-4.13.01/documentation/manual/directives/targetdir.yo0000644000175000017500000000046412633316117023103 0ustar frankfrankSyntax: bf(%target-directory) tt(pathname) nl() tt(Pathname) defines the directory where generated files should be written. By default this is the directory where bic() is called. This directive is overruled by the tt(--target-directory) command-line option. bisonc++-4.13.01/documentation/manual/directives/include.yo0000644000175000017500000000235112633316117022536 0ustar frankfrankSyntax: bf(%include) tt(pathname)nl() This directive is used to switch to tt(pathname) while processing a grammar specification. Unless tt(pathname) defines an absolute file-path, tt(pathname) is searched relative to the location of bic()'s main grammar specification file (i.e., the grammar file that was specified as bic()'s command-line option). This directive can be used to split long grammar specification files in shorter, meaningful units. After processing tt(pathname) processing continues beyond the tt(%include pathname) directive. Bic()'s main grammar specification file could be: verb( %include spec/declarations.gr %% %include spec/rules.gr ) where tt(spec/declarations.gr) contains declarations and tt(spec/rules.gr) contains the rules. Each of the files included using tt(%include) may itself use tt(%include) directives (which are then processed relative to their locations). The default nesting limit for tt(%include) directives is 10, but the option link(--max-inclusion-depth)(MAXDEPTH) can be used to change this default. tt(%include) directives should be specified on a line of their own. bisonc++-4.13.01/documentation/manual/directives/start.yo0000644000175000017500000000073412633316117022253 0ustar frankfrankSyntax: bf(%start) tt(nonterminal symbol) By default b() uses the the LHS of the first rule in a grammar specification file as the start symbol. I.e., the parser tries to recognize that nonterminal when parsing input. This default behavior may be overriden using the tt(%start) directive. The nonterminal symbol specifies a LHS that may be defined anywhere in the rules section of the grammar specification file. This LHS becomes the grammar's start symbol. bisonc++-4.13.01/documentation/manual/directives/parse.yo0000644000175000017500000000045712633316117022232 0ustar frankfrankSyntax: bf(%parsefun-source) tt(filename) nl() tt(Filename) defines the name of the source file to contain the parser member function tt(parse). Defaults to tt(parse.cc). This directive is overruled by the bf(--parse-source) (bf(-p)) command-line option. 
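Example (using a hypothetical filename):
        verb(
    %parsefun-source myparse.cc
        )
With this specification the parsing member function tt(parse) is written to the file tt(myparse.cc) (in the target directory, cf. the tt(%target-directory) directive).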
bisonc++-4.13.01/documentation/manual/directives/print.yo0000644000175000017500000000056012633316117022247 0ustar frankfrankSyntax: bf(%print-tokens) nl() The tt(%print-tokens) directive provides an implementation of the Parser class's tt(print__) function displaying the current token value and the text matched by the lexical scanner as received by the generated tt(parse) function. The tt(print__) function is also implemented if the tt(--print) command-line option is provided. bisonc++-4.13.01/documentation/manual/directives/expect.yo0000644000175000017500000000263012633316117022403 0ustar frankfrankSyntax: bf(%expect) tt(number) nl() B() normally warns if there are any conflicts in the grammar (see section ref(SHIFTREDUCE)), but many real grammars have harmless em(shift/reduce conflicts) which are resolved in a predictable way and would be difficult to eliminate. It is desirable to suppress the warning about these conflicts unless the number of conflicts changes. You can do this with the tt(%expect) declaration. The argument tt(number) is a decimal integer. The declaration says there should be no warning if there are tt(number) shift/reduce conflicts and no em(reduce/reduce conflicts). The usual warning is given if there are either em(more) or em(fewer) conflicts, em(or) if there are em(any) reduce/reduce conflicts. In general, using tt(%expect) involves these steps: itemization( it() Compile your grammar without tt(%expect). Use the `tt(-V)' option to get a verbose list of where the conflicts occur. B() will also print the number of conflicts. it() Check each of the conflicts to make sure that b()'s default resolution is what you really want. If not, rewrite the grammar and go back to the beginning. it() Add an tt(%expect) declaration, copying the number of (shift-reduce) conflict printed by b(). ) Now b() will stop annoying you about the conflicts you have checked, but it will warn you again if changes in the grammar result in another number or type of conflicts. bisonc++-4.13.01/documentation/manual/directives/precedence.yo0000644000175000017500000000360712633316117023215 0ustar frankfrankSyntax: quote( bf(%left) [ ] terminal(s) nl() bf(%nonassoc) [ ] terminal(s) nl() bf(%right) [ ] terminal(s) ) These directives are called em(precedence directives) (see also section ref(PRECEDENCE) for general information on operator precedence). The tt(%left), tt(%right) or tt(%nonassoc) directive are used to declare tokens and to specify their precedence and associativity, all at once. itemization( it() The em(associativity) of an operator tt(op) determines how repeated uses of the operator em(nest): whether `tt(x op y op z)' is parsed by grouping tt(x) with tt(y) first or by grouping tt(y) with tt(z) first. tt(%left) specifies em(left-associativity) (grouping tt(x) with tt(y) first) and tt(%right) specifies em(right-associativity) (grouping tt(y) with tt(z) first). tt(%nonassoc) specifies em(no) associativity, which means that `tt(x op y op z)' is not a defined operation, and could be considered an error. it() The precedence of an operator determines how it nests with other operators. All the tokens declared in a single precedence directive have equal precedence and nest together according to their associativity. When two tokens declared in different precedence directives associate, the one declared em(later) has the higher precedence and is grouped em(first). 
) The tt() specification is optional, and specifies the type of the semantic value when a token specified to the right of a tt() specification is received. The pointed arrows are part of the type specification; the type itself must be a field of a tt(%union) specification (see section ref(UNION)). When multiple tokens are listed they must be separated by whitespace or by commas. Note that the precedence directives also serve to define token names: symbolic tokens mentioned with these directives should not be defined using tt(%token) directives. bisonc++-4.13.01/documentation/manual/directives/required.yo0000644000175000017500000000232612633316117022735 0ustar frankfrankSyntax: bf(%required-tokens) tt(ntokens) nl() Whenever a syntactic error is detected during the parsing process the next few tokens that are received by the parsing function may easily cause yet another (spurious) syntactic error. In this situation error recovery in fact produces an avalanche of additional errors. If this happens the recovery process may benefit from a slight modification. Rather than reporting every syntactic error encountered by the parsing function, the parsing function may wait for a series of successfully processed tokens before reporting the next error. The directive tt(%required-tokens) can be used to specify this number. E.g., the specification tt(%required-tokens 10) requires the parsing function to process successfully a series of 10 tokens before another syntactic error is reported (and counted). If a syntactic error is encountered before processing 10 tokens then the counter counting the number of successfully processed tokens is reset to zero, no error is reported, but the error recoery procedure continues as usual. The number of required tokens can also be set using the option link(--required-tokens)(REQUIRED). By default the number of required tokens is initialized to 0. bisonc++-4.13.01/documentation/manual/directives/nonterms.yo0000644000175000017500000000236412633316117022764 0ustar frankfrankSyntax: bf(%type) tt( symbol(s))nl() When tt(%polymorphic) is used to specify multiple semantic value types, (non-)terminals can be associated with one of the semantic value types specified with the tt(%polymorphic) directive. When tt(%union) is used to specify multiple semantic value types, (non-)terminals can be associated with one of the tt(union) fields specified with the tt(%union) directive. To associate (non-)terminals with specific semantic value types the bf(%type) directive is used. With this directive, tt(symbol(s)) represents of one or more blank or comma delimited grammatical symbols (i.e., terminal and/or nonterminal symbols); tt(type) is either a polymorphic type-identifier or a field name defined in the tt(%union) specification. The specified nonterminal(s) are automatically associated with the indicate semantic type. The pointed arrows are part of the type specification. When the semantic value type of a terminal symbol is defined the em(lexical scanner) rather than the parser's actions must assign the appropriate semantic value to link(d_val__)(DVAL) just prior to returning the token. To associate terminal symbols with semantic values, terminal symbols can also be specified in a tt(%type) directive. bisonc++-4.13.01/documentation/manual/directives/intro.yo0000644000175000017500000000523012633316117022245 0ustar frankfrankThe b() declarations section of a b() grammar defines the symbols used in formulating the grammar and the data types of semantic values. See section ref(SYMBOLS). 
All token type names (but not single-character literal tokens such as '+' and '*') must be declared. If you need to specify which data type to use for the semantic value (see section ref(MORETYPES)) of nonterminal symbols, these symbols must be declared as well. The first rule in the file by default specifies the em(start symbol). If you want some other symbol to be the start symbol, you must use an explicit tt(%start) directive (see section ref(LANGUAGES)). In this section all of b()'s declarations are discussed. Some of the declarations have already been mentioned, but several more are available. Some declarations define how the grammar parses its input (like tt(%left, %right)); other declarations are available, defining, e.g., the name of the parsing function (by default tt(parse())), or the name(s) of the files generated by b(). In particular readers familiar with Bison (or Bison++) should read this section thoroughly, since b()'s directives are more extensive and different from the `declarations' offered by Bison, and the macros offered by Bison++. Several directives expect file- or path-name arguments. File- or path-names must be specified on the same line as the directive itself, and they start at the first non-blank character following the directive. File- or path-names may contain escape sequences (e.g., if you must: use `tt(\ )' to include a blank in a filename) and continue until the first blank character thereafter. Alternatively, file- or path-names may be surrounded by double quotes (tt("...")) or pointed brackets (tt(<...>)). Pointed brackets surrounding file- or path-names merely function to delimit filenames. They do not refer to, e.g., bf(C++)'s include path. No escape sequences are required for blanks within delimited file- or path-names. Directives accepting a `filename' do not accept path names, i.e., they cannot contain directory separators (tt(/)); options accepting a 'pathname' may contain directory separators. Sometimes directives have analogous command-line options. In those cases command-line options take priority over directives. Some directives may generate errors. This happens when an directive conflicts with the contents of a file which bic() cannot modify (e.g., a parser class header file exists, but doesn't define a name space, but a tt(%namespace) directive was provided). To solve such errore the offending directive could be omitted, the existing file could be removed, or the existing file could be hand-edited according to the directive's specification. bisonc++-4.13.01/documentation/manual/directives/lneeded.yo0000644000175000017500000000133012633316117022507 0ustar frankfrankSyntax: bf(%lsp-needed) nl() Defining this causes b() to include code into the generated parser using the standard location stack. The token-location type defaults to the following struct, defined in the parser's base class when this directive is specified: verb( struct LTYPE__ { int timestamp; int first_line; int first_column; int last_line; int last_column; char *text; }; ) Note that defining this struct type does not imply that its field are also assigned. Some form of communication with the lexical scanner is probably required to initialize the fields of this struct properly. bisonc++-4.13.01/documentation/manual/directives/stype.yo0000644000175000017500000000126012633316117022255 0ustar frankfrankSyntax: bf(%stype typename) nl() The type of the semantic value of tokens. The specification tt(typename) should be the name of an unstructured type (e.g., bf(size_t)). 
By default it is bf(int). See bf(YYSTYPE) in bf(bison). It should not be used if a bf(%union) specification is used. Within the parser class, this type may be used as bf(STYPE__). Any text following tt(%stype) up to the end of the line, up to the first of a series of trailing blanks or tabs or up to a comment-token (tt(//) or tt(/*)) becomes part of the type definition. Be sure em(not) to end a tt(%stype) definition in a semicolon. bisonc++-4.13.01/documentation/manual/directives/parserclass.yo0000644000175000017500000000156412633316117023442 0ustar frankfrankSyntax: bf(%class-name) tt(parser-class-name) nl() By default, b() generates a parser-class by the name of tt(Parser). The default can be changed using this directive which defines the name of the bf(C++) class that will be generated. It may be defined only once and tt(parser-class-name) must be a bf(C++) identifier. If you're familiar with the Bison++ program, please note: itemization( it() This directive replaces the bf(%name) directive previously used by Bison++. it() Contrary to Bison++'s bf(%name) directive, bf(%class-name) may appear anywhere in the directive section of the grammar specification file. ) It is an error if this directive is used and an already existing parser-class header file does not define tt(class `className') and/or if an already existing implementation header file does not define members of the class tt(`className'). bisonc++-4.13.01/documentation/manual/directives/improper.yo0000644000175000017500000000150612633316117022751 0ustar frankfrankSeveral identifiers cannot be used as token names as their use would collide with identifiers that are defined in the parser's base class. In particular, itemization( it() no token should end in two underscores (tt(__)). it() some identifiers are reserved and cannot be used as tokens. They are: verb( ABORT, ACCEPT, ERROR, clearin, debug, error, setDebug ) Except for tt(error), which is a predefined terminal token, these identifiers are the names of functions traditionally defined by b(). The restriction on the above identifers could be lifted, but then the resulting generated parser would no longer be backward compatible with versions before B() 2.0.0. It appears that imposing minimal restrictions on the names of tokens is a small penalty to pay for keeping backward compatibility. ) bisonc++-4.13.01/documentation/manual/directives/classhdr.yo0000644000175000017500000000105212633316117022713 0ustar frankfrank Syntax: bf(%class-header) tt(filename) nl() tt(Filename) defines the name of the file to contain the parser class. Defaults to the name of the parser class plus the suffix tt(.h) This directive is overruled by the bf(--class-header) (bf(-c)) command-line option. It is an error if this directive is used and an already existing parser-class header file does not define tt(class `className') and/or if an already existing implementation header file does not define members of the class tt(`className'). bisonc++-4.13.01/documentation/manual/directives/ltype.yo0000644000175000017500000000113512633316117022247 0ustar frankfrank bf(%ltype typename) nl() Specifies a user-defined token location type. If bf(%ltype) is used, tt(typename) should be the name of an alternate (predefined) type (e.g., bf(size_t)). It should not be used together with a link(%locationstruct)(LOCSTRUCT) specification. From within the parser class, this type may be used as bf(LTYPE__). 
Any text following tt(%ltype) up to the end of the line, up to the first of a series of trailing blanks or tabs or up to a comment-token (tt(//) or tt(/*)) becomes part of the type definition. Be sure em(not) to end a tt(%ltype) definition in a semicolon. bisonc++-4.13.01/documentation/manual/directives/imphdr.yo0000644000175000017500000000167512633316117022406 0ustar frankfrankSyntax: bf(%implementation-header) tt(filename) nl() tt(Filename) defines the name of the file to contain the implementation header. It defaults to the name of the generated parser class plus the suffix tt(.ih). nl() The implementation header should contain all directives and declarations em(only) used by the implementations of the parser's member functions. It is the only header file that is included by the source file containing tt(parse)'s implementation. User defined implementation of other class members may use the same convention, thus concentrating all directives and declarations that are required for the compilation of other source files belonging to the parser class in one header file.nl() This directive is overruled by the bf(--implementation-header) (bf(-i)) command-line option. bisonc++-4.13.01/documentation/manual/directives/namespace.yo0000644000175000017500000000113612633316117023047 0ustar frankfrankSyntax: bf(%namespace) tt(namespace) nl() Defines all of the code generated by bic() in the namespace tt(namespace). By default no namespace is defined. If this options is used the implementation header will contain a commented out tt(using namespace) directive for the requested namespace. In addition, the parser and parser base class header files also use the specified namespace to define their include guard directives. It is an error if this directive is used and an already existing parser-class header file and/or implementation header file does not define tt(namespace identifier). bisonc++-4.13.01/documentation/manual/directives/errorverbose.yo0000644000175000017500000000056412633316117023636 0ustar frankfrankSyntax: bf(%error-verbose) nl() The parser's state stack is dumped to the standard error stream when an error is detected by the tt(parse()) member function. Following a call of the tt(error()) function, the stack is dumped from the top of the stack (highest offset) down to its bottom (offset 0). Each stack element is prefixed by the stack element's index. bisonc++-4.13.01/documentation/manual/directives/output.yo0000644000175000017500000000155712633316117022462 0ustar frankfrankUnless otherwise specified, b() uses the name of the parser-class to derive the names of most of the files it may generate. Below tt() should be interpreted as the name of the parser's class, tt(Parser) by default, but configurable using tt(%class-name) (see section ref(PARSERCLASS)). itemization( it() The parser's base class header: tt(base.h), configurable using tt(%baseclass-header) (see section ref(BCHEADER)) or tt(%filenames) (see section ref(FILES)); it() The parser's class header: tt(.h), configurable using tt(%class-header) (see section ref(CHEADER)) or tt(%filenames) (see section ref(FILES)); it() The parser's implementation header file: tt(.ih), configurable using tt(%implementation-header) (see section ref(IHEADER)) or tt(%filenames) (see section ref(FILES)); ) bisonc++-4.13.01/documentation/manual/directives/baseclass.yo0000644000175000017500000000130612633316117023052 0ustar frankfrank Syntax: bf(%baseclass-header) tt(filename) nl() tt(Filename) defines the name of the file to contain the parser's base class. 
This class defines, e.g., the parser's symbolic tokens. Defaults to the name of the parser class plus the suffix tt(base.h). It is always generated, unless (re)writing is suppressed by the tt(--no-baseclass-header) and tt(--dont-rewrite-baseclass-header) options. This directive is overruled by the bf(--baseclass-header) (bf(-b)) command-line option. It is an error if this directive is used and an already existing parser class header file does not contain tt(#include "filename"). bisonc++-4.13.01/documentation/manual/directives/preinclude.yo0000644000175000017500000000101412633316117023240 0ustar frankfrank Syntax: bf(%baseclass-preinclude) tt(pathname)nl() tt(Pathname) defines the path to the file preincluded in the parser's base-class header. See the description of the link(--baseclass-preinclude)(PREINCLUDE) option for details about this directive. By default `filename' is surrounded by double quotes; it's OK, however, to provide them yourself. When the argument is surrounded by em(pointed brackets) tt(#include
) is used. bisonc++-4.13.01/documentation/manual/directives/debug.yo0000644000175000017500000000104012633316117022173 0ustar frankfrankSyntax: bf(%debug) nl() Provide tt(parse()) and its support functions with debugging code, showing the actual parsing process on the standard output stream. When included, the debugging output is active by default, but its activity may be controlled using the bf(setDebug(bool on-off)) member. Note that no tt(#ifdef DEBUG) macros are used anymore. Rerun b() without the bf(--debug) option to generate an equivalent parser not containing the debugging code. bisonc++-4.13.01/documentation/manual/directives/scannertokenfunction.yo0000644000175000017500000000124712633316117025356 0ustar frankfrankSyntax: bf(%scanner-token-function) tt(function-call) nl() The scanner function returning the next token, called from the generated parser's tt(lex) function. A complete function call expression should be provided (including a scanner object, if used). Example: verb( %scanner-token-function d_scanner.lex() ) If the function call contains white space tt(scanner-token-function) should be surrounded by double quotes. It is an error if this directive is used and the specified scanner token function is not called from the code in an already existing implementation header. bisonc++-4.13.01/documentation/manual/directives/tokens.yo0000644000175000017500000000322312633316117022415 0ustar frankfrankSyntax: quote( bf(%token) tt(terminal token(s)) nl() bf(%token) [ ] tt(terminal token(s)) ) The bf(%token) directive is used to define one or more symbolic terminal tokens. When multiple tokens are listed they must be separated by whitespace or by commas. The tt() specification is optional, and specifies the type of the semantic value when a token specified to the right of a tt() specification is received. The pointed arrows are part of the type specification; the type itself must be a field of a tt(%union) specification (see section ref(UNION)). b() converts symbolic tokens (including those defined by the precedence directives (cf. section ref(PRECEDENCE))) into tt(Parser::Tokens) enumeration values (where `tt(Parser)' is the name of the generated parser class, see section ref(PARSERCLASS)). This allows the lexical scanner member function tt(lex()) to return these token values by name directly, and it allows externally defined lexical scanners (called by tt(lex())) to return token values as tt(Parser::name). When an externally defined lexical scanner is used, it should include tt(Parserbase.h), the parser's base class header file, in order to be able to use the tt(Parser::Tokens) enumeration type. Although it em(is) possible to specify explicitly the numeric code for a token type by appending an integer value in the field immediately following the token name (as in tt(%token NUM 300)) this practice is deprecated. It is generally best to let b() choose the numeric values for all token types. B() automatically selects values that don't conflict with each other or with ASCII character values. bisonc++-4.13.01/documentation/manual/directives/union.yo0000644000175000017500000000250412633316117022243 0ustar frankfrank Syntax: bf(%union) tt(union-definition body)nl() The tt(%union) directive specifies the entire collection of possible data types for semantic values. The keyword tt(%union) is followed by a pair of braces containing the same thing that goes inside a union in bf(C++). 
For example: verb( %union { double u_val; symrec *u_tptr; }; ) This says that the two alternative types are tt(double) and tt(symrec *). They are given names tt(u_val) and tt(u_tptr); these names are used in the tt(%token) and tt(%type) directives to pick one of the types for a terminal or nonterminal symbol (see section ref(TYPE)). Notes: itemization( it() The semicolon following the closing brace is em(optional). it() bf(C++-11) does allow class types to be used in tt(union) definitions, so they can also be used in tt(%union) directives. When a class type variant is required, all required constructors, the destructor and other members (like overloaded assignment operators) must be able to handle the actual class type data fields properly. A discussion of how to use unrestricted unions is beyon this manual's scope, but can be found, e.g., in the url(C++ Annotations)(http://cppannotations.sf.net). See also section ref(MORETYPES). ) bisonc++-4.13.01/documentation/manual/directives/lines.yo0000644000175000017500000000102612633316117022223 0ustar frankfrankSyntax: bf(%no-lines) nl() Do not put bf(#line) preprocessor directives in the file containing the parser's tt(parse()) function. By default tt(#line) preprocessor directives are inserted just before action blocks in the generated tt(parse.cc) file. The tt(#line) directives allow compilers and debuggers to associate errors with lines in your grammar specification file, rather than with the source file containing the tt(parse) function itself. bisonc++-4.13.01/documentation/manual/directives/locstruct.yo0000644000175000017500000000073512633316117023141 0ustar frankfrankSyntax: bf(%locationstruct) tt(struct-definition) nl() Defines the organization of the location-struct data type bf(LTYPE__). This struct should be specified analogously to the way the parser's stacktype is defined using bf(%union) (see below). The location struct type is named bf(LTYPE__). If neither bf(locationstruct) nor bf(LTYPE__) is specified, the default link(LTYPE__)(LSPNEEDED) struct is used. bisonc++-4.13.01/documentation/manual/directives/filenames.yo0000644000175000017500000000061612633316117023060 0ustar frankfrankSyntax: bf(%filenames) tt(filename) nl() tt(Filename) is a generic filename that is used for all header files generated by bic(). Options defining specific filenames are also available (which then, in turn, overrule the name specified by this option). This directive is overruled by the bf(--filenames) (bf(-f)) command-line option. bisonc++-4.13.01/documentation/manual/directives/flex.yo0000644000175000017500000000047312633316117022054 0ustar frankfrankSyntax: bf(%flex) nl() When provided, the scanner matched text function is called as tt(d_scanner.YYText()), and the scanner token function is called as tt(d_scanner.yylex()). This directive is only interpreted if the tt(%scanner) directive is also provided. bisonc++-4.13.01/documentation/manual/directives/weaktags.yo0000644000175000017500000000213012633316117022714 0ustar frankfrankBy default, the tt(%polymorphic) directive declares a strongly typed enum: tt(enum class Tag__), and code generated by bic() always uses the tt(Tag__) scope when referring to tag identifiers. It is often possible (by pre-associating tokens with tags, using tt(%type) directives) to avoid having to use tags in user-code. If tags em(are) explicitly used, then they must be prefixed with the tt(Tag__) scope. 
Before the arrival of the C++-11 standard strongly typed enumerations didn't exist, and explicit enum-type scope prefixes were usually omitted. This works fine, as long as there are no name-conflicts: parser-tokens, other enumerations or variables could use identifiers also used by tt(enum Tag__). This results in compilation errors that can simply be prevented using strongly typed enumerations. The tt(%weak-tags) directive can be specified when the tt(Tag__) enum should em(not) be declared as a strongly typed enum. When in doubt, don't use this directive and stick to using the strongly typed tt(Tag__) enum. When using tt(%weak-tags) be prepared for compilation errors caused by name collisions. bisonc++-4.13.01/documentation/manual/directives/prec.yo0000644000175000017500000000223712633316117022047 0ustar frankfrankSyntax: bf(%prec) tt(token) nl() The construction bf(%prec) tt(token) may be used in production rules to overrule the actual precendence of an operator in a particular production rule. Well known is the construction verb( expression: '-' expression %prec UMINUS { ... } ) Here, the default priority and precedence of the tt(`-') token as the subtraction operator is overruled by the precedence and priority of the tt(UMINUS) token, which is frequently defined as: verb( %right UMINUS ) E.g., a list of arithmetic operators could consists of: verb( %left '+' '-' %left '*' '/' '%' %right UMINUS ) giving tt('*' '/') and tt('%') a higher priority than tt('+') and tt('-'), ensuring at the same time that tt(UMINUS) is given both the highest priority and a right-associativity. In the above production rule the operator order would cause the construction verb( '-' expression ) to be evaluated from right to left, having a higher precedence than either the multiplication or the addition operators. bisonc++-4.13.01/documentation/manual/invoking.yo0000644000175000017500000000016712633316117020601 0ustar frankfranklsect(OPTIONS)(Bisonc++ options) includefile(invoking/options.yo) sect(Bisonc++ usage) includefile(invoking/usage.yo) bisonc++-4.13.01/documentation/manual/error.yo0000644000175000017500000000034412633316117020103 0ustar frankfrankincludefile(error/intro.yo) sect(Syntactical Error Recovery) includefile(error/syntactical.yo) subsect(Error Recovery) includefile(error/recovery.yo) sect(Semantical Error Recovery) includefile(error/semantical.yo) bisonc++-4.13.01/documentation/manual/conditions/0000755000175000017500000000000012633316117020551 5ustar frankfrankbisonc++-4.13.01/documentation/manual/conditions/gpl.yo0000644000175000017500000000040712633316117021705 0ustar frankfrankThe text of the em(GNU General Public License) (GPL) is frequently found in files named tt(COPYING). On em(Debian) systems the GPL may be found in the file tt(/usr/share/common-licenses/GPL). The GPL is shown below: verbinclude(/usr/share/common-licenses/GPL) bisonc++-4.13.01/documentation/manual/conditions/intro.yo0000644000175000017500000000031012633316117022247 0ustar frankfrankB() may be used according to the em(GNU General Public License) (GPL). In short, this implies that everybody is allowed to use tt(b()) and its generated software in any program he/she is developing. 
bisonc++-4.13.01/documentation/manual/version.yo0000644000175000017500000000005712633316117020440 0ustar frankfrankSUBST(DOCVERSION)(0.98.003) SUBST(YEARS)(2005) bisonc++-4.13.01/documentation/manual/invoking/0000755000175000017500000000000012634776615020242 5ustar frankfrankbisonc++-4.13.01/documentation/manual/invoking/options.yo0000644000175000017500000005163612633316117022303 0ustar frankfrank Where available, single letter options are listed between parentheses beyond their associated long-option variants. Single letter options require arguments if their associated long options require arguments. Options affecting the class header or implementation header file are ignored if these files already exist. Options accepting a `filename' do not accept path names, i.e., they cannot contain directory separators (tt(/)); options accepting a 'pathname' may contain directory separators. Some options may generate errors. This happens when an option conflicts with the contents of a file which bic() cannot modify (e.g., a parser class header file exists, but doesn't define a name space, but a tt(--namespace) option was provided). To solve the error the offending option could be omitted, the existing file could be removed, or the existing file could be hand-edited according to the option's specification. Note that bic() currently does not handle the opposite error condition: if a previously used option is omitted, then bic() does not detect the inconsistency. In those cases compilation errors may be generated. COMMENT( class header: warn for class-name mismatch warn for not including baseclass-header warn for namespace mismatch if the 'scanner' option was provided: warn for missing "scanner" include-spec warn for missing Scanner d_scanner spec. implementation header: warn for class-name mismatch (in inline defined members) warn for not including the class header warn for namespace mismatch warn for a mismatch in the scanner-token-function name END) itemization( it() loption(analyze-only) (soption(A))nl() Only analyze the grammar. No files are (re)written. This option can be used to test the grammatic correctness of modification `in situ', without overwriting previously generated files. If the grammar contains syntactic errors only syntax analysis is performed. it() lsoption(baseclass-header)(b)(filename)nl() tt(Filename) defines the name of the file to contain the parser's base class. This class defines, e.g., the parser's symbolic tokens. Defaults to the name of the parser class plus the suffix tt(base.h). It is generated, unless otherwise indicated (see tt(--no-baseclass-header) and tt(--dont-rewrite-baseclass-header) below). It is an error if this option is used and an already existing parser class header file does not contain tt(#include "filename"). it() label(PREINCLUDE) lsoption(baseclass-preinclude)(H)(pathname)nl() tt(Pathname) defines the path to the file preincluded in the parser's base-class header. This option is needed in situations where the base class header file refers to types which might not yet be known. E.g., with polymorphic semantic values a tt(std::string) value type might be used. Since the tt(string) header file is not by default included in tt(parserbase.h) we somehow need to inform the compiler about this and possibly other headers. The suggested procedure is to use a pre-include header file declaring the required types. By default `tt(header)' is surrounded by double quotes: tt(#include "header") is used when the option tt(-H header) is specified. 
When the argument is surrounded by pointed brackets tt(#include
) is included. In the latter case, quotes might be required to escape interpretation by the shell (e.g., using tt(-H '
')). it() lsoption(baseclass-skeleton)(B)(pathname)nl() tt(Pathname) defines the path name to the file containing the skeleton of the parser's base class. It defaults to the installation-defined default path name (e.g., tt(/usr/share/bisonc++/) plus tt(bisonc++base.h)). it() lsoption(class-header)(c)(filename)nl() tt(Filename) defines the name of the file to contain the parser class. Defaults to the name of the parser class plus the suffix tt(.h) It is an error if this option is used and an already existing implementation header file does not contain tt(#include "filename"). it() loption(class-name) tt(className) nl() Defines the name of the bf(C++) class that is generated. If neither this option, nor the tt(%class-name) directory is specified, then the default class name (tt(Parser)) is used. It is an error if this option is used and an already existing parser-class header file does not define tt(class `className') and/or if an already existing implementation header file does not define members of the class tt(`className'). it() lsoption(class-skeleton)(C)(pathname)nl() tt(Pathname) defines the path name to the file containing the skeleton of the parser class. It defaults to the installation-defined default path name (e.g., tt(/usr/share/bisonc++/) plus tt(bisonc++.h)). it() loption(construction)nl() Details about the construction of the parsing tables are written to the same file as written by the tt(--verbose) option (i.e., tt(.output), where tt() is the input file read by bic(). This information is primarily useful for developers. It augments the information written to the verbose grammar output file, generated by the tt(--verbose) option. it() loption(debug)nl() Provide tt(parse) and its support functions with debugging code, showing the actual parsing process on the standard output stream. When included, the debugging output is active by default, but its activity may be controlled using the tt(setDebug(bool on-off)) member. An tt(#ifdef DEBUG) macro is not supported by bic(). Rerun bic() without the tt(--debug) option to remove the debugging code. it() label(ERRORVERBOSE)loption(error-verbose)nl() When a syntactic error is reported, the generated parse function dumps the parser's state stack to the standard output stream. The stack dump shows on separate lines a stack index followed by the state stored at the indicated stack element. The first stack element is the stack's top element. it() lsoption(filenames)(f)(filename)nl() tt(Filename) is a generic file name that is used for all header files generated by bic(). Options defining specific file names are also available (which then, in turn, overrule the name specified by this option). it() loption(flex)nl() Bic() generates code calling tt(d_scanner.yylex()) to obtain the next lexical token, and calling tt(d_scanner.YYText()) for the matched text, unless overruled by options or directives explicitly defining these functions. By default, the interface defined by bf(flexc++)(1) is used. This option is only interpreted if the tt(--scanner) option or tt(%scanner) directive is also used. it() loption(help) (soption(h))nl() Write basic usage information to the standard output stream and terminate. it() lsoption(implementation-header)(i)(filename)nl() tt(Filename) defines the name of the file to contain the implementation header. It defaults to the name of the generated parser class plus the suffix tt(.ih). 
The implementation header should contain all directives and declarations em(only) used by the implementations of the parser's member functions. It is the only header file that is included by the source file containing tt(parse)'s implementation. User defined implementation of other class members may use the same convention, thus concentrating all directives and declarations that are required for the compilation of other source files belonging to the parser class in one header file. it() lsoption(implementation-skeleton)(I)(pathname)nl() tt(Pathname) defines the path name to the file containing the skeleton of the implementation header. t defaults to the installation-defined default path name (e.g., tt(/usr/share/bisonc++/) plus tt(bisonc++.ih)). it() loption(insert-stype)nl() This option is only effective if the tt(debug) option (or tt(%debug) directive) has also been specified. When tt(insert-stype) has been specified the parsing function's debug output also shows selected semantic values. It should only be used if objects or variables of the semantic value type tt(STYPE__) can be inserted into tt(ostreams). it() label(MAXDEPTH) laoption(max-inclusion-depth)(value)nl() Set the maximum number of nested grammar files. Defaults to 10. it() loption(namespace) tt(identifier) nl() Define all of the code generated by bic() in the name space tt(identifier). By default no name space is defined. If this options is used the implementation header is provided with a commented out tt(using namespace) declaration for the specified name space. In addition, the parser and parser base class header files also use the specified namespace to define their include guard directives. It is an error if this option is used and an already existing parser-class header file and/or implementation header file does not define tt(namespace identifier). it() loption(no-baseclass-header)nl() Do not write the file containing the parser class' base class, even if that file doesn't yet exist. By default the file containing the parser's base class is (re)written each time bic() is called. Note that this option should normally be avoided, as the base class defines the symbolic terminal tokens that are returned by the lexical scanner. When the construction of this file is suppressed, modifications of these terminal tokens are not communicated to the lexical scanner. it() loption(no-decoration) (soption(D))nl() Do not include the user-defined actions when generating the parser's tt(parse) member. This effectively generates a parser which merely performs semantic checks, without performing the actions which are normally executed when rules have been matched. This may be useful in situations where a (partially or completely) decorated grammar is reorganized, and the syntactical correctness of the modified grammar must be verified, or in situations where the grammar has already been decorated, but functions which are called from the rules's actions have not yet been impleemented. it() loption(no-default-action-return) (soption(N))nl() Do not use the default tt($$ = $1) assignment of semantic values when returning from an action block. When this option is specified then bic() generates a warning for typed rules (non-terminals) whose action blocks do not provide an explicit tt($$) return value. it() loption(no-lines)nl() Do not put tt(#line) preprocessor directives in the file containing the parser's tt(parse) function. By default the file containing the parser's tt(parse) function also contains tt(#line) preprocessor directives. 
This option allows the compiler and debuggers to associate errors with lines in your grammar specification file, rather than with the source file containing the tt(parse) function itself. it() loption(no-parse-member)nl() Do not write the file containing the parser's predefined parser member functions, even if that file doesn't yet exist. By default the file containing the parser's tt(parse) member function is (re)written each time bic() is called. Note that this option should normally be avoided, as this file contains parsing tables which are altered whenever the grammar definition is modified. it() loption(own-debug)nl() Extensively displays the actions performed by bic()'s parser when it processes the grammar specification file(s). This implies the tt(--verbose) option. it() loption(own-tokens) (soption(T))nl() The tokens returned as well as the text matched when bic() reads its input files(s) are shown when this option is used. This option does em(not) result in the generated parsing function displaying returned tokens and matched text. If that is what you want, use the tt(--print-tokens) option. it() lsoption(parsefun-skeleton)(P)(pathname)nl() tt(Pathname) defines the path name of the file containing the parsing member function's skeleton. It defaults to the installation-defined default path name (e.g., tt(/usr/share/bisonc++/) plus tt(bisonc++.cc)). it() lsoption(parsefun-source)(p)(filename)nl() tt(Filename) defines the name of the source file to contain the parser member function tt(parse). Defaults to tt(parse.cc). it() lsoption(polymorphic-skeleton)(M)(pathame)nl() tt(Pathname) defines the path name of the file containing the skeleton of the polymorphic template classes. It defaults to the installation-defined default path name (e.g., tt(/usr/share/bisonc++/) plus tt(bisonc++polymorphic)). it() lsoption(polymorphic-inline-skeleton)(m)(pathname)nl() tt(Pathname) defines the path name of the file containing the skeleton of the inline implementations of the members of the polymorphic template classes. It defaults to the installation-defined default path name (e.g., tt(/usr/share/bisonc++/) plus tt(bisonc++polymorphic)). it() loption(print-tokens) (soption(t))nl() The generated parsing function implements a function tt(print__) displaying (on the standard output stream) the tokens returned by the parser's scanner as well as the corresponding matched text. This implementation is suppressed when the parsing function is generated without using this option. The member tt(print__)) is called from tt(Parser::print), which is defined in-line in the the parser's class header. Calling tt(Parser::print__) can thus easily be controlled from tt(print), using, e.g., a variable that set by the program using the parser generated by bic(). This option does em(not) show the tokens returned and text matched by bic() itself when it is reading its input file(s). If that is what you want, use the tt(--own-tokens) option. it() label(REQUIRED) laoption(required-tokens)(number)nl() Following a syntactic error, require at least tt(number) successfully processed tokens before another syntactic error can be reported. By default tt(number) is zero. it() label(SCANOPT) lsoption(scanner)(s)(pathname)nl() tt(Pathname) defines the path name to the file defining the scanner's class interface (e.g., tt("../scanner/scanner.h")). 
When this option is used the parser's member tt(int lex()) is predefined as verb( int Parser::lex() { return d_scanner.lex(); } ) and an object tt(Scanner d_scanner) is composed into the parser (but see also option tt(scanner-class-name)). The example shows the function that's called by default. When the tt(--flex) option (or tt(%flex) directive) is specified the function tt(d_scanner.yylex()) is called. Any other function to call can be specified using the tt(--scanner-token-function) option (or tt(%scanner-token-function) directive). By default bic() surrounds tt(pathname) by double quotes (using, e.g., tt(#include "pathname")). When tt(pathname) is surrounded by pointed brackets tt(#include ) is included. It is an error if this option is used and an already existing parser class header file does not include tt(`pathname'). it() loption(scanner-class-name) tt(scannerClassName) nl() Defines the name of the scanner class, declared by the tt(pathname) header file that is specified at the tt(scanner) option or directive. By default the class name tt(Scanner) is used. It is an error if this option is used and either the tt(scanner) option was not provided, or the parser class interface in an already existing parser class header file does not declare a scanner class tt(d_scanner) object. it() loption(scanner-debug)nl() Show de scanner's matched rules and returned tokens. This offers an extensive display of the rules and tokens matched and returned by bic()'s scanner, not of just the tokens and matched text received by bic(). If that is what you want use the tt(--own-tokens) option. it() laoption(scanner-matched-text-function)(function-call)nl() The scanner function returning the text that was matched at the last call of the scanner's token function. A complete function call expression should be provided (including a scanner object, if used). This option overrules the tt(d_scanner.matched()) call used by default when the tt(%scanner) directive is specified, and it overrules the tt(d_scanner.YYText()) call used when the tt(%flex) directive is provided. Example: verb( --scanner-matched-text-function "myScanner.matchedText()" ) it() laoption(scanner-token-function)(function-call)nl() The scanner function returning the next token, called from the parser's tt(lex) function. A complete function call expression should be provided (including a scanner object, if used). This option overrules the tt(d_scanner.lex()) call used by default when the tt(%scanner) directive is specified, and it overrules the tt(d_scanner.yylex()) call used when the tt(%flex) directive is provided. Example: verb( --scanner-token-function "myScanner.nextToken()" ) It is an error if this option is used and the scanner token function is not called from the code in an already existing implementation header. it() loption(show-filenames)nl() Writes the names of the generated files to the standard error stream. it() lsoption(skeleton-directory)(S)(directory)nl() Specifies the directory containing the skeleton files. This option can be overridden by the specific skeleton-specifying options (tt(-B -C, -H, -I, -M) and tt(-m)). it() laoption(target-directory)(pathname) nl() tt(Pathname) defines the directory where generated files should be written. By default this is the directory where bic() is called. it() loption(thread-safe)nl() No static data are modified, making bic() thread-safe. it() loption(usage)nl() Write basic usage information to the standard output stream and terminate. 
it() loption(verbose) (soption(V))nl() Write a file containing verbose descriptions of the parser states and what is done for each type of look-ahead token in that state. This file also describes all conflicts detected in the grammar, both those resolved by operator precedence and those that remain unresolved. It is not created by default, but if requested the information is written on tt(.output), where tt() is the grammar specification file passed to bic(). it() loption(version) (soption(v))nl() Display bic()'s version number and terminate. ) bisonc++-4.13.01/documentation/manual/invoking/usage.yo0000644000175000017500000000065312633316117021705 0ustar frankfrankWhen b() is called without any arguments it generates the following usage information: COMMENT(The usage file should be removed by cleanup, in order to make sure that it is created using the latest bisonc++ version) TYPEOUT( If a message about a failing NOEXPANDINCLUDE is shown, create the file `usage' in documentation/manual/invoking containing the latest usage info ) verbinclude(usage.txt) bisonc++-4.13.01/documentation/manual/concepts.yo0000644000175000017500000000114412633316117020567 0ustar frankfrankincludefile(concepts/intro) lsect(LANGUAGES)(Languages and Context-Free Grammars) includefile(concepts/languages) sect(From Formal Rules to Bisonc++ Input) includefile(concepts/formal) sect(Semantic Values) includefile(concepts/semantic) sect(Semantic Actions) includefile(concepts/actions) sect(Bisonc++ output: the Parser class) includefile(concepts/output) subsect(Bisonc++: an optionally reentrant Parser) includefile(concepts/reentrant.yo) sect(Stages in Using Bisonc++) includefile(concepts/stages) lsect(LAYOUT)(The Overall Layout of a Bisonc++ Grammar File) includefile(concepts/layout) bisonc++-4.13.01/documentation/manual/preamble.yo0000644000175000017500000001004512633316117020540 0ustar frankfrankNOUSERMACRO(terminal nonterminal member token files) DEFINEMACRO(manual)(0)() SUBST(BSSP)(\ ) SUBST(MYEMAIL)(f.b.brokken@rug.nl) SUBST(MAILTO)(CHAR(109)ailto:MYEMAIL) SUBST(URLEMAIL)(url(email)(MAILTO)) SUBST(OPENPAR)(CHAR(40)) SUBST(CLOSEPAR)(CHAR(41)) SUBST(AFFILIATION)(\ Center for Information Technology,nl() University of Groningen nl()\ Nettelbosje 1,nl()\ P.O. 
Box 11044,nl()\ 9700 CA Groningen nl()\ The Netherlands nl()) sethtmlfigureext(.gif) COMMENT(While converting figures to .jpg) IFDEF(latex) (\ latexoptions(a4paper,twoside)\ latexpackage(latin1)(inputenc)\ latexpackage()(makeidx)\ COMMENT( latexpackage()(pandora)\ latexpackage()(bookman)\ )\ latexpackage()(newcent)\ latexpackage()(epsf)\ IFDEF(us)(\ latexpackage()(cplusplusus)\ )(\ latexpackage()(cplusplus)\ )\ makeindex()\ sloppyhfuzz(70)\ noxlatin()\ latexlayoutcmds(\setcounter{secnumdepth}{3})\ latexlayoutcmds(\pagestyle{headings})\ )() DEFINEMACRO(itx)(0)(it()) DEFINEMACRO(itemlist)(1)(ARG1) DEFINEMACRO(tr)(3)(\ row(cell(ARG1)cell(verb( ))\ htmlcommand()ARG2+htmlcommand()cell(verb( ))\ cell(ARG3))) DEFINEMACRO(turl)(2)(\ IFDEF(html)\ (htmlcommand()ARG1+htmlcommand())\ (url(ARG1)(ARG2))) DEFINEMACRO(tlurl)(1)(\ IFDEF(html)\ (htmlcommand()ARG1+htmlcommand())\ (lurl(ARG1))) DEFINEMACRO(lshift)(0)(\ IFDEF(latex)(\ NOTRANS($<<$)\ )(\ NOTRANS(<<)\ )\ ) DEFINEMACRO(verbinsert)(2)(\ PIPETHROUGH(yodlverbinsert ARG1 ARG2)()\ ) DEFINEMACRO(rshift)(0)(\ IFDEF(latex)(\ NOTRANS($>>$)\ )(\ NOTRANS(>>)\ )\ ) DEFINEMACRO(oplshift)(0)(\ tt(operator)\ IFDEF(latex)(\ NOTRANS($<<$)\ )(\ NOTRANS(<<)\ )\ tt(())\ ) DEFINEMACRO(oprshift)(0)(\ tt(operator)\ IFDEF(latex)(\ NOTRANS($>>$)\ )(\ NOTRANS(>>)\ )\ tt(())\ ) DEFINEMACRO(decrement)(0)(\ IFDEF(latex)(\ NOTRANS($--$)\ )(\ NOTRANS(--)\ )\ ) DEFINEMACRO(opdecrement)(0)(\ tt(operator)\ IFDEF(latex)(\ NOTRANS($--$)\ )(\ NOTRANS(--)\ )\ tt(())\ ) DEFINEMACRO(iopdecrement)(0)(\ hix(operator--())\ opdecrement()\ ) DEFINECOUNTER(htmlAnchor)(0) def(x)(1)(IFDEF(html)(htmlcommand())()ARG1) def(linkit)(2)(it()link(Chapter )(ARG1)ref(ARG1)link(: ARG2.)(ARG1)) def(itt)(1)(it()tt(ARG1)) def(centt)(1)(\ verb( ARG1)\ ) def(rangett)(1)(tt(CHAR(91)ARG1+CHAR(41))) def(endOfFile)(1)(tt(CHAR(69)CHAR(79)CHAR(70))) def(c)(1)(COMMENT(ARG1)) def(hix)(1)(\ IFDEF(html)(\ label(an+USECOUNTER(htmlAnchor))\ htmlcommand( )\ )(\ IFDEF(latex)(\ latexcommand(\index{)\ ARG1\ +latexcommand(})\ )()\ )\ ) def(fst)(1)(tt(FIRST(ARG1))) def(flw)(1)(tt(FOLLOW(ARG1))) def(fstrow)(3)(row(cell(ARG1)cell(ARG2)cell( )cell(ARG3))) def(ttrow)(4)(row(cell(tt(ARG1))cell()cell(tt(ARG2))cell()cell(tt(ARG3)) cell()cell(ARG4))) def(hi)(1)(hix(ARG1)) def(i)(1)(hix(ARG1)ARG1) def(ti)(1)(hix(ARG1)tt(ARG1)) def(bi)(1)(hix(ARG1)bf(ARG1)) def(emi)(1)(hix(ARG1)em(ARG1)) def(iti)(1)(it()ti(ARG1)) def(rangeti)(1)(ti(CHAR(91)ARG1+CHAR(41))) def(itht)(2)(it()hix(ARG1)tt(ARG2)) def(ittq)(2)(it()ti(ARG1):quote(ARG2)) def(ithtq)(3)(it()hix(ARG1)tt(ARG2):quote(ARG3)) DEFINEMACRO(laoption)(2)(\ bf(--ARG1)=tt(ARG2)\ ) DEFINEMACRO(lsoption)(3)(\ bf(--ARG1)=tt(ARG3) (bf(-ARG2))\ ) DEFINEMACRO(loption)(1)(\ bf(--ARG1)\ ) DEFINEMACRO(soption)(1)(\ bf(-ARG1)\ ) DEFINEMACRO(bic)(0)(bf(bisonc++)) DEFINEMACRO(Bic)(0)(bf(Bisonc++)) DEFINEMACRO(epsilon)(0)(\ IFDEF(html)(\ htmlcommand(ε) )\ (\ IFDEF(latex)(\ latexcommand(\epsilon)\ )\ (e)\ )\ ) bisonc++-4.13.01/documentation/manual/introduction.yo0000644000175000017500000000351012633316117021471 0ustar frankfrankB() is a general-purpose parser generator that converts a grammar description for an LALR(1) context-free grammar into a bf(C++) class to parse that grammar. Once you are proficient with b(), you may use it to develop a wide range of language parsers, from those used in simple desk calculators to complex programming languages. 
B() is highly comparable to the program bison++, written by Alain Coetmeur: all properly-written bison++ grammars ought to be convertible to b() grammars after very little or no change. Anyone familiar with bison++ or its precursor, bison, should be able to use b() with little trouble. You need to be fluent in the bf(C++) programming language in order to use b() or to understand this manual. This manual closely resembles bf(bison)(1)'s user guide. In fact, many sections of that manual were copied straight into this manual. With b() distributions (both the full source distribution and the binary tt(.deb) distributions) bf(bison)'s original manual is included in both em(PostScript) and (converted from the tt(texi) format) tt(HTML) format. Where necessary, sections of the original manual were adapted to b()'s characteristics. Some sections were removed, and some new sections were added to the current manual. Expect upgrades of the manual to appear without further notice. Upgrades will be announced in the manual's title. The current manual starts with tutorial chapters that explain the basic concepts of using b() and show three explained examples, each building on the previous one (where available). If you don't know b(), bison++ or bison, start by reading these chapters. Reference chapters follow which describe specific aspects of the program b() in detail. B() was designed and built by url(Frank B. Brokken)(mailto:f.b.brokken@rug.nl). The program's initial release was constructed between November 2004 and May 2005. bisonc++-4.13.01/documentation/usage/0000755000175000017500000000000012634776615016245 5ustar frankfrankbisonc++-4.13.01/documentation/usage/usage.cc0000644000175000017500000000036412633316117017645 0ustar frankfrank#include #include using namespace std; #include "../../VERSION" char version[] = VERSION; char year[] = YEARS; #define _INCLUDED_BISONCPP_H_ #include "../../usage.cc" int main() { usage("bisonc++"); return 0; } bisonc++-4.13.01/documentation/html/0000755000175000017500000000000012633316117016067 5ustar frankfrankbisonc++-4.13.01/documentation/html/bison_1.html0000644000175000017500000001022512633316117020307 0ustar frankfrank Bison 2.21.5: Introduction

Introduction

Bison is a general-purpose parser generator that converts a grammar description for an LALR(1) context-free grammar into a C program to parse that grammar. Once you are proficient with Bison, you may use it to develop a wide range of language parsers, from those used in simple desk calculators to complex programming languages.

Bison is upward compatible with Yacc: all properly-written Yacc grammars ought to work with Bison with no change. Anyone familiar with Yacc should be able to use Bison with little trouble. You need to be fluent in C programming in order to use Bison or to understand this manual.

We begin with tutorial chapters that explain the basic concepts of using Bison and show three explained examples, each building on the last. If you don't know Bison or Yacc, start by reading these chapters. Reference chapters follow which describe specific aspects of Bison in detail.

Bison was written primarily by Robert Corbett; Richard Stallman made it Yacc-compatible. Wilfred Hansen of Carnegie Mellon University added multicharacter string literals and other features.

This edition corresponds to version 2.21.5 of Bison.



This document was generated by Frank B. Brokken on January, 28 2005 using texi2html bisonc++-4.13.01/documentation/html/bison_9.html0000644000175000017500000002076312633316117020327 0ustar frankfrank Bison 2.21.5: Error Recovery

6. Error Recovery

It is not usually acceptable to have a program terminate on a parse error. For example, a compiler should recover sufficiently to parse the rest of the input file and check it for errors; a calculator should accept another expression.

In a simple interactive command parser where each input is one line, it may be sufficient to allow yyparse to return 1 on error and have the caller ignore the rest of the input line when that happens (and then call yyparse again). But this is inadequate for a compiler, because it forgets all the syntactic context leading up to the error. A syntax error deep within a function in the compiler input should not cause the compiler to treat the following line like the beginning of a source file.

You can define how to recover from a syntax error by writing rules to recognize the special token error. This is a terminal symbol that is always defined (you need not declare it) and reserved for error handling. The Bison parser generates an error token whenever a syntax error happens; if you have provided a rule to recognize this token in the current context, the parse can continue.

For example:

 
stmnts:  /* empty string */
        | stmnts '\n'
        | stmnts exp '\n'
        | stmnts error '\n'

The fourth rule in this example says that an error followed by a newline makes a valid addition to any stmnts.

What happens if a syntax error occurs in the middle of an exp? The error recovery rule, interpreted strictly, applies to the precise sequence of a stmnts, an error and a newline. If an error occurs in the middle of an exp, there will probably be some additional tokens and subexpressions on the stack after the last stmnts, and there will be tokens to read before the next newline. So the rule is not applicable in the ordinary way.

But Bison can force the situation to fit the rule, by discarding part of the semantic context and part of the input. First it discards states and objects from the stack until it gets back to a state in which the error token is acceptable. (This means that the subexpressions already parsed are discarded, back to the last complete stmnts.) At this point the error token can be shifted. Then, if the old look-ahead token is not acceptable to be shifted next, the parser reads tokens and discards them until it finds a token which is acceptable. In this example, Bison reads and discards input until the next newline so that the fourth rule can apply.

The choice of error rules in the grammar is a choice of strategies for error recovery. A simple and useful strategy is simply to skip the rest of the current input line or current statement if an error is detected:

 
stmnt: error ';'  /* on error, skip until ';' is read */

It is also useful to recover to the matching close-delimiter of an opening-delimiter that has already been parsed. Otherwise the close-delimiter will probably appear to be unmatched, and generate another, spurious error message:

 
primary:  '(' expr ')'
        | '(' error ')'
        ...
        ;

Error recovery strategies are necessarily guesses. When they guess wrong, one syntax error often leads to another. In the above example, the error recovery rule guesses that an error is due to bad input within one stmnt. Suppose that instead a spurious semicolon is inserted in the middle of a valid stmnt. After the error recovery rule recovers from the first error, another syntax error will be found straightaway, since the text following the spurious semicolon is also an invalid stmnt.

To prevent an outpouring of error messages, the parser will output no error message for another syntax error that happens shortly after the first; only after three consecutive input tokens have been successfully shifted will error messages resume.

Note that rules which accept the error token may have actions, just as any other rules can.

You can make error messages resume immediately by using the macro yyerrok in an action. If you do this in the error rule's action, no error messages will be suppressed. This macro requires no arguments; `yyerrok;' is a valid C statement.

The previous look-ahead token is reanalyzed immediately after an error. If this is unacceptable, then the macro yyclearin may be used to clear this token. Write the statement `yyclearin;' in the error rule's action.

For example, suppose that on a parse error, an error handling routine is called that advances the input stream to some point where parsing should once again commence. The next symbol returned by the lexical scanner is probably correct. The previous look-ahead token ought to be discarded with `yyclearin;'.
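
As an illustration (this fragment is not part of the manual's examples, and the routine resync_input is hypothetical), such an error rule could be written as:

stmnt:    error
            {
              resync_input ();  /* hypothetical routine that advances the
                                   input to a sensible restart point      */
              yyclearin;        /* discard the now-stale look-ahead token */
              yyerrok;          /* resume error messages immediately      */
            }
        ;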

The macro YYRECOVERING stands for an expression that has the value 1 when the parser is recovering from a syntax error, and 0 the rest of the time. A value of 1 indicates that error messages are currently suppressed for new syntax errors.



This document was generated by Frank B. Brokken on January, 28 2005 using texi2html bisonc++-4.13.01/documentation/html/bison_5.html0000644000175000017500000016406412633316117020326 0ustar frankfrank Bison 2.21.5: Examples

2. Examples

Now we show and explain three sample programs written using Bison: a reverse polish notation calculator, an algebraic (infix) notation calculator, and a multi-function calculator. All three have been tested under BSD Unix 4.3; each produces a usable, though limited, interactive desk-top calculator.

These examples are simple, but Bison grammars for real programming languages are written the same way. You can copy these examples out of the Info file and into a source file to try them.

2.1 Reverse Polish Notation Calculator  Reverse polish notation calculator; a first example with no operator precedence.
2.2 Infix Notation Calculator: calc  Infix (algebraic) notation calculator. Operator precedence is introduced.
2.3 Simple Error Recovery  Continuing after syntax errors.
2.4 Multi-Function Calculator: mfcalc  Calculator with memory and trig functions. It uses multiple data-types for semantic values.
2.5 Exercises  Ideas for improving the multi-function calculator.



2.1 Reverse Polish Notation Calculator

The first example is that of a simple double-precision reverse polish notation calculator (a calculator using postfix operators). This example provides a good starting point, since operator precedence is not an issue. The second example will illustrate how operator precedence is handled.

The source code for this calculator is named `rpcalc.y'. The `.y' extension is a convention used for Bison input files.

2.1.1 Declarations for rpcalc  Bison and C declarations for rpcalc.
2.1.2 Grammar Rules for rpcalc  Grammar Rules for rpcalc, with explanation.
2.1.3 The rpcalc Lexical Analyzer  The lexical analyzer.
2.1.4 The Controlling Function  The controlling function.
2.1.5 The Error Reporting Routine  The error reporting function.
2.1.6 Running Bison to Make the Parser  Running Bison on the grammar file.
2.1.7 Compiling the Parser File  Run the C compiler on the output code.



2.1.1 Declarations for rpcalc

Here are the C and Bison declarations for the reverse polish notation calculator. As in C, comments are placed between `/*...*/'.

 
/* Reverse polish notation calculator. */

%{
#define YYSTYPE double
#include <math.h>
%}

%token NUM

%% /* Grammar rules and actions follow */

The C declarations section (see section The C Declarations Section) contains two preprocessor directives.

The #define directive defines the macro YYSTYPE, thus specifying the C data type for semantic values of both tokens and groupings (see section Data Types of Semantic Values). The Bison parser will use whatever type YYSTYPE is defined as; if you don't define it, int is the default. Because we specify double, each token and each expression has an associated value, which is a floating point number.

The #include directive is used to declare the exponentiation function pow.

The second section, Bison declarations, provides information to Bison about the token types (see section The Bison Declarations Section). Each terminal symbol that is not a single-character literal must be declared here. (Single-character literals normally don't need to be declared.) In this example, all the arithmetic operators are designated by single-character literals, so the only terminal symbol that needs to be declared is NUM, the token type for numeric constants.



2.1.2 Grammar Rules for rpcalc

Here are the grammar rules for the reverse polish notation calculator.

 
input:    /* empty */
        | input line
;

line:     '\n'
        | exp '\n'  { printf ("\t%.10g\n", $1); }
;

exp:      NUM             { $$ = $1;         }
        | exp exp '+'     { $$ = $1 + $2;    }
        | exp exp '-'     { $$ = $1 - $2;    }
        | exp exp '*'     { $$ = $1 * $2;    }
        | exp exp '/'     { $$ = $1 / $2;    }
      /* Exponentiation */
        | exp exp '^'     { $$ = pow ($1, $2); }
      /* Unary minus    */
        | exp 'n'         { $$ = -$1;        }
;
%%

The groupings of the rpcalc "language" defined here are the expression (given the name exp), the line of input (line), and the complete input transcript (input). Each of these nonterminal symbols has several alternate rules, joined by the `|' punctuator which is read as "or". The following sections explain what these rules mean.

The semantics of the language is determined by the actions taken when a grouping is recognized. The actions are the C code that appears inside braces. See section 3.5.3 Actions.

You must specify these actions in C, but Bison provides the means for passing semantic values between the rules. In each action, the pseudo-variable $$ stands for the semantic value for the grouping that the rule is going to construct. Assigning a value to $$ is the main job of most actions. The semantic values of the components of the rule are referred to as $1, $2, and so on.

2.1.2.1 Explanation of input  
2.1.2.2 Explanation of line  
2.1.2.3 Explanation of expr  



2.1.2.1 Explanation of input

Consider the definition of input:

 
input:    /* empty */
        | input line
;

This definition reads as follows: "A complete input is either an empty string, or a complete input followed by an input line". Notice that "complete input" is defined in terms of itself. This definition is said to be left recursive since input appears always as the leftmost symbol in the sequence. See section Recursive Rules.

The first alternative is empty because there are no symbols between the colon and the first `|'; this means that input can match an empty string of input (no tokens). We write the rules this way because it is legitimate to type Ctrl-d right after you start the calculator. It's conventional to put an empty alternative first and write the comment `/* empty */' in it.

The second alternate rule (input line) handles all nontrivial input. It means, "After reading any number of lines, read one more line if possible." The left recursion makes this rule into a loop. Since the first alternative matches empty input, the loop can be executed zero or more times.

The parser function yyparse continues to process input until a grammatical error is seen or the lexical analyzer says there are no more input tokens; we will arrange for the latter to happen at end of file.



2.1.2.2 Explanation of line

Now consider the definition of line:

 
line:     '\n'
        | exp '\n'  { printf ("\t%.10g\n", $1); }
;

The first alternative is a token which is a newline character; this means that rpcalc accepts a blank line (and ignores it, since there is no action). The second alternative is an expression followed by a newline. This is the alternative that makes rpcalc useful. The semantic value of the exp grouping is the value of $1 because the exp in question is the first symbol in the alternative. The action prints this value, which is the result of the computation the user asked for.

This action is unusual because it does not assign a value to $$. As a consequence, the semantic value associated with the line is uninitialized (its value will be unpredictable). This would be a bug if that value were ever used, but we don't use it: once rpcalc has printed the value of the user's input line, that value is no longer needed.



2.1.2.3 Explanation of expr

The exp grouping has several rules, one for each kind of expression. The first rule handles the simplest expressions: those that are just numbers. The second handles an addition-expression, which looks like two expressions followed by a plus-sign. The third handles subtraction, and so on.

 
exp:      NUM
        | exp exp '+'     { $$ = $1 + $2;    }
        | exp exp '-'     { $$ = $1 - $2;    }
        ...
        ;

We have used `|' to join all the rules for exp, but we could equally well have written them separately:

 
exp:      NUM ;
exp:      exp exp '+'     { $$ = $1 + $2;    } ;
exp:      exp exp '-'     { $$ = $1 - $2;    } ;
        ...

Most of the rules have actions that compute the value of the expression in terms of the value of its parts. For example, in the rule for addition, $1 refers to the first component exp and $2 refers to the second one. The third component, '+', has no meaningful associated semantic value, but if it had one you could refer to it as $3. When yyparse recognizes a sum expression using this rule, the sum of the two subexpressions' values is produced as the value of the entire expression. See section 3.5.3 Actions.

You don't have to give an action for every rule. When a rule has no action, Bison by default copies the value of $1 into $$. This is what happens in the first rule (the one that uses NUM).

The formatting shown here is the recommended convention, but Bison does not require it. You can add or change whitespace as much as you wish. For example, this:

 
exp   : NUM | exp exp '+' {$$ = $1 + $2; } | ...

means the same thing as this:

 
exp:      NUM
        | exp exp '+'    { $$ = $1 + $2; }
        | ...

The latter, however, is much more readable.



2.1.3 The rpcalc Lexical Analyzer

The lexical analyzer's job is low-level parsing: converting characters or sequences of characters into tokens. The Bison parser gets its tokens by calling the lexical analyzer. See section The Lexical Analyzer Function yylex.

Only a simple lexical analyzer is needed for the RPN calculator. This lexical analyzer skips blanks and tabs, then reads in numbers as double and returns them as NUM tokens. Any other character that isn't part of a number is a separate token. Note that the token-code for such a single-character token is the character itself.

The return value of the lexical analyzer function is a numeric code which represents a token type. The same text used in Bison rules to stand for this token type is also a C expression for the numeric code for the type. This works in two ways. If the token type is a character literal, then its numeric code is the ASCII code for that character; you can use the same character literal in the lexical analyzer to express the number. If the token type is an identifier, that identifier is defined by Bison as a C macro whose definition is the appropriate number. In this example, therefore, NUM becomes a macro for yylex to use.

The semantic value of the token (if it has one) is stored into the global variable yylval, which is where the Bison parser will look for it. (The C data type of yylval is YYSTYPE, which was defined at the beginning of the grammar; see section Declarations for rpcalc.)

A token type code of zero is returned if the end-of-file is encountered. (Bison recognizes any nonpositive value as indicating the end of the input.)

Here is the code for the lexical analyzer:

 
/* Lexical analyzer returns a double floating point 
   number on the stack and the token NUM, or the ASCII
   character read if not a number.  Skips all blanks
   and tabs, returns 0 for EOF. */

#include <ctype.h>

yylex ()
{
  int c;

  /* skip white space  */
  while ((c = getchar ()) == ' ' || c == '\t')  
    ;
  /* process numbers   */
  if (c == '.' || isdigit (c))                
    {
      ungetc (c, stdin);
      scanf ("%lf", &yylval);
      return NUM;
    }
  /* return end-of-file  */
  if (c == EOF)                            
    return 0;
  /* return single chars */
  return c;                                
}



2.1.4 The Controlling Function

In keeping with the spirit of this example, the controlling function is kept to the bare minimum. The only requirement is that it call yyparse to start the process of parsing.

 
main ()
{
  yyparse ();
}



2.1.5 The Error Reporting Routine

When yyparse detects a syntax error, it calls the error reporting function yyerror to print an error message (usually but not always "parse error"). It is up to the programmer to supply yyerror (see section Parser C-Language Interface), so here is the definition we will use:

 
#include <stdio.h>

yyerror (s)  /* Called by yyparse on error */
     char *s;
{
  printf ("%s\n", s);
}

After yyerror returns, the Bison parser may recover from the error and continue parsing if the grammar contains a suitable error rule (see section 6. Error Recovery). Otherwise, yyparse returns nonzero. We have not written any error rules in this example, so any invalid input will cause the calculator program to exit. This is not clean behavior for a real calculator, but it is adequate in the first example.



2.1.6 Running Bison to Make the Parser

Before running Bison to produce a parser, we need to decide how to arrange all the source code in one or more source files. For such a simple example, the easiest thing is to put everything in one file. The definitions of yylex, yyerror and main go at the end, in the "additional C code" section of the file (see section The Overall Layout of a Bison Grammar).

For a large project, you would probably have several source files, and use make to arrange to recompile them.

With all the source in a single file, you use the following command to convert it into a parser file:

 
bison file_name.y

In this example the file was called `rpcalc.y' (for "Reverse Polish CALCulator"). Bison produces a file named `file_name.tab.c', removing the `.y' from the original file name. The file output by Bison contains the source code for yyparse. The additional functions in the input file (yylex, yyerror and main) are copied verbatim to the output.



2.1.7 Compiling the Parser File

Here is how to compile and run the parser file:

 
# List files in current directory.
% ls
rpcalc.tab.c  rpcalc.y

# Compile the Bison parser.
# `-lm' tells compiler to search math library for pow.
% cc rpcalc.tab.c -lm -o rpcalc

# List files again.
% ls
rpcalc  rpcalc.tab.c  rpcalc.y

The file `rpcalc' now contains the executable code. Here is an example session using rpcalc.

 
% rpcalc
4 9 +
13
3 7 + 3 4 5 *+-
-13
3 7 + 3 4 5 * + - n              Note the unary minus, `n'
13
5 6 / 4 n +
-3.166666667
3 4 ^                            Exponentiation
81
^D                               End-of-file indicator
%



2.2 Infix Notation Calculator: calc

We now modify rpcalc to handle infix operators instead of postfix. Infix notation involves the concept of operator precedence and the need for parentheses nested to arbitrary depth. Here is the Bison code for `calc.y', an infix desk-top calculator.

 
/* Infix notation calculator--calc */

%{
#define YYSTYPE double
#include <math.h>
%}

/* BISON Declarations */
%token NUM
%left '-' '+'
%left '*' '/'
%left NEG     /* negation--unary minus */
%right '^'    /* exponentiation        */

/* Grammar follows */
%%
input:    /* empty string */
        | input line
;

line:     '\n'
        | exp '\n'  { printf ("\t%.10g\n", $1); }
;

exp:      NUM                { $$ = $1;         }
        | exp '+' exp        { $$ = $1 + $3;    }
        | exp '-' exp        { $$ = $1 - $3;    }
        | exp '*' exp        { $$ = $1 * $3;    }
        | exp '/' exp        { $$ = $1 / $3;    }
        | '-' exp  %prec NEG { $$ = -$2;        }
        | exp '^' exp        { $$ = pow ($1, $3); }
        | '(' exp ')'        { $$ = $2;         }
;
%%

The functions yylex, yyerror and main can be the same as before.

There are two important new features shown in this code.

In the second section (Bison declarations), %left declares token types and says they are left-associative operators. The declarations %left and %right (right associativity) take the place of %token which is used to declare a token type name without associativity. (These tokens are single-character literals, which ordinarily don't need to be declared. We declare them here to specify the associativity.)

Operator precedence is determined by the line ordering of the declarations; the higher the line number of the declaration (lower on the page or screen), the higher the precedence. Hence, exponentiation has the highest precedence, unary minus (NEG) is next, followed by `*' and `/', and so on. See section Operator Precedence.

The other important new feature is the %prec in the grammar section for the unary minus operator. The %prec simply instructs Bison that the rule `| '-' exp' has the same precedence as NEG---in this case the next-to-highest. See section Context-Dependent Precedence.

Here is a sample run of `calc.y':

 
% calc
4 + 4.5 - (34/(8*3+-3))
6.880952381
-56 + 2
-54
3 ^ 2
9



2.3 Simple Error Recovery

Up to this point, this manual has not addressed the issue of error recovery---how to continue parsing after the parser detects a syntax error. All we have handled is error reporting with yyerror. Recall that by default yyparse returns after calling yyerror. This means that an erroneous input line causes the calculator program to exit. Now we show how to rectify this deficiency.

The Bison language itself includes the reserved word error, which may be included in the grammar rules. In the example below it has been added to one of the alternatives for line:

 
line:     '\n'
        | exp '\n'   { printf ("\t%.10g\n", $1); }
        | error '\n' { yyerrok;                  }
;

This addition to the grammar allows for simple error recovery in the event of a parse error. If an expression that cannot be evaluated is read, the error will be recognized by the third rule for line, and parsing will continue. (The yyerror function is still called upon to print its message as well.) The action executes the statement yyerrok, a macro defined automatically by Bison; its meaning is that error recovery is complete (see section 6. Error Recovery). Note the difference between yyerrok and yyerror; neither one is a misprint.

This form of error recovery deals with syntax errors. There are other kinds of errors; for example, division by zero, which raises an exception signal that is normally fatal. A real calculator program must handle this signal and use longjmp to return to main and resume parsing input lines; it would also have to discard the rest of the current line of input. We won't discuss this issue further because it is not specific to Bison programs.



2.4 Multi-Function Calculator: mfcalc

Now that the basics of Bison have been discussed, it is time to move on to a more advanced problem. The above calculators provided only five functions, `+', `-', `*', `/' and `^'. It would be nice to have a calculator that provides other mathematical functions such as sin, cos, etc.

It is easy to add new operators to the infix calculator as long as they are only single-character literals. The lexical analyzer yylex passes back all non-number characters as tokens, so new grammar rules suffice for adding a new operator. But we want something more flexible: built-in functions whose syntax has this form:

 
function_name (argument)

At the same time, we will add memory to the calculator, by allowing you to create named variables, store values in them, and use them later. Here is a sample session with the multi-function calculator:

 
% mfcalc
pi = 3.141592653589
3.1415926536
sin(pi)
0.0000000000
alpha = beta1 = 2.3
2.3000000000
alpha
2.3000000000
ln(alpha)
0.8329091229
exp(ln(beta1))
2.3000000000
%

Note that multiple assignment and nested function calls are permitted.

2.4.1 Declarations for mfcalc  Bison declarations for multi-function calculator.
2.4.2 Grammar Rules for mfcalc  Grammar rules for the calculator.
2.4.3 The mfcalc Symbol Table  Symbol table management subroutines.



2.4.1 Declarations for mfcalc

Here are the C and Bison declarations for the multi-function calculator.

 
%{
#include <math.h>  /* For math functions, cos(), sin(), etc. */
#include "calc.h"  /* Contains definition of `symrec'        */
%}
%union {
double     val;  /* For returning numbers.                   */
symrec  *tptr;   /* For returning symbol-table pointers      */
}

%token <val>  NUM        /* Simple double precision number   */
%token <tptr> VAR FNCT   /* Variable and Function            */
%type  <val>  exp

%right '='
%left '-' '+'
%left '*' '/'
%left NEG     /* Negation--unary minus */
%right '^'    /* Exponentiation        */

/* Grammar follows */

%%

The above grammar introduces only two new features of the Bison language. These features allow semantic values to have various data types (see section More Than One Value Type).

The %union declaration specifies the entire list of possible types; this is instead of defining YYSTYPE. The allowable types are now double-floats (for exp and NUM) and pointers to entries in the symbol table. See section The Collection of Value Types.

Since values can now have various types, it is necessary to associate a type with each grammar symbol whose semantic value is used. These symbols are NUM, VAR, FNCT, and exp. Their declarations are augmented with information about their data type (placed between angle brackets).

The Bison construct %type is used for declaring nonterminal symbols, just as %token is used for declaring token types. We have not used %type before because nonterminal symbols are normally declared implicitly by the rules that define them. But exp must be declared explicitly so we can specify its value type. See section Nonterminal Symbols.



2.4.2 Grammar Rules for mfcalc

Here are the grammar rules for the multi-function calculator. Most of them are copied directly from calc; three rules, those which mention VAR or FNCT, are new.

 
input:   /* empty */
        | input line
;

line:
          '\n'
        | exp '\n'   { printf ("\t%.10g\n", $1); }
        | error '\n' { yyerrok;                  }
;

exp:      NUM                { $$ = $1;                         }
        | VAR                { $$ = $1->value.var;              }
        | VAR '=' exp        { $$ = $3; $1->value.var = $3;     }
        | FNCT '(' exp ')'   { $$ = (*($1->value.fnctptr))($3); }
        | exp '+' exp        { $$ = $1 + $3;                    }
        | exp '-' exp        { $$ = $1 - $3;                    }
        | exp '*' exp        { $$ = $1 * $3;                    }
        | exp '/' exp        { $$ = $1 / $3;                    }
        | '-' exp  %prec NEG { $$ = -$2;                        }
        | exp '^' exp        { $$ = pow ($1, $3);               }
        | '(' exp ')'        { $$ = $2;                         }
;
/* End of grammar */
%%



2.4.3 The mfcalc Symbol Table

The multi-function calculator requires a symbol table to keep track of the names and meanings of variables and functions. This doesn't affect the grammar rules (except for the actions) or the Bison declarations, but it requires some additional C functions for support.

The symbol table itself consists of a linked list of records. Its definition, which is kept in the header `calc.h', is as follows. It provides for either functions or variables to be placed in the table.

 
/* Data type for links in the chain of symbols.      */
struct symrec
{
  char *name;  /* name of symbol                     */
  int type;    /* type of symbol: either VAR or FNCT */
  union {
    double var;           /* value of a VAR          */
    double (*fnctptr)();  /* value of a FNCT         */
  } value;
  struct symrec *next;    /* link field              */
};

typedef struct symrec symrec;

/* The symbol table: a chain of `struct symrec'.     */
extern symrec *sym_table;

symrec *putsym ();
symrec *getsym ();

The new version of main includes a call to init_table, a function that initializes the symbol table. Here it is, and init_table as well:

 
#include <stdio.h>

main ()
{
  init_table ();
  yyparse ();
}

yyerror (s)  /* Called by yyparse on error */
     char *s;
{
  printf ("%s\n", s);
}

struct init
{
  char *fname;
  double (*fnct)();
};

struct init arith_fncts[]
  = {
      "sin", sin,
      "cos", cos,
      "atan", atan,
      "ln", log,
      "exp", exp,
      "sqrt", sqrt,
      0, 0
    };

/* The symbol table: a chain of `struct symrec'.  */
symrec *sym_table = (symrec *)0;

init_table ()  /* puts arithmetic functions in table. */
{
  int i;
  symrec *ptr;
  for (i = 0; arith_fncts[i].fname != 0; i++)
    {
      ptr = putsym (arith_fncts[i].fname, FNCT);
      ptr->value.fnctptr = arith_fncts[i].fnct;
    }
}

By simply editing the initialization list and adding the necessary include files, you can add additional functions to the calculator.

Two important functions allow look-up and installation of symbols in the symbol table. The function putsym is passed a name and the type (VAR or FNCT) of the object to be installed. The object is linked to the front of the list, and a pointer to the object is returned. The function getsym is passed the name of the symbol to look up. If found, a pointer to that symbol is returned; otherwise zero is returned.

 
symrec *
putsym (sym_name,sym_type)
     char *sym_name;
     int sym_type;
{
  symrec *ptr;
  ptr = (symrec *) malloc (sizeof (symrec));
  ptr->name = (char *) malloc (strlen (sym_name) + 1);
  strcpy (ptr->name,sym_name);
  ptr->type = sym_type;
  ptr->value.var = 0; /* set value to 0 even if fctn.  */
  ptr->next = (struct symrec *)sym_table;
  sym_table = ptr;
  return ptr;
}

symrec *
getsym (sym_name)
     char *sym_name;
{
  symrec *ptr;
  for (ptr = sym_table; ptr != (symrec *) 0;
       ptr = (symrec *)ptr->next)
    if (strcmp (ptr->name,sym_name) == 0)
      return ptr;
  return 0;
}

The function yylex must now recognize variables, numeric values, and the single-character arithmetic operators. Strings of alphanumeric characters with a leading nondigit are recognized as either variables or functions depending on what the symbol table says about them.

The string is passed to getsym for look up in the symbol table. If the name appears in the table, a pointer to its location and its type (VAR or FNCT) is returned to yyparse. If it is not already in the table, then it is installed as a VAR using putsym. Again, a pointer and its type (which must be VAR) is returned to yyparse.

No change is needed in the handling of numeric values and arithmetic operators in yylex.

 
#include <ctype.h>
yylex ()
{
  int c;

  /* Ignore whitespace, get first nonwhite character.  */
  while ((c = getchar ()) == ' ' || c == '\t');

  if (c == EOF)
    return 0;

  /* Char starts a number => parse the number.         */
  if (c == '.' || isdigit (c))
    {
      ungetc (c, stdin);
      scanf ("%lf", &yylval.val);
      return NUM;
    }

  /* Char starts an identifier => read the name.       */
  if (isalpha (c))
    {
      symrec *s;
      static char *symbuf = 0;
      static int length = 0;
      int i;

      /* Initially make the buffer long enough
         for a 40-character symbol name.  */
      if (length == 0)
        length = 40, symbuf = (char *)malloc (length + 1);

      i = 0;
      do
        {
          /* If buffer is full, make it bigger.        */
          if (i == length)
            {
              length *= 2;
              symbuf = (char *)realloc (symbuf, length + 1);
            }
          /* Add this character to the buffer.         */
          symbuf[i++] = c;
          /* Get another character.                    */
          c = getchar ();
        }
      while (c != EOF && isalnum (c));

      ungetc (c, stdin);
      symbuf[i] = '\0';

      s = getsym (symbuf);
      if (s == 0)
        s = putsym (symbuf, VAR);
      yylval.tptr = s;
      return s->type;
    }

  /* Any other character is a token by itself.        */
  return c;
}

This program is both powerful and flexible. You may easily add new functions, and it is a simple job to modify this code to install predefined variables such as pi or e as well.
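
For instance, pi and e could be installed as predefined variables with an initialization table analogous to arith_fncts. The fragment below is only a sketch and is not part of the original example; the names var_init, const_vars and init_constants are invented here:

struct var_init
{
  char *vname;
  double value;
};

struct var_init const_vars[]
  = {
      "pi", 3.14159265358979,
      "e",  2.71828182845905,
      0, 0
    };

init_constants ()  /* called from main, next to init_table */
{
  int i;
  symrec *ptr;
  for (i = 0; const_vars[i].vname != 0; i++)
    {
      ptr = putsym (const_vars[i].vname, VAR);
      ptr->value.var = const_vars[i].value;
    }
}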



2.5 Exercises

  1. Add some new functions from `math.h' to the initialization list.

  2. Add another array that contains constants and their values. Then modify init_table to add these constants to the symbol table. It will be easiest to give the constants type VAR.

  3. Make the program report an error if the user refers to an uninitialized variable in any way except to store a value in it.



This document was generated by Frank B. Brokken on January, 28 2005 using texi2html bisonc++-4.13.01/documentation/html/bison_14.html0000644000175000017500000002376412633316117020407 0ustar frankfrank Bison 2.21.5: Glossary

B. Glossary

Backus-Naur Form (BNF)
Formal method of specifying context-free grammars. BNF was first used in the ALGOL-60 report, 1963. See section Languages and Context-Free Grammars.

Context-free grammars
Grammars specified as rules that can be applied regardless of context. Thus, if there is a rule which says that an integer can be used as an expression, integers are allowed anywhere an expression is permitted. See section Languages and Context-Free Grammars.

Dynamic allocation
Allocation of memory that occurs during execution, rather than at compile time or on entry to a function.

Empty string
Analogous to the empty set in set theory, the empty string is a character string of length zero.

Finite-state stack machine
A "machine" that has discrete states in which it is said to exist at each instant in time. As input to the machine is processed, the machine moves from state to state as specified by the logic of the machine. In the case of the parser, the input is the language being parsed, and the states correspond to various stages in the grammar rules. See section The Bison Parser Algorithm .

Grouping
A language construct that is (in general) grammatically divisible; for example, `expression' or `declaration' in C. See section Languages and Context-Free Grammars.

Infix operator
An arithmetic operator that is placed between the operands on which it performs some operation.

Input stream
A continuous flow of data between devices or programs.

Language construct
One of the typical usage schemas of the language. For example, one of the constructs of the C language is the if statement. See section Languages and Context-Free Grammars.

Left associativity
Operators having left associativity are analyzed from left to right: `a+b+c' first computes `a+b' and then combines with `c'. See section Operator Precedence.

Left recursion
A rule whose result symbol is also its first component symbol; for example, `expseq1 : expseq1 ',' exp;'. See section Recursive Rules.

Left-to-right parsing
Parsing a sentence of a language by analyzing it token by token from left to right. See section The Bison Parser Algorithm .

Lexical analyzer (scanner)
A function that reads an input stream and returns tokens one by one. See section The Lexical Analyzer Function yylex.

Lexical tie-in
A flag, set by actions in the grammar rules, which alters the way tokens are parsed. See section 7.2 Lexical Tie-ins.

Literal string token
A token which consists of two or more fixed characters. See section 3.2 Symbols, Terminal and Nonterminal.

Look-ahead token
A token already read but not yet shifted. See section Look-Ahead Tokens.

LALR(1)
The class of context-free grammars that Bison (like most other parser generators) can handle; a subset of LR(1). See section Mysterious Reduce/Reduce Conflicts.

LR(1)
The class of context-free grammars in which at most one token of look-ahead is needed to disambiguate the parsing of any piece of input.

Nonterminal symbol
A grammar symbol standing for a grammatical construct that can be expressed through rules in terms of smaller constructs; in other words, a construct that is not a token. See section 3.2 Symbols, Terminal and Nonterminal.

Parse error
An error encountered during parsing of an input stream due to invalid syntax. See section 6. Error Recovery.

Parser
A function that recognizes valid sentences of a language by analyzing the syntax structure of a set of tokens passed to it from a lexical analyzer.

Postfix operator
An arithmetic operator that is placed after the operands upon which it performs some operation.

Reduction
Replacing a string of nonterminals and/or terminals with a single nonterminal, according to a grammar rule. See section The Bison Parser Algorithm .

Reentrant
A reentrant subprogram is a subprogram which can be invoked any number of times in parallel, without interference between the various invocations. See section A Pure (Reentrant) Parser.

Reverse polish notation
A language in which all operators are postfix operators.

Right recursion
A rule whose result symbol is also its last component symbol; for example, `expseq1: exp ',' expseq1;'. See section Recursive Rules.

Semantics
In computer languages, the semantics are specified by the actions taken for each instance of the language, i.e., the meaning of each statement. See section Defining Language Semantics.

Shift
A parser is said to shift when it makes the choice of analyzing further input from the stream rather than reducing immediately some already-recognized rule. See section The Bison Parser Algorithm .

Single-character literal
A single character that is recognized and interpreted as is. See section From Formal Rules to Bison Input.

Start symbol
The nonterminal symbol that stands for a complete valid utterance in the language being parsed. The start symbol is usually listed as the first nonterminal symbol in a language specification. See section The Start-Symbol.

Symbol table
A data structure where symbol names and associated data are stored during parsing to allow for recognition and use of existing information in repeated uses of a symbol. See section 2.4 Multi-Function Calculator: mfcalc.

Token
A basic, grammatically indivisible unit of a language. The symbol that describes a token in the grammar is a terminal symbol. The input of the Bison parser is a stream of tokens which comes from the lexical analyzer. See section 3.2 Symbols, Terminal and Nonterminal.

Terminal symbol
A grammar symbol that has no rules in the grammar and therefore is grammatically indivisible. The piece of text it represents is a token. See section Languages and Context-Free Grammars.



This document was generated by Frank B. Brokken on January, 28 2005 using texi2html bisonc++-4.13.01/documentation/html/bison_8.html0000644000175000017500000012575412633316117020334 0ustar frankfrank Bison 2.21.5: Algorithm

5. The Bison Parser Algorithm

As Bison reads tokens, it pushes them onto a stack along with their semantic values. The stack is called the parser stack. Pushing a token is traditionally called shifting.

For example, suppose the infix calculator has read `1 + 5 *', with a `3' to come. The stack will have four elements, one for each token that was shifted.

But the stack does not always have an element for each token read. When the last n tokens and groupings shifted match the components of a grammar rule, they can be combined according to that rule. This is called reduction. Those tokens and groupings are replaced on the stack by a single grouping whose symbol is the result (left hand side) of that rule. Running the rule's action is part of the process of reduction, because this is what computes the semantic value of the resulting grouping.

For example, if the infix calculator's parser stack contains this:

 
1 + 5 * 3

and the next input token is a newline character, then the last three elements can be reduced to 15 via the rule:

 
expr: expr '*' expr;

Then the stack contains just these three elements:

 
1 + 15

At this point, another reduction can be made, resulting in the single value 16. Then the newline token can be shifted.

The parser tries, by shifts and reductions, to reduce the entire input down to a single grouping whose symbol is the grammar's start-symbol (see section Languages and Context-Free Grammars).

This kind of parser is known in the literature as a bottom-up parser.

5.1 Look-Ahead Tokens  Parser looks one token ahead when deciding what to do.
5.2 Shift/Reduce Conflicts  Conflicts: when either shifting or reduction is valid.
5.3 Operator Precedence  Operator precedence works by resolving conflicts.
5.4 Context-Dependent Precedence  When an operator's precedence depends on context.
5.5 Parser States  The parser is a finite-state-machine with stack.
5.6 Reduce/Reduce Conflicts  When two rules are applicable in the same situation.
5.7 Mysterious Reduce/Reduce Conflicts  Reduce/reduce conflicts that look unjustified.
5.8 Stack Overflow, and How to Avoid It  What happens when stack gets full. How to avoid it.



5.1 Look-Ahead Tokens

The Bison parser does not always reduce immediately as soon as the last n tokens and groupings match a rule. This is because such a simple strategy is inadequate to handle most languages. Instead, when a reduction is possible, the parser sometimes "looks ahead" at the next token in order to decide what to do.

When a token is read, it is not immediately shifted; first it becomes the look-ahead token, which is not on the stack. Now the parser can perform one or more reductions of tokens and groupings on the stack, while the look-ahead token remains off to the side. When no more reductions should take place, the look-ahead token is shifted onto the stack. This does not mean that all possible reductions have been done; depending on the token type of the look-ahead token, some rules may choose to delay their application.

Here is a simple case where look-ahead is needed. These three rules define expressions which contain binary addition operators and postfix unary factorial operators (`!'), and allow parentheses for grouping.

 
expr:     term '+' expr
        | term
        ;

term:     '(' expr ')'
        | term '!'
        | NUMBER
        ;

Suppose that the tokens `1 + 2' have been read and shifted; what should be done? If the following token is `)', then the first three tokens must be reduced to form an expr. This is the only valid course, because shifting the `)' would produce a sequence of symbols term ')', and no rule allows this.

If the following token is `!', then it must be shifted immediately so that `2 !' can be reduced to make a term. If instead the parser were to reduce before shifting, `1 + 2' would become an expr. It would then be impossible to shift the `!' because doing so would produce on the stack the sequence of symbols expr '!'. No rule allows that sequence.

The current look-ahead token is stored in the variable yychar. See section Special Features for Use in Actions.



5.2 Shift/Reduce Conflicts

Suppose we are parsing a language which has if-then and if-then-else statements, with a pair of rules like this:

 
if_stmt:
          IF expr THEN stmt
        | IF expr THEN stmt ELSE stmt
        ;

Here we assume that IF, THEN and ELSE are terminal symbols for specific keyword tokens.

When the ELSE token is read and becomes the look-ahead token, the contents of the stack (assuming the input is valid) are just right for reduction by the first rule. But it is also legitimate to shift the ELSE, because that would lead to eventual reduction by the second rule.

This situation, where either a shift or a reduction would be valid, is called a shift/reduce conflict. Bison is designed to resolve these conflicts by choosing to shift, unless otherwise directed by operator precedence declarations. To see the reason for this, let's contrast it with the other alternative.

Since the parser prefers to shift the ELSE, the result is to attach the else-clause to the innermost if-statement, making these two inputs equivalent:

 
if x then if y then win (); else lose;

if x then do; if y then win (); else lose; end;

But if the parser chose to reduce when possible rather than shift, the result would be to attach the else-clause to the outermost if-statement, making these two inputs equivalent:

 
if x then if y then win (); else lose;

if x then do; if y then win (); end; else lose;

The conflict exists because the grammar as written is ambiguous: either parsing of the simple nested if-statement is legitimate. The established convention is that these ambiguities are resolved by attaching the else-clause to the innermost if-statement; this is what Bison accomplishes by choosing to shift rather than reduce. (It would ideally be cleaner to write an unambiguous grammar, but that is very hard to do in this case.) This particular ambiguity was first encountered in the specifications of Algol 60 and is called the "dangling else" ambiguity.

To avoid warnings from Bison about predictable, legitimate shift/reduce conflicts, use the %expect n declaration. There will be no warning as long as the number of shift/reduce conflicts is exactly n. See section Suppressing Conflict Warnings.

The definition of if_stmt above is solely to blame for the conflict, but the conflict does not actually appear without additional rules. Here is a complete Bison input file that actually manifests the conflict:

 
%token IF THEN ELSE variable
%%
stmt:     expr
        | if_stmt
        ;

if_stmt:
          IF expr THEN stmt
        | IF expr THEN stmt ELSE stmt
        ;

expr:     variable
        ;
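
If this single conflict is anticipated, the warning can be silenced by adding a declaration to the file shown above; the following line is a sketch, not part of the original input file:

%expect 1    /* the dangling-else shift/reduce conflict is expected */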



5.3 Operator Precedence

Another situation where shift/reduce conflicts appear is in arithmetic expressions. Here shifting is not always the preferred resolution; the Bison declarations for operator precedence allow you to specify when to shift and when to reduce.

5.3.1 When Precedence is Needed  An example showing why precedence is needed.
5.3.2 Specifying Operator Precedence  How to specify precedence in Bison grammars.
5.3.3 Precedence Examples  How these features are used in the previous example.
5.3.4 How Precedence Works  How they work.



5.3.1 When Precedence is Needed

Consider the following ambiguous grammar fragment (ambiguous because the input `1 - 2 * 3' can be parsed in two different ways):

 
expr:     expr '-' expr
        | expr '*' expr
        | expr '<' expr
        | '(' expr ')'
        ...
        ;

Suppose the parser has seen the tokens `1', `-' and `2'; should it reduce them via the rule for the subtraction operator? It depends on the next token. Of course, if the next token is `)', we must reduce; shifting is invalid because no single rule can reduce the token sequence `- 2 )' or anything starting with that. But if the next token is `*' or `<', we have a choice: either shifting or reduction would allow the parse to complete, but with different results.

To decide which one Bison should do, we must consider the results. If the next operator token op is shifted, then it must be reduced first in order to permit another opportunity to reduce the difference. The result is (in effect) `1 - (2 op 3)'. On the other hand, if the subtraction is reduced before shifting op, the result is `(1 - 2) op 3'. Clearly, then, the choice of shift or reduce should depend on the relative precedence of the operators `-' and op: `*' should be shifted first, but not `<'.

What about input such as `1 - 2 - 5'; should this be `(1 - 2) - 5' or should it be `1 - (2 - 5)'? For most operators we prefer the former, which is called left association. The latter alternative, right association, is desirable for assignment operators. The choice of left or right association is a matter of whether the parser chooses to shift or reduce when the stack contains `1 - 2' and the look-ahead token is `-': shifting makes right-associativity.



5.3.2 Specifying Operator Precedence

Bison allows you to specify these choices with the operator precedence declarations %left and %right. Each such declaration contains a list of tokens, which are operators whose precedence and associativity is being declared. The %left declaration makes all those operators left-associative and the %right declaration makes them right-associative. A third alternative is %nonassoc, which declares that it is a syntax error to find the same operator twice "in a row".
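
A small sketch of all three declarations (the operators are taken from the surrounding examples; only the associativity matters here, since relative precedence, set by the order of the declarations, is discussed next):

 
%nonassoc '<'        /* so  a < b < c   is a syntax error      */
%left '-'            /* so  a - b - c   parses as (a - b) - c  */
%right '='           /* so  a = b = c   parses as a = (b = c)  */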

The relative precedence of different operators is controlled by the order in which they are declared. The first %left or %right declaration in the file declares the operators whose precedence is lowest, the next such declaration declares the operators whose precedence is a little higher, and so on.



5.3.3 Precedence Examples

In our example, we would want the following declarations:

 
%left '<'
%left '-'
%left '*'

In a more complete example, which supports other operators as well, we would declare them in groups of equal precedence. For example, '+' is declared with '-':

 
%left '<' '>' '=' NE LE GE
%left '+' '-'
%left '*' '/'

(Here NE and so on stand for the operators for "not equal" and so on. We assume that these tokens are more than one character long and therefore are represented by names, not character literals.)



5.3.4 How Precedence Works

The first effect of the precedence declarations is to assign precedence levels to the terminal symbols declared. The second effect is to assign precedence levels to certain rules: each rule gets its precedence from the last terminal symbol mentioned in the components. (You can also specify explicitly the precedence of a rule. See section Context-Dependent Precedence.)

Finally, the resolution of conflicts works by comparing the precedence of the rule being considered with that of the look-ahead token. If the token's precedence is higher, the choice is to shift. If the rule's precedence is higher, the choice is to reduce. If they have equal precedence, the choice is made based on the associativity of that precedence level. The verbose output file made by `-v' (see section Invoking Bison) says how each conflict was resolved.

Not all rules and not all tokens have precedence. If either the rule or the look-ahead token has no precedence, then the default is to shift.
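
As an illustration (a sketch; NUM stands for an assumed numeric token, and the rules echo the earlier ambiguous fragment):

 
%token NUM
%left '-'                    /* lower precedence                 */
%left '*'                    /* higher precedence                */
%%
expr:     expr '-' expr      /* rule precedence: that of '-'     */
        | expr '*' expr      /* rule precedence: that of '*'     */
        | NUM
        ;
/* With  expr '-' expr  on the stack:
     look-ahead '*': the token's precedence is higher -> shift;
     look-ahead '-': equal precedence, %left          -> reduce. */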



5.4 Context-Dependent Precedence

Often the precedence of an operator depends on the context. This sounds outlandish at first, but it is really very common. For example, a minus sign typically has a very high precedence as a unary operator, and a somewhat lower precedence (lower than multiplication) as a binary operator.

The Bison precedence declarations, %left, %right and %nonassoc, can only be used once for a given token; so a token has only one precedence declared in this way. For context-dependent precedence, you need to use an additional mechanism: the %prec modifier for rules.

The %prec modifier declares the precedence of a particular rule by specifying a terminal symbol whose precedence should be used for that rule. It's not necessary for that symbol to appear otherwise in the rule. The modifier's syntax is:

 
%prec terminal-symbol

and it is written after the components of the rule. Its effect is to assign the rule the precedence of terminal-symbol, overriding the precedence that would be deduced for it in the ordinary way. The altered rule precedence then affects how conflicts involving that rule are resolved (see section Operator Precedence).

Here is how %prec solves the problem of unary minus. First, declare a precedence for a fictitious terminal symbol named UMINUS. There are no tokens of this type, but the symbol serves to stand for its precedence:

 
...
%left '+' '-'
%left '*'
%left UMINUS

Now the precedence of UMINUS can be used in specific rules:

 
exp:    ...
        | exp '-' exp
        ...
        | '-' exp %prec UMINUS



5.5 Parser States

The function yyparse is implemented using a finite-state machine. The values pushed on the parser stack are not simply token type codes; they represent the entire sequence of terminal and nonterminal symbols at or near the top of the stack. The current state collects all the information about previous input which is relevant to deciding what to do next.

Each time a look-ahead token is read, the current parser state together with the type of look-ahead token are looked up in a table. This table entry can say, "Shift the look-ahead token." In this case, it also specifies the new parser state, which is pushed onto the top of the parser stack. Or it can say, "Reduce using rule number n." This means that a certain number of tokens or groupings are taken off the top of the stack, and replaced by one grouping. In other words, that number of states are popped from the stack, and one new state is pushed.

There is one other alternative: the table can say that the look-ahead token is erroneous in the current state. This causes error processing to begin (see section 6. Error Recovery).



5.6 Reduce/Reduce Conflicts

A reduce/reduce conflict occurs if there are two or more rules that apply to the same sequence of input. This usually indicates a serious error in the grammar.

For example, here is an erroneous attempt to define a sequence of zero or more word groupings.

 
sequence: /* empty */
                { printf ("empty sequence\n"); }
        | maybeword
        | sequence word
                { printf ("added word %s\n", $2); }
        ;

maybeword: /* empty */
                { printf ("empty maybeword\n"); }
        | word
                { printf ("single word %s\n", $1); }
        ;

The error is an ambiguity: there is more than one way to parse a single word into a sequence. It could be reduced to a maybeword and then into a sequence via the second rule. Alternatively, nothing-at-all could be reduced into a sequence via the first rule, and this could be combined with the word using the third rule for sequence.

There is also more than one way to reduce nothing-at-all into a sequence. This can be done directly via the first rule, or indirectly via maybeword and then the second rule.

You might think that this is a distinction without a difference, because it does not change whether any particular input is valid or not. But it does affect which actions are run. One parsing order runs the second rule's action; the other runs the first rule's action and the third rule's action. In this example, the output of the program changes.

Bison resolves a reduce/reduce conflict by choosing to use the rule that appears first in the grammar, but it is very risky to rely on this. Every reduce/reduce conflict must be studied and usually eliminated. Here is the proper way to define sequence:

 
sequence: /* empty */
                { printf ("empty sequence\n"); }
        | sequence word
                { printf ("added word %s\n", $2); }
        ;

Here is another common error that yields a reduce/reduce conflict:

 
sequence: /* empty */
        | sequence words
        | sequence redirects
        ;

words:    /* empty */
        | words word
        ;

redirects:/* empty */
        | redirects redirect
        ;

The intention here is to define a sequence which can contain either word or redirect groupings. The individual definitions of sequence, words and redirects are error-free, but the three together make a subtle ambiguity: even an empty input can be parsed in infinitely many ways!

Consider: nothing-at-all could be a words. Or it could be two words in a row, or three, or any number. It could equally well be a redirects, or two, or any number. Or it could be a words followed by three redirects and another words. And so on.

Here are two ways to correct these rules. First, to make it a single level of sequence:

 
sequence: /* empty */
        | sequence word
        | sequence redirect
        ;

Second, to prevent either a words or a redirects from being empty:

 
sequence: /* empty */
        | sequence words
        | sequence redirects
        ;

words:    word
        | words word
        ;

redirects:redirect
        | redirects redirect
        ;



5.7 Mysterious Reduce/Reduce Conflicts

Sometimes reduce/reduce conflicts can occur that don't look warranted. Here is an example:

 
%token ID

%%
def:    param_spec return_spec ','
        ;
param_spec:
             type
        |    name_list ':' type
        ;
return_spec:
             type
        |    name ':' type
        ;
type:        ID
        ;
name:        ID
        ;
name_list:
             name
        |    name ',' name_list
        ;

It would seem that this grammar can be parsed with only a single token of look-ahead: when a param_spec is being read, an ID is a name if a comma or colon follows, or a type if another ID follows. In other words, this grammar is LR(1).

However, Bison, like most parser generators, cannot actually handle all LR(1) grammars. In this grammar, two contexts, that after an ID at the beginning of a param_spec and likewise at the beginning of a return_spec, are similar enough that Bison assumes they are the same. They appear similar because the same set of rules would be active--the rule for reducing to a name and that for reducing to a type. Bison is unable to determine at that stage of processing that the rules would require different look-ahead tokens in the two contexts, so it makes a single parser state for them both. Combining the two contexts causes a conflict later. In parser terminology, this occurrence means that the grammar is not LALR(1).

In general, it is better to fix deficiencies than to document them. But this particular deficiency is intrinsically hard to fix; parser generators that can handle LR(1) grammars are hard to write and tend to produce parsers that are very large. In practice, Bison is more useful as it is now.

When the problem arises, you can often fix it by identifying the two parser states that are being confused, and adding something to make them look distinct. In the above example, adding one rule to return_spec as follows makes the problem go away:

 
%token BOGUS
...
%%
...
return_spec:
             type
        |    name ':' type
        /* This rule is never used.  */
        |    ID BOGUS
        ;

This corrects the problem because it introduces the possibility of an additional active rule in the context after the ID at the beginning of return_spec. This rule is not active in the corresponding context in a param_spec, so the two contexts receive distinct parser states. As long as the token BOGUS is never generated by yylex, the added rule cannot alter the way actual input is parsed.

In this particular example, there is another way to solve the problem: rewrite the rule for return_spec to use ID directly instead of via name. This also causes the two confusing contexts to have different sets of active rules, because the one for return_spec activates the altered rule for return_spec rather than the one for name.

 
param_spec:
             type
        |    name_list ':' type
        ;
return_spec:
             type
        |    ID ':' type
        ;



5.8 Stack Overflow, and How to Avoid It

The Bison parser stack can overflow if too many tokens are shifted and not reduced. When this happens, the parser function yyparse returns a nonzero value, pausing only to call yyerror to report the overflow.

By defining the macro YYMAXDEPTH, you can control how deep the parser stack can become before a stack overflow occurs. Define the macro with a value that is an integer. This value is the maximum number of tokens that can be shifted (and not reduced) before overflow. It must be a constant expression whose value is known at compile time.

The stack space allowed is not necessarily allocated. If you specify a large value for YYMAXDEPTH, the parser actually allocates a small stack at first, and then makes it bigger by stages as needed. This increasing allocation happens automatically and silently. Therefore, you do not need to make YYMAXDEPTH painfully small merely to save space for ordinary inputs that do not need much stack.

The default value of YYMAXDEPTH, if you do not define it, is 10000.

You can control how much stack is allocated initially by defining the macro YYINITDEPTH. This value too must be a compile-time constant integer. The default is 200.
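
A minimal sketch of overriding both limits from the C declarations section (`%{ ... %}') of a grammar file; the values 40000 and 500 are arbitrary illustrations:

 
%{
#define YYMAXDEPTH  40000   /* tokens that may be shifted before overflow */
#define YYINITDEPTH   500   /* size of the initially allocated stack      */
%}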



This document was generated by Frank B. Brokken on January, 28 2005 using texi2html bisonc++-4.13.01/documentation/html/bison_abt.html0000644000175000017500000000765312633316117020730 0ustar frankfrank Bison 2.21.5: About this document

About this document

This document was generated by using texi2html

The buttons in the navigation panels have the following meaning:

Button       Name          Go to                                           From 1.2.3 go to
[ < ]        Back          previous section in reading order               1.2.2
[ > ]        Forward       next section in reading order                   1.2.4
[ << ]       FastBack      beginning of this chapter or previous chapter   1
[ Up ]       Up            up section                                      1.2
[ >> ]       FastForward   next chapter                                    2
[Top]        Top           cover (top) of document
[Contents]   Contents      table of contents
[Index]      Index         concept index
[ ? ]        About         this page

where the Example assumes that the current position is at Subsubsection One-Two-Three of a document of the following structure:

  • 1. Section One
    • 1.1 Subsection One-One
      • ...
    • 1.2 Subsection One-Two
      • 1.2.1 Subsubsection One-Two-One
      • 1.2.2 Subsubsection One-Two-Two
      • 1.2.3 Subsubsection One-Two-Three     <== Current Position
      • 1.2.4 Subsubsection One-Two-Four
    • 1.3 Subsection One-Three
      • ...
    • 1.4 Subsection One-Four


This document was generated by Frank B. Brokken on January, 28 2005 using texi2html bisonc++-4.13.01/documentation/html/bison_7.html0000644000175000017500000007743212633316117020332 0ustar frankfrank Bison 2.21.5: Interface

4. Parser C-Language Interface

The Bison parser is actually a C function named yyparse. Here we describe the interface conventions of yyparse and the other functions that it needs to use.

Keep in mind that the parser uses many C identifiers starting with `yy' and `YY' for internal purposes. If you use such an identifier (aside from those in this manual) in an action or in additional C code in the grammar file, you are likely to run into trouble.

4.1 The Parser Function yyparse  How to call yyparse and what it returns.
4.2 The Lexical Analyzer Function yylex  You must supply a function yylex which reads tokens.
4.3 The Error Reporting Function yyerror  You must supply a function yyerror.
4.4 Special Features for Use in Actions  Special features for use in actions.



4.1 The Parser Function yyparse

You call the function yyparse to cause parsing to occur. This function reads tokens, executes actions, and ultimately returns when it encounters end-of-input or an unrecoverable syntax error. You can also write an action which directs yyparse to return immediately without reading further.

The value returned by yyparse is 0 if parsing was successful (return is due to end-of-input).

The value is 1 if parsing failed (return is due to a syntax error).

In an action, you can cause immediate return from yyparse by using these macros:

YYACCEPT
Return immediately with value 0 (to report success).

YYABORT
Return immediately with value 1 (to report failure).
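
For example (a sketch; QUIT and GARBAGE are assumed token names, not part of this manual's examples), an action can end parsing early:

 
line:     expr '\n'       { printf ("%d\n", $1); }
        | QUIT '\n'       { YYACCEPT; }   /* yyparse returns 0 */
        | GARBAGE '\n'    { YYABORT; }    /* yyparse returns 1 */
        ;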



4.2 The Lexical Analyzer Function yylex

The lexical analyzer function, yylex, recognizes tokens from the input stream and returns them to the parser. Bison does not create this function automatically; you must write it so that yyparse can call it. The function is sometimes referred to as a lexical scanner.

In simple programs, yylex is often defined at the end of the Bison grammar file. If yylex is defined in a separate source file, you need to arrange for the token-type macro definitions to be available there. To do this, use the `-d' option when you run Bison, so that it will write these macro definitions into a separate header file `name.tab.h' which you can include in the other source files that need it. See section Invoking Bison.

4.2.1 Calling Convention for yylex  How yyparse calls yylex.
4.2.2 Semantic Values of Tokens  How yylex must return the semantic value of the token it has read.
4.2.3 Textual Positions of Tokens  How yylex must return the text position (line number, etc.) of the token, if the actions want that.
4.2.4 Calling Conventions for Pure Parsers  How the calling convention differs in a pure parser (see section A Pure (Reentrant) Parser).



4.2.1 Calling Convention for yylex

The value that yylex returns must be the numeric code for the type of token it has just found, or 0 for end-of-input.

When a token is referred to in the grammar rules by a name, that name in the parser file becomes a C macro whose definition is the proper numeric code for that token type. So yylex can use the name to indicate that type. See section 3.2 Symbols, Terminal and Nonterminal.

When a token is referred to in the grammar rules by a character literal, the numeric code for that character is also the code for the token type. So yylex can simply return that character code. The null character must not be used this way, because its code is zero and that is what signifies end-of-input.

Here is an example showing these things:

 
yylex ()
{
  ...
  if (c == EOF)     /* Detect end of file. */
    return 0;
  ...
  if (c == '+' || c == '-')
    return c;      /* Assume token type for `+' is '+'. */
  ...
  return INT;      /* Return the type of the token. */
  ...
}

This interface has been designed so that the output from the lex utility can be used without change as the definition of yylex.

If the grammar uses literal string tokens, there are two ways that yylex can determine the token type codes for them:

  • If the grammar defines symbolic token names as aliases for the literal string tokens, yylex can use these symbolic names like all others. In this case, the use of the literal string tokens in the grammar file has no effect on yylex.

  • yylex can find the multi-character token in the yytname table. The index of the token in the table is the token type's code. The name of a multi-character token is recorded in yytname with a double-quote, the token's characters, and another double-quote. The token's characters are not escaped in any way; they appear verbatim in the contents of the string in the table.

    Here's code for looking up a token in yytname, assuming that the characters of the token are stored in token_buffer.

     
    for (i = 0; i < YYNTOKENS; i++)
      {
        if (yytname[i] != 0
            && yytname[i][0] == '"'
            && ! strncmp (yytname[i] + 1, token_buffer,
                          strlen (token_buffer))
            && yytname[i][strlen (token_buffer) + 1] == '"'
            && yytname[i][strlen (token_buffer) + 2] == 0)
          break;
      }
    

    The yytname table is generated only if you use the %token_table declaration. See section 3.6.8 Bison Declaration Summary.



4.2.2 Semantic Values of Tokens

In an ordinary (nonreentrant) parser, the semantic value of the token must be stored into the global variable yylval. When you are using just one data type for semantic values, yylval has that type. Thus, if the type is int (the default), you might write this in yylex:

 
  ...
  yylval = value;  /* Put value onto Bison stack. */
  return INT;      /* Return the type of the token. */
  ...

When you are using multiple data types, yylval's type is a union made from the %union declaration (see section The Collection of Value Types). So when you store a token's value, you must use the proper member of the union. If the %union declaration looks like this:

 
%union {
  int intval;
  double val;
  symrec *tptr;
}

then the code in yylex might look like this:

 
  ...
  yylval.intval = value; /* Put value onto Bison stack. */
  return INT;          /* Return the type of the token. */
  ...



4.2.3 Textual Positions of Tokens

If you are using the `@n'-feature (see section Special Features for Use in Actions) in actions to keep track of the textual locations of tokens and groupings, then you must provide this information in yylex. The function yyparse expects to find the textual location of a token just parsed in the global variable yylloc. So yylex must store the proper data in that variable. The value of yylloc is a structure and you need only initialize the members that are going to be used by the actions. The four members are called first_line, first_column, last_line and last_column. Note that the use of this feature makes the parser noticeably slower.

The data type of yylloc has the name YYLTYPE.
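
A sketch of the assignments yylex might make (the line and column counters are assumed to be maintained by the scanner itself; their names are illustrative, not part of Bison):

 
  /* in yylex, once a token has been scanned: */
  yylloc.first_line   = token_start_line;
  yylloc.first_column = token_start_column;
  yylloc.last_line    = current_line;
  yylloc.last_column  = current_column;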



4.2.4 Calling Conventions for Pure Parsers

When you use the Bison declaration %pure_parser to request a pure, reentrant parser, the global communication variables yylval and yylloc cannot be used. (See section A Pure (Reentrant) Parser.) In such parsers the two global variables are replaced by pointers passed as arguments to yylex. You must declare them as shown here, and pass the information back by storing it through those pointers.

 
yylex (lvalp, llocp)
     YYSTYPE *lvalp;
     YYLTYPE *llocp;
{
  ...
  *lvalp = value;  /* Put value onto Bison stack.  */
  return INT;      /* Return the type of the token.  */
  ...
}

If the grammar file does not use the `@' constructs to refer to textual positions, then the type YYLTYPE will not be defined. In this case, omit the second argument; yylex will be called with only one argument.

If you use a reentrant parser, you can optionally pass additional parameter information to it in a reentrant way. To do so, define the macro YYPARSE_PARAM as a variable name. This modifies the yyparse function to accept one argument, of type void *, with that name.

When you call yyparse, pass the address of an object, casting the address to void *. The grammar actions can refer to the contents of the object by casting the pointer value back to its proper type and then dereferencing it. Here's an example. Write this in the parser:

 
%{
struct parser_control
{
  int nastiness;
  int randomness;
};

#define YYPARSE_PARAM parm
%}

Then call the parser like this:

 
struct parser_control
{
  int nastiness;
  int randomness;
};

...

{
  struct parser_control foo;
  ...  /* Store proper data in foo.  */
  value = yyparse ((void *) &foo);
  ...
}

In the grammar actions, use expressions like this to refer to the data:

 
((struct parser_control *) parm)->randomness

If you wish to pass the additional parameter data to yylex, define the macro YYLEX_PARAM just like YYPARSE_PARAM, as shown here:

 
%{
struct parser_control
{
  int nastiness;
  int randomness;
};

#define YYPARSE_PARAM parm
#define YYLEX_PARAM parm
%}

You should then define yylex to accept one additional argument--the value of parm. (This makes either two or three arguments in total, depending on whether an argument of type YYLTYPE is passed.) You can declare the argument as a pointer to the proper object type, or you can declare it as void * and access the contents as shown above.

You can use `%pure_parser' to request a reentrant parser without also using YYPARSE_PARAM. Then you should call yyparse with no arguments, as usual.



4.3 The Error Reporting Function yyerror

The Bison parser detects a parse error or syntax error whenever it reads a token which cannot satisfy any syntax rule. An action in the grammar can also explicitly proclaim an error, using the macro YYERROR (see section Special Features for Use in Actions).

The Bison parser expects to report the error by calling an error reporting function named yyerror, which you must supply. It is called by yyparse whenever a syntax error is found, and it receives one argument. For a parse error, the string is normally "parse error".

If you define the macro YYERROR_VERBOSE in the Bison declarations section (see section The Bison Declarations Section), then Bison provides a more verbose and specific error message string instead of just plain "parse error". It doesn't matter what definition you use for YYERROR_VERBOSE, just whether you define it.
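
For example, a minimal sketch of such a definition placed in a `%{ ... %}' block near the top of the grammar file:

 
%{
#define YYERROR_VERBOSE 1   /* the value is irrelevant; only the definition matters */
%}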

The parser can detect one other kind of error: stack overflow. This happens when the input contains constructions that are very deeply nested. It isn't likely you will encounter this, since the Bison parser extends its stack automatically up to a very large limit. But if overflow happens, yyparse calls yyerror in the usual fashion, except that the argument string is "parser stack overflow".

The following definition suffices in simple programs:

 
yyerror (s)
     char *s;
{
  fprintf (stderr, "%s\n", s);
}

After yyerror returns to yyparse, the latter will attempt error recovery if you have written suitable error recovery grammar rules (see section 6. Error Recovery). If recovery is impossible, yyparse will immediately return 1.

The variable yynerrs contains the number of syntax errors encountered so far. Normally this variable is global; but if you request a pure parser (see section A Pure (Reentrant) Parser) then it is a local variable which only the actions can access.
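
A minimal sketch of a controlling function for an ordinary (nonreentrant) parser that reports the error count once parsing has finished:

 
#include <stdio.h>

extern int yynerrs;          /* maintained by the Bison parser      */
int yyparse (void);

int
main (void)
{
  int status = yyparse ();   /* 0 on success, 1 on failure          */
  if (yynerrs > 0)
    fprintf (stderr, "%d syntax error(s) found\n", yynerrs);
  return status;
}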



4.4 Special Features for Use in Actions

Here is a table of Bison constructs, variables and macros that are useful in actions.

`$$'
Acts like a variable that contains the semantic value for the grouping made by the current rule. See section 3.5.3 Actions.

`$n'
Acts like a variable that contains the semantic value for the nth component of the current rule. See section 3.5.3 Actions.

`$<typealt>$'
Like $$ but specifies alternative typealt in the union specified by the %union declaration. See section Data Types of Values in Actions.

`$<typealt>n'
Like $n but specifies alternative typealt in the union specified by the %union declaration. See section Data Types of Values in Actions.

`YYABORT;'
Return immediately from yyparse, indicating failure. See section The Parser Function yyparse.

`YYACCEPT;'
Return immediately from yyparse, indicating success. See section The Parser Function yyparse.

`YYBACKUP (token, value);'
Unshift a token. This macro is allowed only for rules that reduce a single value, and only when there is no look-ahead token. It installs a look-ahead token with token type token and semantic value value; then it discards the value that was going to be reduced by this rule.

If the macro is used when it is not valid, such as when there is a look-ahead token already, then it reports a syntax error with a message `cannot back up' and performs ordinary error recovery.

In either case, the rest of the action is not executed.

`YYEMPTY'
Value stored in yychar when there is no look-ahead token.

`YYERROR;'
Cause an immediate syntax error. This statement initiates error recovery just as if the parser itself had detected an error; however, it does not call yyerror, and does not print any message. If you want to print an error message, call yyerror explicitly before the `YYERROR;' statement. See section 6. Error Recovery.

`YYRECOVERING'
This macro stands for an expression that has the value 1 when the parser is recovering from a syntax error, and 0 the rest of the time. See section 6. Error Recovery.

`yychar'
Variable containing the current look-ahead token. (In a pure parser, this is actually a local variable within yyparse.) When there is no look-ahead token, the value YYEMPTY is stored in the variable. See section Look-Ahead Tokens.

`yyclearin;'
Discard the current look-ahead token. This is useful primarily in error rules. See section 6. Error Recovery.

`yyerrok;'
Resume generating error messages immediately for subsequent syntax errors. This is useful primarily in error rules. See section 6. Error Recovery.

`@n'
Acts like a structure variable containing information on the line numbers and column numbers of the nth component of the current rule. The structure has four members, like this:

 
struct {
  int first_line, last_line;
  int first_column, last_column;
};

Thus, to get the starting line number of the third component, you would use `@3.first_line'.

In order for the members of this structure to contain valid information, you must make yylex supply this information about each token. If you need only certain members, then yylex need only fill in those members.

The use of this feature makes the parser noticeably slower.
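
For instance (a sketch built on the addition rule used elsewhere in this manual; it assumes <stdio.h> is included in the C declarations section), an action can combine semantic values with `@n' locations:

 
expr:     expr '+' expr
            { $$ = $1 + $3;
              if (@1.first_line != @3.last_line)
                fprintf (stderr, "expression spans lines %d to %d\n",
                         @1.first_line, @3.last_line); }
        ;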



This document was generated by Frank B. Brokken on January, 28 2005 using texi2html bisonc++-4.13.01/documentation/html/bison_2.html0000644000175000017500000001047512633316117020317 0ustar frankfrank Bison 2.21.5: Conditions

Conditions for Using Bison

As of Bison version 1.24, we have changed the distribution terms for yyparse to permit using Bison's output in non-free programs. Formerly, Bison parsers could be used only in programs that were free software.

The other GNU programming tools, such as the GNU C compiler, have never had such a requirement. They could always be used for non-free software. The reason Bison was different was not due to a special policy decision; it resulted from applying the usual General Public License to all of the Bison source code.

The output of the Bison utility--the Bison parser file--contains a verbatim copy of a sizable piece of Bison, which is the code for the yyparse function. (The actions from your grammar are inserted into this function at one point, but the rest of the function is not changed.) When we applied the GPL terms to the code for yyparse, the effect was to restrict the use of Bison output to free software.

We didn't change the terms because of sympathy for people who want to make software proprietary. Software should be free. But we concluded that limiting Bison's use to free software was doing little to encourage people to make other software free. So we decided to make the practical conditions for using Bison match the practical conditions for using the other GNU tools.



This document was generated by Frank B. Brokken on January, 28 2005 using texi2html bisonc++-4.13.01/documentation/html/bison_11.html0000644000175000017500000001515412633316117020376 0ustar frankfrank Bison 2.21.5: Debugging

8. Debugging Your Parser

If a Bison grammar compiles properly but doesn't do what you want when it runs, the yydebug parser-trace feature can help you figure out why.

To enable compilation of trace facilities, you must define the macro YYDEBUG when you compile the parser. You could use `-DYYDEBUG=1' as a compiler option or you could put `#define YYDEBUG 1' in the C declarations section of the grammar file (see section The C Declarations Section). Alternatively, use the `-t' option when you run Bison (see section Invoking Bison). We always define YYDEBUG so that debugging is always possible.

The trace facility uses stderr, so you must add #include <stdio.h> to the C declarations section unless it is already there.

Once you have compiled the program with trace facilities, the way to request a trace is to store a nonzero value in the variable yydebug. You can do this by making the C code do it (in main, perhaps), or you can alter the value with a C debugger.
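
A minimal sketch of turning on tracing from main (assuming an ordinary, nonreentrant parser):

 
extern int yydebug;          /* defined by the Bison parser          */
int yyparse (void);

int
main (void)
{
  yydebug = 1;               /* write trace information to stderr    */
  return yyparse ();
}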

Each step taken by the parser when yydebug is nonzero produces a line or two of trace information, written on stderr. The trace messages tell you these things:

  • Each time the parser calls yylex, what kind of token was read.

  • Each time a token is shifted, the depth and complete contents of the state stack (see section 5.5 Parser States).

  • Each time a rule is reduced, which rule it is, and the complete contents of the state stack afterward.

To make sense of this information, it helps to refer to the listing file produced by the Bison `-v' option (see section Invoking Bison). This file shows the meaning of each state in terms of positions in various rules, and also what each state will do with each possible input token. As you read the successive trace messages, you can see that the parser is functioning according to its specification in the listing file. Eventually you will arrive at the place where something undesirable happens, and you will see which parts of the grammar are to blame.

The parser file is a C program and you can use C debuggers on it, but it's not easy to interpret what it is doing. The parser function is a finite-state machine interpreter, and aside from the actions it executes the same code over and over. Only the values of variables show where in the grammar it is working.

The debugging information normally gives the token type of each token read, but not its semantic value. You can optionally define a macro named YYPRINT to provide a way to print the value. If you define YYPRINT, it should take three arguments. The parser will pass a standard I/O stream, the numeric code for the token type, and the token value (from yylval).

Here is an example of YYPRINT suitable for the multi-function calculator (see section Declarations for mfcalc):

 
#define YYPRINT(file, type, value)   yyprint (file, type, value)

static void
yyprint (file, type, value)
     FILE *file;
     int type;
     YYSTYPE value;
{
  if (type == VAR)
    fprintf (file, " %s", value.tptr->name);
  else if (type == NUM)
    fprintf (file, " %d", value.val);
}



This document was generated by Frank B. Brokken on January, 28 2005 using texi2html bisonc++-4.13.01/documentation/html/bison.html0000644000175000017500000006012612633316117020074 0ustar frankfrank Bison 2.21.5: Bison 2.21.5

Bison 2.21.5

This manual documents version 2.21.5 of Bison.

Introduction  
Conditions for Using Bison  
GNU GENERAL PUBLIC LICENSE  The GNU General Public License says how you can copy and share Bison
Tutorial sections:
1. The Concepts of Bison  Basic concepts for understanding Bison.
2. Examples  Three simple explained examples of using Bison.
Reference sections:
3. Bison Grammar Files  Writing Bison declarations and rules.
4. Parser C-Language Interface  C-language interface to the parser function yyparse.
5. The Bison Parser Algorithm  How the Bison parser works at run-time.
6. Error Recovery  Writing rules for error recovery.
7. Handling Context Dependencies  What to do if your language syntax is too messy for Bison to handle straightforwardly.
8. Debugging Your Parser  Debugging Bison parsers that parse wrong.
9. Invoking Bison  How to run Bison (to produce the parser source file).
A. Bison Symbols  All the keywords of the Bison language are explained.
B. Glossary  Basic concepts are explained.
Index  Cross-references to the text.
-- The Detailed Node Listing ---
The Concepts of Bison
1.1 Languages and Context-Free Grammars  Languages and context-free grammars, as mathematical ideas.
1.2 From Formal Rules to Bison Input  How we represent grammars for Bison's sake.
1.3 Semantic Values  Each token or syntactic grouping can have a semantic value (the value of an integer, the name of an identifier, etc.).
1.4 Semantic Actions  Each rule can have an action containing C code.
1.5 Bison Output: the Parser File  What are Bison's input and output, how is the output used?
1.6 Stages in Using Bison  Stages in writing and running Bison grammars.
1.7 The Overall Layout of a Bison Grammar  Overall structure of a Bison grammar file.
Examples
2.1 Reverse Polish Notation Calculator  Reverse polish notation calculator; a first example with no operator precedence.
2.2 Infix Notation Calculator: calc  Infix (algebraic) notation calculator. Operator precedence is introduced.
2.3 Simple Error Recovery  Continuing after syntax errors.
2.4 Multi-Function Calculator: mfcalc  Calculator with memory and trig functions. It uses multiple data-types for semantic values.
2.5 Exercises  Ideas for improving the multi-function calculator.
Reverse Polish Notation Calculator
2.1.1 Declarations for rpcalc  Bison and C declarations for rpcalc.
2.1.2 Grammar Rules for rpcalc  Grammar Rules for rpcalc, with explanation.
2.1.3 The rpcalc Lexical Analyzer  The lexical analyzer.
2.1.4 The Controlling Function  The controlling function.
2.1.5 The Error Reporting Routine  The error reporting function.
2.1.6 Running Bison to Make the Parser  Running Bison on the grammar file.
2.1.7 Compiling the Parser File  Run the C compiler on the output code.
Grammar Rules for rpcalc
2.1.2.1 Explanation of input  
2.1.2.2 Explanation of line  
2.1.2.3 Explanation of expr  
Multi-Function Calculator: mfcalc
2.4.1 Declarations for mfcalc  Bison declarations for multi-function calculator.
2.4.2 Grammar Rules for mfcalc  Grammar rules for the calculator.
2.4.3 The mfcalc Symbol Table  Symbol table management subroutines.
Bison Grammar Files
3.1 Outline of a Bison Grammar  Overall layout of the grammar file.
3.2 Symbols, Terminal and Nonterminal  Terminal and nonterminal symbols.
3.3 Syntax of Grammar Rules  How to write grammar rules.
3.4 Recursive Rules  Writing recursive rules.
3.5 Defining Language Semantics  Semantic values and actions.
3.6 Bison Declarations  All kinds of Bison declarations are described here.
3.7 Multiple Parsers in the Same Program  Putting more than one Bison parser in one program.
Outline of a Bison Grammar
3.1.1 The C Declarations Section  Syntax and usage of the C declarations section.
3.1.2 The Bison Declarations Section  Syntax and usage of the Bison declarations section.
3.1.3 The Grammar Rules Section  Syntax and usage of the grammar rules section.
3.1.4 The Additional C Code Section  Syntax and usage of the additional C code section.
Defining Language Semantics
3.5.1 Data Types of Semantic Values  Specifying one data type for all semantic values.
3.5.2 More Than One Value Type  Specifying several alternative data types.
3.5.3 Actions  An action is the semantic definition of a grammar rule.
3.5.4 Data Types of Values in Actions  Specifying data types for actions to operate on.
3.5.5 Actions in Mid-Rule  Most actions go at the end of a rule. This says when, why and how to use the exceptional action in the middle of a rule.
Bison Declarations
3.6.1 Token Type Names  Declaring terminal symbols.
3.6.2 Operator Precedence  Declaring terminals with precedence and associativity.
3.6.3 The Collection of Value Types  Declaring the set of all semantic value types.
3.6.4 Nonterminal Symbols  Declaring the choice of type for a nonterminal symbol.
3.6.5 Suppressing Conflict Warnings  Suppressing warnings about shift/reduce conflicts.
3.6.6 The Start-Symbol  Specifying the start symbol.
3.6.7 A Pure (Reentrant) Parser  Requesting a reentrant parser.
3.6.8 Bison Declaration Summary  Table of all Bison declarations.
Parser C-Language Interface
4.1 The Parser Function yyparse  How to call yyparse and what it returns.
4.2 The Lexical Analyzer Function yylex  You must supply a function yylex which reads tokens.
4.3 The Error Reporting Function yyerror  You must supply a function yyerror.
4.4 Special Features for Use in Actions  Special features for use in actions.
The Lexical Analyzer Function yylex
4.2.1 Calling Convention for yylex  How yyparse calls yylex.
4.2.2 Semantic Values of Tokens  How yylex must return the semantic value of the token it has read.
4.2.3 Textual Positions of Tokens  How yylex must return the text position (line number, etc.) of the token, if the actions want that.
4.2.4 Calling Conventions for Pure Parsers  How the calling convention differs in a pure parser (see section A Pure (Reentrant) Parser).
The Bison Parser Algorithm
5.1 Look-Ahead Tokens  Parser looks one token ahead when deciding what to do.
5.2 Shift/Reduce Conflicts  Conflicts: when either shifting or reduction is valid.
5.3 Operator Precedence  Operator precedence works by resolving conflicts.
5.4 Context-Dependent Precedence  When an operator's precedence depends on context.
5.5 Parser States  The parser is a finite-state-machine with stack.
5.6 Reduce/Reduce Conflicts  When two rules are applicable in the same situation.
5.7 Mysterious Reduce/Reduce Conflicts  Reduce/reduce conflicts that look unjustified.
5.8 Stack Overflow, and How to Avoid It  What happens when stack gets full. How to avoid it.
Operator Precedence
5.3.1 When Precedence is Needed  An example showing why precedence is needed.
5.3.2 Specifying Operator Precedence  How to specify precedence in Bison grammars.
5.3.3 Precedence Examples  How these features are used in the previous example.
5.3.4 How Precedence Works  How they work.
Handling Context Dependencies
7.1 Semantic Info in Token Types  Token parsing can depend on the semantic context.
7.2 Lexical Tie-ins  Token parsing can depend on the syntactic context.
7.3 Lexical Tie-ins and Error Recovery  Lexical tie-ins have implications for how error recovery rules must be written.
Invoking Bison
9.1 Bison Options  All the options described in detail, in alphabetical order by short options.
9.2 Option Cross Key  Alphabetical list of long options.
9.3 Invoking Bison under VMS  Bison command syntax on VMS.



This document was generated by Frank B. Brokken on January, 28 2005 using texi2html bisonc++-4.13.01/documentation/html/bison_12.html0000644000175000017500000003440012633316117020372 0ustar frankfrank Bison 2.21.5: Invocation

9. Invoking Bison

The usual way to invoke Bison is as follows:

 
bison infile

Here infile is the grammar file name, which usually ends in `.y'. The parser file's name is made by replacing the `.y' with `.tab.c'. Thus, the input file `foo.y' yields the parser file `foo.tab.c', and the input file `hack/foo.y' yields `hack/foo.tab.c'.

9.1 Bison Options  All the options described in detail, in alphabetical order by short options.
9.2 Option Cross Key  Alphabetical list of long options.
9.3 Invoking Bison under VMS  Bison command syntax on VMS.



9.1 Bison Options

Bison supports both traditional single-letter options and mnemonic long option names. Long option names are indicated with `--' instead of `-'. Abbreviations for option names are allowed as long as they are unique. When a long option takes an argument, like `--file-prefix', connect the option name and the argument with `='.

Here is a list of options that can be used with Bison, alphabetized by short option. It is followed by a cross key alphabetized by long option.

`-b file-prefix'
`--file-prefix=prefix'
Specify a prefix to use for all Bison output file names. The names are chosen as if the input file were named `prefix.c'.

`-d'
`--defines'
Write an extra output file containing macro definitions for the token type names defined in the grammar and the semantic value type YYSTYPE, as well as a few extern variable declarations.

If the parser output file is named `name.c' then this file is named `name.h'.

This output file is essential if you wish to put the definition of yylex in a separate source file, because yylex needs to be able to refer to token type codes and the variable yylval. See section Semantic Values of Tokens.

`-l'
`--no-lines'
Don't put any #line preprocessor commands in the parser file. Ordinarily Bison puts them in the parser file so that the C compiler and debuggers will associate errors with your source file, the grammar file. This option causes them to associate errors with the parser file, treating it as an independent source file in its own right.

`-n'
`--no-parser'
Do not include any C code in the parser file; generate tables only. The parser file contains just #define directives and static variable declarations.

This option also tells Bison to write the C code for the grammar actions into a file named `filename.act', in the form of a brace-surrounded body fit for a switch statement.

`-o outfile'
`--output-file=outfile'
Specify the name outfile for the parser file.

The other output files' names are constructed from outfile as described under the `-v' and `-d' options.

`-p prefix'
`--name-prefix=prefix'
Rename the external symbols used in the parser so that they start with prefix instead of `yy'. The precise list of symbols renamed is yyparse, yylex, yyerror, yynerrs, yylval, yychar and yydebug.

For example, if you use `-p c', the names become cparse, clex, and so on.

See section Multiple Parsers in the Same Program.

`-r'
`--raw'
Pretend that %raw was specified. See section 3.6.8 Bison Declaration Summary.

`-t'
`--debug'
Output a definition of the macro YYDEBUG into the parser file, so that the debugging facilities are compiled. See section Debugging Your Parser.

`-v'
`--verbose'
Write an extra output file containing verbose descriptions of the parser states and what is done for each type of look-ahead token in that state.

This file also describes all the conflicts, both those resolved by operator precedence and the unresolved ones.

The file's name is made by removing `.tab.c' or `.c' from the parser output file name, and adding `.output' instead.

Therefore, if the input file is `foo.y', then the parser file is called `foo.tab.c' by default. As a consequence, the verbose output file is called `foo.output'.

`-V'
`--version'
Print the version number of Bison and exit.

`-h'
`--help'
Print a summary of the command-line options to Bison and exit.

`-y'
`--yacc'
`--fixed-output-files'
Equivalent to `-o y.tab.c'; the parser output file is called `y.tab.c', and the other outputs are called `y.output' and `y.tab.h'. The purpose of this option is to imitate Yacc's output file name conventions. Thus, the following shell script can substitute for Yacc:

 
bison -y $*



9.2 Option Cross Key

Here is a list of options, alphabetized by long option, to help you find the corresponding short option.

 
--debug                               -t
--defines                             -d
--file-prefix=prefix                  -b file-prefix
--fixed-output-files --yacc           -y
--help                                -h
--name-prefix=prefix                  -p name-prefix
--no-lines                            -l
--no-parser                           -n
--output-file=outfile                 -o outfile
--raw                                 -r			
--token-table                         -k
--verbose                             -v
--version                             -V



9.3 Invoking Bison under VMS

The command line syntax for Bison on VMS is a variant of the usual Bison command syntax--adapted to fit VMS conventions.

To find the VMS equivalent for any Bison option, start with the long option, and substitute a `/' for the leading `--', and substitute a `_' for each `-' in the name of the long option. For example, the following invocation under VMS:

 
bison /debug/name_prefix=bar foo.y

is equivalent to the following command under POSIX.

 
bison --debug --name-prefix=bar foo.y

The VMS file system does not permit filenames such as `foo.tab.c'. In the above example, the output file would instead be named `foo_tab.c'.



This document was generated by Frank B. Brokken on January, 28 2005 using texi2html bisonc++-4.13.01/documentation/html/bison_4.html0000644000175000017500000007000212633316117020311 0ustar frankfrank Bison 2.21.5: Concepts

1. The Concepts of Bison

This chapter introduces many of the basic concepts without which the details of Bison will not make sense. If you do not already know how to use Bison or Yacc, we suggest you start by reading this chapter carefully.

1.1 Languages and Context-Free Grammars  Languages and context-free grammars, as mathematical ideas.
1.2 From Formal Rules to Bison Input  How we represent grammars for Bison's sake.
1.3 Semantic Values  Each token or syntactic grouping can have a semantic value (the value of an integer, the name of an identifier, etc.).
1.4 Semantic Actions  Each rule can have an action containing C code.
1.5 Bison Output: the Parser File  What are Bison's input and output, how is the output used?
1.6 Stages in Using Bison  Stages in writing and running Bison grammars.
1.7 The Overall Layout of a Bison Grammar  Overall structure of a Bison grammar file.



1.1 Languages and Context-Free Grammars

In order for Bison to parse a language, it must be described by a context-free grammar. This means that you specify one or more syntactic groupings and give rules for constructing them from their parts. For example, in the C language, one kind of grouping is called an `expression'. One rule for making an expression might be, "An expression can be made of a minus sign and another expression". Another would be, "An expression can be an integer". As you can see, rules are often recursive, but there must be at least one rule which leads out of the recursion.

The most common formal system for presenting such rules for humans to read is Backus-Naur Form or "BNF", which was developed in order to specify the language Algol 60. Any grammar expressed in BNF is a context-free grammar. The input to Bison is essentially machine-readable BNF.

Not all context-free languages can be handled by Bison, only those that are LALR(1). In brief, this means that it must be possible to tell how to parse any portion of an input string with just a single token of look-ahead. Strictly speaking, that is a description of an LR(1) grammar, and LALR(1) involves additional restrictions that are hard to explain simply; but it is rare in actual practice to find an LR(1) grammar that fails to be LALR(1). See section Mysterious Reduce/Reduce Conflicts, for more information on this.

In the formal grammatical rules for a language, each kind of syntactic unit or grouping is named by a symbol. Those which are built by grouping smaller constructs according to grammatical rules are called nonterminal symbols; those which can't be subdivided are called terminal symbols or token types. We call a piece of input corresponding to a single terminal symbol a token, and a piece corresponding to a single nonterminal symbol a grouping.

We can use the C language as an example of what symbols, terminal and nonterminal, mean. The tokens of C are identifiers, constants (numeric and string), and the various keywords, arithmetic operators and punctuation marks. So the terminal symbols of a grammar for C include `identifier', `number', `string', plus one symbol for each keyword, operator or punctuation mark: `if', `return', `const', `static', `int', `char', `plus-sign', `open-brace', `close-brace', `comma' and many more. (These tokens can be subdivided into characters, but that is a matter of lexicography, not grammar.)

Here is a simple C function subdivided into tokens:

 
int             /* keyword `int' */
square (x)      /* identifier, open-paren, */
                /* identifier, close-paren */
     int x;     /* keyword `int', identifier, semicolon */
{               /* open-brace */
  return x * x; /* keyword `return', identifier, */
                /* asterisk, identifier, semicolon */
}               /* close-brace */

The syntactic groupings of C include the expression, the statement, the declaration, and the function definition. These are represented in the grammar of C by nonterminal symbols `expression', `statement', `declaration' and `function definition'. The full grammar uses dozens of additional language constructs, each with its own nonterminal symbol, in order to express the meanings of these four. The example above is a function definition; it contains one declaration, and one statement. In the statement, each `x' is an expression and so is `x * x'.

Each nonterminal symbol must have grammatical rules showing how it is made out of simpler constructs. For example, one kind of C statement is the return statement; this would be described with a grammar rule which reads informally as follows:

A `statement' can be made of a `return' keyword, an `expression' and a `semicolon'.

There would be many other rules for `statement', one for each kind of statement in C.

One nonterminal symbol must be distinguished as the special one which defines a complete utterance in the language. It is called the start symbol. In a compiler, this means a complete input program. In the C language, the nonterminal symbol `sequence of definitions and declarations' plays this role.

For example, `1 + 2' is a valid C expression--a valid part of a C program--but it is not valid as an entire C program. In the context-free grammar of C, this follows from the fact that `expression' is not the start symbol.

The Bison parser reads a sequence of tokens as its input, and groups the tokens using the grammar rules. If the input is valid, the end result is that the entire token sequence reduces to a single grouping whose symbol is the grammar's start symbol. If we use a grammar for C, the entire input must be a `sequence of definitions and declarations'. If not, the parser reports a syntax error.



1.2 From Formal Rules to Bison Input

A formal grammar is a mathematical construct. To define the language for Bison, you must write a file expressing the grammar in Bison syntax: a Bison grammar file. See section Bison Grammar Files.

A nonterminal symbol in the formal grammar is represented in Bison input as an identifier, like an identifier in C. By convention, it should be in lower case, such as expr, stmt or declaration.

The Bison representation for a terminal symbol is also called a token type. Token types as well can be represented as C-like identifiers. By convention, these identifiers should be upper case to distinguish them from nonterminals: for example, INTEGER, IDENTIFIER, IF or RETURN. A terminal symbol that stands for a particular keyword in the language should be named after that keyword converted to upper case. The terminal symbol error is reserved for error recovery. See section 3.2 Symbols, Terminal and Nonterminal.

A terminal symbol can also be represented as a character literal, just like a C character constant. You should do this whenever a token is just a single character (parenthesis, plus-sign, etc.): use that same character in a literal as the terminal symbol for that token.

A third way to represent a terminal symbol is with a C string constant containing several characters. See section 3.2 Symbols, Terminal and Nonterminal, for more information.

The grammar rules also have an expression in Bison syntax. For example, here is the Bison rule for a C return statement. The semicolon in quotes is a literal character token, representing part of the C syntax for the statement; the naked semicolon, and the colon, are Bison punctuation used in every rule.

 
stmt:   RETURN expr ';'
        ;

See section Syntax of Grammar Rules.



1.3 Semantic Values

A formal grammar selects tokens only by their classifications: for example, if a rule mentions the terminal symbol `integer constant', it means that any integer constant is grammatically valid in that position. The precise value of the constant is irrelevant to how to parse the input: if `x+4' is grammatical then `x+1' or `x+3989' is equally grammatical.

But the precise value is very important for what the input means once it is parsed. A compiler is useless if it fails to distinguish between 4, 1 and 3989 as constants in the program! Therefore, each token in a Bison grammar has both a token type and a semantic value. See section Defining Language Semantics, for details.

The token type is a terminal symbol defined in the grammar, such as INTEGER, IDENTIFIER or ','. It tells everything you need to know to decide where the token may validly appear and how to group it with other tokens. The grammar rules know nothing about tokens except their types.

The semantic value has all the rest of the information about the meaning of the token, such as the value of an integer, or the name of an identifier. (A token such as ',' which is just punctuation doesn't need to have any semantic value.)

For example, an input token might be classified as token type INTEGER and have the semantic value 4. Another input token might have the same token type INTEGER but value 3989. When a grammar rule says that INTEGER is allowed, either of these tokens is acceptable because each is an INTEGER. When the parser accepts the token, it keeps track of the token's semantic value.

Each grouping can also have a semantic value as well as its nonterminal symbol. For example, in a calculator, an expression typically has a semantic value that is a number. In a compiler for a programming language, an expression typically has a semantic value that is a tree structure describing the meaning of the expression.



1.4 Semantic Actions

In order to be useful, a program must do more than parse input; it must also produce some output based on the input. In a Bison grammar, a grammar rule can have an action made up of C statements. Each time the parser recognizes a match for that rule, the action is executed. See section 3.5.3 Actions.

Most of the time, the purpose of an action is to compute the semantic value of the whole construct from the semantic values of its parts. For example, suppose we have a rule which says an expression can be the sum of two expressions. When the parser recognizes such a sum, each of the subexpressions has a semantic value which describes how it was built up. The action for this rule should create a similar sort of value for the newly recognized larger expression.

For example, here is a rule that says an expression can be the sum of two subexpressions:

 
expr: expr '+' expr   { $$ = $1 + $3; }
        ;

The action says how to produce the semantic value of the sum expression from the values of the two subexpressions.



1.5 Bison Output: the Parser File

When you run Bison, you give it a Bison grammar file as input. The output is a C source file that parses the language described by the grammar. This file is called a Bison parser. Keep in mind that the Bison utility and the Bison parser are two distinct programs: the Bison utility is a program whose output is the Bison parser that becomes part of your program.

The job of the Bison parser is to group tokens into groupings according to the grammar rules--for example, to build identifiers and operators into expressions. As it does this, it runs the actions for the grammar rules it uses.

The tokens come from a function called the lexical analyzer that you must supply in some fashion (such as by writing it in C). The Bison parser calls the lexical analyzer each time it wants a new token. It doesn't know what is "inside" the tokens (though their semantic values may reflect this). Typically the lexical analyzer makes the tokens by parsing characters of text, but Bison does not depend on this. See section The Lexical Analyzer Function yylex.

The Bison parser file is C code which defines a function named yyparse which implements that grammar. This function does not make a complete C program: you must supply some additional functions. One is the lexical analyzer. Another is an error-reporting function which the parser calls to report an error. In addition, a complete C program must start with a function called main; you have to provide this, and arrange for it to call yyparse or the parser will never run. See section Parser C-Language Interface.
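
For example, a minimal error-reporting function and a main function that runs the parser might look like this (a sketch only; it assumes a Bison-generated parser is linked in, and real programs usually report more context than just the message):

 
#include <stdio.h>

int yyparse(void);                   /* generated by Bison                */

void yyerror(char const *msg)        /* called by the parser on an error  */
{
    fprintf(stderr, "%s\n", msg);
}

int main()
{
    return yyparse();                /* 0 if the input was valid          */
}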

Aside from the token type names and the symbols in the actions you write, all variable and function names used in the Bison parser file begin with `yy' or `YY'. This includes interface functions such as the lexical analyzer function yylex, the error reporting function yyerror and the parser function yyparse itself. This also includes numerous identifiers used for internal purposes. Therefore, you should avoid using C identifiers starting with `yy' or `YY' in the Bison grammar file except for the ones defined in this manual.



1.6 Stages in Using Bison

The actual language-design process using Bison, from grammar specification to a working compiler or interpreter, has these parts:

  1. Formally specify the grammar in a form recognized by Bison (see section Bison Grammar Files). For each grammatical rule in the language, describe the action that is to be taken when an instance of that rule is recognized. The action is described by a sequence of C statements.

  2. Write a lexical analyzer to process input and pass tokens to the parser. The lexical analyzer may be written by hand in C (see section The Lexical Analyzer Function yylex). It could also be produced using Lex, but the use of Lex is not discussed in this manual.

  3. Write a controlling function that calls the Bison-produced parser.

  4. Write error-reporting routines.

To turn this source code as written into a runnable program, you must follow these steps:

  1. Run Bison on the grammar to produce the parser.

  2. Compile the code output by Bison, as well as any other source files.

  3. Link the object files to produce the finished product.
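
As an illustration, if the grammar is in a file `calc.y' and the lexical analyzer and main function are in `lex.c' and `main.c' (hypothetical file names), the three steps might look like this:

 
bison -d calc.y                        # 1: produces calc.tab.c (and calc.tab.h)
cc -c calc.tab.c lex.c main.c          # 2: compile the parser and the other sources
cc -o calc calc.tab.o lex.o main.o     # 3: link the object files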



1.7 The Overall Layout of a Bison Grammar

The input file for the Bison utility is a Bison grammar file. The general form of a Bison grammar file is as follows:

 
%{
C declarations
%}

Bison declarations

%%
Grammar rules
%%
Additional C code

The `%%', `%{' and `%}' are punctuation that appears in every Bison grammar file to separate the sections.

The C declarations may define types and variables used in the actions. You can also use preprocessor commands to define macros used there, and use #include to include header files that do any of these things.

The Bison declarations declare the names of the terminal and nonterminal symbols, and may also describe operator precedence and the data types of semantic values of various symbols.

The grammar rules define how to construct each nonterminal symbol from its parts.

The additional C code can contain any C code you want to use. Often the definition of the lexical analyzer yylex goes here, plus subroutines called by the actions in the grammar rules. In a simple program, all the rest of the program can go here.
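
Putting the pieces together, a minimal but complete grammar file following this layout might look as follows (a sketch only: it merely recognizes sums of NUM tokens, and yylex is assumed to be defined elsewhere or added to the additional C code section):

 
%{
#include <stdio.h>                   /* C declarations                    */
int yylex(void);
void yyerror(char const *msg);
%}

%token NUM                           /* Bison declarations                */
%left '+'

%%
exp:      NUM                        /* grammar rules                     */
        | exp '+' exp   { $$ = $1 + $3; }
        ;
%%
                                     /* additional C code                 */
void yyerror(char const *msg)
{
    fprintf(stderr, "%s\n", msg);
}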



3. Bison Grammar Files

Bison takes as input a context-free grammar specification and produces a C-language function that recognizes correct instances of the grammar.

The Bison grammar input file conventionally has a name ending in `.y'.

3.1 Outline of a Bison Grammar  Overall layout of the grammar file.
3.2 Symbols, Terminal and Nonterminal  Terminal and nonterminal symbols.
3.3 Syntax of Grammar Rules  How to write grammar rules.
3.4 Recursive Rules  Writing recursive rules.
3.5 Defining Language Semantics  Semantic values and actions.
3.6 Bison Declarations  All kinds of Bison declarations are described here.
3.7 Multiple Parsers in the Same Program  Putting more than one Bison parser in one program.



3.1 Outline of a Bison Grammar

A Bison grammar file has four main sections, shown here with the appropriate delimiters:

 
%{
C declarations
%}

Bison declarations

%%
Grammar rules
%%

Additional C code

Comments enclosed in `/* ... */' may appear in any of the sections.

3.1.1 The C Declarations Section  Syntax and usage of the C declarations section.
3.1.2 The Bison Declarations Section  Syntax and usage of the Bison declarations section.
3.1.3 The Grammar Rules Section  Syntax and usage of the grammar rules section.
3.1.4 The Additional C Code Section  Syntax and usage of the additional C code section.



3.1.1 The C Declarations Section

The C declarations section contains macro definitions and declarations of functions and variables that are used in the actions in the grammar rules. These are copied to the beginning of the parser file so that they precede the definition of yyparse. You can use `#include' to get the declarations from a header file. If you don't need any C declarations, you may omit the `%{' and `%}' delimiters that bracket this section.



3.1.2 The Bison Declarations Section

The Bison declarations section contains declarations that define terminal and nonterminal symbols, specify precedence, and so on. In some simple grammars you may not need any declarations. See section Bison Declarations.



3.1.3 The Grammar Rules Section

The grammar rules section contains one or more Bison grammar rules, and nothing else. See section Syntax of Grammar Rules.

There must always be at least one grammar rule, and the first `%%' (which precedes the grammar rules) may never be omitted even if it is the first thing in the file.



3.1.4 The Additional C Code Section

The additional C code section is copied verbatim to the end of the parser file, just as the C declarations section is copied to the beginning. This is the most convenient place to put anything that you want to have in the parser file but which need not come before the definition of yyparse. For example, the definitions of yylex and yyerror often go here. See section Parser C-Language Interface.

If the last section is empty, you may omit the `%%' that separates it from the grammar rules.

The Bison parser itself contains many static variables whose names start with `yy' and many macros whose names start with `YY'. It is a good idea to avoid using any such names (except those documented in this manual) in the additional C code section of the grammar file.



3.2 Symbols, Terminal and Nonterminal

Symbols in Bison grammars represent the grammatical classifications of the language.

A terminal symbol (also known as a token type) represents a class of syntactically equivalent tokens. You use the symbol in grammar rules to mean that a token in that class is allowed. The symbol is represented in the Bison parser by a numeric code, and the yylex function returns a token type code to indicate what kind of token has been read. You don't need to know what the code value is; you can use the symbol to stand for it.

A nonterminal symbol stands for a class of syntactically equivalent groupings. The symbol name is used in writing grammar rules. By convention, it should be all lower case.

Symbol names can contain letters, digits (not at the beginning), underscores and periods. Periods make sense only in nonterminals.

There are three ways of writing terminal symbols in the grammar:

  • A named token type is written with an identifier, like an identifier in C. By convention, it should be all upper case. Each such name must be defined with a Bison declaration such as %token. See section Token Type Names.

  • A character token type (or literal character token) is written in the grammar using the same syntax used in C for character constants; for example, '+' is a character token type. A character token type doesn't need to be declared unless you need to specify its semantic value data type (see section Data Types of Semantic Values), associativity, or precedence (see section Operator Precedence).

    By convention, a character token type is used only to represent a token that consists of that particular character. Thus, the token type '+' is used to represent the character `+' as a token. Nothing enforces this convention, but if you depart from it, your program will confuse other readers.

    All the usual escape sequences used in character literals in C can be used in Bison as well, but you must not use the null character as a character literal because its ASCII code, zero, is the code yylex returns for end-of-input (see section Calling Convention for yylex).

  • A literal string token is written like a C string constant; for example, "<=" is a literal string token. A literal string token doesn't need to be declared unless you need to specify its semantic value data type (see section 3.5.1 Data Types of Semantic Values), associativity, or precedence (see section 5.3 Operator Precedence).

    You can associate the literal string token with a symbolic name as an alias, using the %token declaration (see section Token Declarations). If you don't do that, the lexical analyzer has to retrieve the token number for the literal string token from the yytname table (see section 4.2.1 Calling Convention for yylex).

    WARNING: literal string tokens do not work in Yacc.

    By convention, a literal string token is used only to represent a token that consists of that particular string. Thus, you should use the token type "<=" to represent the string `<=' as a token. Bison does not enforce this convention, but if you depart from it, people who read your program will be confused.

    All the escape sequences used in string literals in C can be used in Bison as well. A literal string token must contain two or more characters; for a token containing just one character, use a character token (see above).

How you choose to write a terminal symbol has no effect on its grammatical meaning. That depends only on where it appears in rules and on when the parser function returns that symbol.

The value returned by yylex is always one of the terminal symbols (or 0 for end-of-input). Whichever way you write the token type in the grammar rules, you write it the same way in the definition of yylex. The numeric code for a character token type is simply the ASCII code for the character, so yylex can use the identical character constant to generate the requisite code. Each named token type becomes a C macro in the parser file, so yylex can use the name to stand for the code. (This is why periods don't make sense in terminal symbols.) See section Calling Convention for yylex.
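
For instance, a hand-written yylex might return tokens like this (a sketch, assuming a NUM token for single digits; yylval and the NUM macro are provided by the Bison-generated parser):

 
#include <stdio.h>

int yylex(void)
{
    int c = getchar();

    if (c == EOF)
        return 0;               /* 0 signals end-of-input                   */

    if (c >= '0' && c <= '9')
    {
        yylval = c - '0';       /* the token's semantic value               */
        return NUM;             /* named token type: a macro in the parser  */
    }

    return c;                   /* a character token is its own ASCII code  */
}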

If yylex is defined in a separate file, you need to arrange for the token-type macro definitions to be available there. Use the `-d' option when you run Bison, so that it will write these macro definitions into a separate header file `name.tab.h' which you can include in the other source files that need it. See section Invoking Bison.

The symbol error is a terminal symbol reserved for error recovery (see section 6. Error Recovery); you shouldn't use it for any other purpose. In particular, yylex should never return this value.



3.3 Syntax of Grammar Rules

A Bison grammar rule has the following general form:

 
result: components...
        ;

where result is the nonterminal symbol that this rule describes and components are various terminal and nonterminal symbols that are put together by this rule (see section 3.2 Symbols, Terminal and Nonterminal).

For example,

 
exp:      exp '+' exp
        ;

says that two groupings of type exp, with a `+' token in between, can be combined into a larger grouping of type exp.

Whitespace in rules is significant only to separate symbols. You can add extra whitespace as you wish.

Scattered among the components can be actions that determine the semantics of the rule. An action looks like this:

 
{C statements}

Usually there is only one action and it follows the components. See section 3.5.3 Actions.

Multiple rules for the same result can be written separately or can be joined with the vertical-bar character `|' as follows:

 
result:   rule1-components...
        | rule2-components...
        ...
        ;

They are still considered distinct rules even when joined in this way.

If components in a rule is empty, it means that result can match the empty string. For example, here is how to define a comma-separated sequence of zero or more exp groupings:

 
expseq:   /* empty */
        | expseq1
        ;

expseq1:  exp
        | expseq1 ',' exp
        ;

It is customary to write a comment `/* empty */' in each rule with no components.



3.4 Recursive Rules

A rule is called recursive when its result nonterminal appears also on its right hand side. Nearly all Bison grammars need to use recursion, because that is the only way to define a sequence of any number of somethings. Consider this recursive definition of a comma-separated sequence of one or more expressions:

 
expseq1:  exp
        | expseq1 ',' exp
        ;

Since the recursive use of expseq1 is the leftmost symbol in the right hand side, we call this left recursion. By contrast, here the same construct is defined using right recursion:

 
expseq1:  exp
        | exp ',' expseq1
        ;

Any kind of sequence can be defined using either left recursion or right recursion, but you should always use left recursion, because it can parse a sequence of any number of elements with bounded stack space. Right recursion uses up space on the Bison stack in proportion to the number of elements in the sequence, because all the elements must be shifted onto the stack before the rule can be applied even once. See section The Bison Parser Algorithm , for further explanation of this.

Indirect or mutual recursion occurs when the result of the rule does not appear directly on its right hand side, but does appear in rules for other nonterminals which do appear on its right hand side.

For example:

 
expr:     primary
        | primary '+' primary
        ;

primary:  constant
        | '(' expr ')'
        ;

defines two mutually-recursive nonterminals, since each refers to the other.



3.5 Defining Language Semantics

The grammar rules for a language determine only the syntax. The semantics are determined by the semantic values associated with various tokens and groupings, and by the actions taken when various groupings are recognized.

For example, the calculator calculates properly because the value associated with each expression is the proper number; it adds properly because the action for the grouping `x + y' is to add the numbers associated with x and y.

3.5.1 Data Types of Semantic Values  Specifying one data type for all semantic values.
3.5.2 More Than One Value Type  Specifying several alternative data types.
3.5.3 Actions  An action is the semantic definition of a grammar rule.
3.5.4 Data Types of Values in Actions  Specifying data types for actions to operate on.
3.5.5 Actions in Mid-Rule  Most actions go at the end of a rule. This says when, why and how to use the exceptional action in the middle of a rule.



3.5.1 Data Types of Semantic Values

In a simple program it may be sufficient to use the same data type for the semantic values of all language constructs. This was true in the RPN and infix calculator examples (see section Reverse Polish Notation Calculator).

Bison's default is to use type int for all semantic values. To specify some other type, define YYSTYPE as a macro, like this:

 
#define YYSTYPE double

This macro definition must go in the C declarations section of the grammar file (see section Outline of a Bison Grammar).



3.5.2 More Than One Value Type

In most programs, you will need different data types for different kinds of tokens and groupings. For example, a numeric constant may need type int or long, while a string constant needs type char *, and an identifier might need a pointer to an entry in the symbol table.

To use more than one data type for semantic values in one parser, Bison requires you to do two things:

  • Specify the entire collection of possible data types, with the %union Bison declaration (see section The Collection of Value Types).

  • Choose one of those types for each symbol (terminal or nonterminal) for which semantic values are used. This is done for tokens with the %token Bison declaration (see section Token Type Names) and for groupings with the %type Bison declaration (see section Nonterminal Symbols).
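
A combined sketch of the two steps might look like this (the member names ival and sval and the symbols NUM, STRING and exp are invented for the example):

 
%union {                      /* step 1: the collection of value types  */
  int   ival;
  char *sval;
}

%token <ival> NUM             /* step 2: pick one type per symbol       */
%token <sval> STRING
%type  <ival> exp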



3.5.3 Actions

An action accompanies a syntactic rule and contains C code to be executed each time an instance of that rule is recognized. The task of most actions is to compute a semantic value for the grouping built by the rule from the semantic values associated with tokens or smaller groupings.

An action consists of C statements surrounded by braces, much like a compound statement in C. It can be placed at any position in the rule; it is executed at that position. Most rules have just one action at the end of the rule, following all the components. Actions in the middle of a rule are tricky and used only for special purposes (see section Actions in Mid-Rule).

The C code in an action can refer to the semantic values of the components matched by the rule with the construct $n, which stands for the value of the nth component. The semantic value for the grouping being constructed is $$. (Bison translates both of these constructs into array element references when it copies the actions into the parser file.)

Here is a typical example:

 
exp:    ...
        | exp '+' exp
            { $$ = $1 + $3; }

This rule constructs an exp from two smaller exp groupings connected by a plus-sign token. In the action, $1 and $3 refer to the semantic values of the two component exp groupings, which are the first and third symbols on the right hand side of the rule. The sum is stored into $$ so that it becomes the semantic value of the addition-expression just recognized by the rule. If there were a useful semantic value associated with the `+' token, it could be referred to as $2.

If you don't specify an action for a rule, Bison supplies a default: $$ = $1. Thus, the value of the first symbol in the rule becomes the value of the whole rule. Of course, the default action is valid only if the two data types match. There is no meaningful default action for an empty rule; every empty rule must have an explicit action unless the rule's value does not matter.

$n with n zero or negative is allowed for reference to tokens and groupings on the stack before those that match the current rule. This is a very risky practice, and to use it reliably you must be certain of the context in which the rule is applied. Here is a case in which you can use this reliably:

 
foo:      expr bar '+' expr  { ... }
        | expr bar '-' expr  { ... }
        ;

bar:      /* empty */
        { previous_expr = $0; }
        ;

As long as bar is used only in the fashion shown here, $0 always refers to the expr which precedes bar in the definition of foo.



3.5.4 Data Types of Values in Actions

If you have chosen a single data type for semantic values, the $$ and $n constructs always have that data type.

If you have used %union to specify a variety of data types, then you must declare a choice among these types for each terminal or nonterminal symbol that can have a semantic value. Then each time you use $$ or $n, its data type is determined by which symbol it refers to in the rule. In this example,

 
exp:    ...
        | exp '+' exp
            { $$ = $1 + $3; }

$1 and $3 refer to instances of exp, so they all have the data type declared for the nonterminal symbol exp. If $2 were used, it would have the data type declared for the terminal symbol '+', whatever that might be.

Alternatively, you can specify the data type when you refer to the value, by inserting `<type>' after the `$' at the beginning of the reference. For example, if you have defined types as shown here:

 
%union {
  int itype;
  double dtype;
}

then you can write $<itype>1 to refer to the first subunit of the rule as an integer, or $<dtype>1 to refer to it as a double.



3.5.5 Actions in Mid-Rule

Occasionally it is useful to put an action in the middle of a rule. These actions are written just like usual end-of-rule actions, but they are executed before the parser even recognizes the following components.

A mid-rule action may refer to the components preceding it using $n, but it may not refer to subsequent components because it is run before they are parsed.

The mid-rule action itself counts as one of the components of the rule. This makes a difference when there is another action later in the same rule (and usually there is another at the end): you have to count the actions along with the symbols when working out which number n to use in $n.

The mid-rule action can also have a semantic value. The action can set its value with an assignment to $$, and actions later in the rule can refer to the value using $n. Since there is no symbol to name the action, there is no way to declare a data type for the value in advance, so you must use the `$<...>' construct to specify a data type each time you refer to this value.

There is no way to set the value of the entire rule with a mid-rule action, because assignments to $$ do not have that effect. The only way to set the value for the entire rule is with an ordinary action at the end of the rule.

Here is an example from a hypothetical compiler, handling a let statement that looks like `let (variable) statement' and serves to create a variable named variable temporarily for the duration of statement. To parse this construct, we must put variable into the symbol table while statement is parsed, then remove it afterward. Here is how it is done:

 
stmt:   LET '(' var ')'
                { $<context>$ = push_context ();
                  declare_variable ($3); }
        stmt    { $$ = $6;
                  pop_context ($<context>5); }

As soon as `let (variable)' has been recognized, the first action is run. It saves a copy of the current semantic context (the list of accessible variables) as its semantic value, using alternative context in the data-type union. Then it calls declare_variable to add the new variable to that list. Once the first action is finished, the embedded statement stmt can be parsed. Note that the mid-rule action is component number 5, so the `stmt' is component number 6.

After the embedded statement is parsed, its semantic value becomes the value of the entire let-statement. Then the semantic value from the earlier action is used to restore the prior list of variables. This removes the temporary let-variable from the list so that it won't appear to exist while the rest of the program is parsed.

Taking action before a rule is completely recognized often leads to conflicts since the parser must commit to a parse in order to execute the action. For example, the following two rules, without mid-rule actions, can coexist in a working parser because the parser can shift the open-brace token and look at what follows before deciding whether there is a declaration or not:

 
compound: '{' declarations statements '}'
        | '{' statements '}'
        ;

But when we add a mid-rule action as follows, the rules become nonfunctional:

 
compound: { prepare_for_local_variables (); }
          '{' declarations statements '}'
        | '{' statements '}'
        ;

Now the parser is forced to decide whether to run the mid-rule action when it has read no farther than the open-brace. In other words, it must commit to using one rule or the other, without sufficient information to do it correctly. (The open-brace token is what is called the look-ahead token at this time, since the parser is still deciding what to do about it. See section Look-Ahead Tokens.)

You might think that you could correct the problem by putting identical actions into the two rules, like this:

 
compound: { prepare_for_local_variables (); }
          '{' declarations statements '}'
        | { prepare_for_local_variables (); }
          '{' statements '}'
        ;

But this does not help, because Bison does not realize that the two actions are identical. (Bison never tries to understand the C code in an action.)

If the grammar is such that a declaration can be distinguished from a statement by the first token (which is true in C), then one solution which does work is to put the action after the open-brace, like this:

 
compound: '{' { prepare_for_local_variables (); }
          declarations statements '}'
        | '{' statements '}'
        ;

Now the first token of the following declaration or statement, which would in any case tell Bison which rule to use, can still do so.

Another solution is to bury the action inside a nonterminal symbol which serves as a subroutine:

 
subroutine: /* empty */
          { prepare_for_local_variables (); }
        ;


compound: subroutine
          '{' declarations statements '}'
        | subroutine
          '{' statements '}'
        ;

Now Bison can execute the action in the rule for subroutine without deciding which rule for compound it will eventually use. Note that the action is now at the end of its rule. Any mid-rule action can be converted to an end-of-rule action in this way, and this is what Bison actually does to implement mid-rule actions.



3.6 Bison Declarations

The Bison declarations section of a Bison grammar defines the symbols used in formulating the grammar and the data types of semantic values. See section 3.2 Symbols, Terminal and Nonterminal.

All token type names (but not single-character literal tokens such as '+' and '*') must be declared. Nonterminal symbols must be declared if you need to specify which data type to use for the semantic value (see section More Than One Value Type).

The first rule in the file also specifies the start symbol, by default. If you want some other symbol to be the start symbol, you must declare it explicitly (see section 3.6.6 The Start-Symbol).

3.6.1 Token Type Names  Declaring terminal symbols.
3.6.2 Operator Precedence  Declaring terminals with precedence and associativity.
3.6.3 The Collection of Value Types  Declaring the set of all semantic value types.
3.6.4 Nonterminal Symbols  Declaring the choice of type for a nonterminal symbol.
3.6.5 Suppressing Conflict Warnings  Suppressing warnings about shift/reduce conflicts.
3.6.6 The Start-Symbol  Specifying the start symbol.
3.6.7 A Pure (Reentrant) Parser  Requesting a reentrant parser.
3.6.8 Bison Declaration Summary  Table of all Bison declarations.



3.6.1 Token Type Names

The basic way to declare a token type name (terminal symbol) is as follows:

 
%token name

Bison will convert this into a #define directive in the parser, so that the function yylex (if it is in this file) can use the name name to stand for this token type's code.

Alternatively, you can use %left, %right, or %nonassoc instead of %token, if you wish to specify precedence. See section Operator Precedence.

You can explicitly specify the numeric code for a token type by appending an integer value in the field immediately following the token name:

 
%token NUM 300

It is generally best, however, to let Bison choose the numeric codes for all token types. Bison will automatically select codes that don't conflict with each other or with ASCII characters.

In the event that the stack type is a union, you must augment the %token or other token declaration to include the data type alternative delimited by angle-brackets (see section More Than One Value Type).

For example:

 
%union {              /* define stack type */
  double val;
  symrec *tptr;
}
%token <val> NUM      /* define token NUM and its type */

You can associate a literal string token with a token type name by writing the literal string at the end of a %token declaration which declares the name. For example:

 
%token arrow "=>"

For example, a grammar for the C language might specify these names with equivalent literal string tokens:

 
%token  <operator>  OR      "||"
%token  <operator>  LE 134  "<="
%left  OR  "<="

Once you equate the literal string and the token name, you can use them interchangeably in further declarations or the grammar rules. The yylex function can use the token name or the literal string to obtain the token type code number (see section 4.2.1 Calling Convention for yylex).



3.6.2 Operator Precedence

Use the %left, %right or %nonassoc declaration to declare a token and specify its precedence and associativity, all at once. These are called precedence declarations. See section Operator Precedence, for general information on operator precedence.

The syntax of a precedence declaration is the same as that of %token: either

 
%left symbols...

or

 
%left <type> symbols...

And indeed any of these declarations serves the purposes of %token. But in addition, they specify the associativity and relative precedence for all the symbols:

  • The associativity of an operator op determines how repeated uses of the operator nest: whether `x op y op z' is parsed by grouping x with y first or by grouping y with z first. %left specifies left-associativity (grouping x with y first) and %right specifies right-associativity (grouping y with z first). %nonassoc specifies no associativity, which means that `x op y op z' is considered a syntax error.

  • The precedence of an operator determines how it nests with other operators. All the tokens declared in a single precedence declaration have equal precedence and nest together according to their associativity. When two tokens declared in different precedence declarations associate, the one declared later has the higher precedence and is grouped first.
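
For example, a typical set of precedence declarations for an arithmetic grammar is (a sketch):

 
%left '+' '-'
%left '*' '/'
%right '^'

Because '*' and '/' are declared after '+' and '-', they have the higher precedence, so `x + y * z' groups y with z first; and because '^' is declared right-associative, `x ^ y ^ z' also groups y with z first.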



3.6.3 The Collection of Value Types

The %union declaration specifies the entire collection of possible data types for semantic values. The keyword %union is followed by a pair of braces containing the same thing that goes inside a union in C.

For example:

 
%union {
  double val;
  symrec *tptr;
}

This says that the two alternative types are double and symrec *. They are given names val and tptr; these names are used in the %token and %type declarations to pick one of the types for a terminal or nonterminal symbol (see section Nonterminal Symbols).

Note that, unlike making a union declaration in C, you do not write a semicolon after the closing brace.



3.6.4 Nonterminal Symbols

When you use %union to specify multiple value types, you must declare the value type of each nonterminal symbol for which values are used. This is done with a %type declaration, like this:

 
%type <type> nonterminal...

Here nonterminal is the name of a nonterminal symbol, and type is the name given in the %union to the alternative that you want (see section The Collection of Value Types). You can give any number of nonterminal symbols in the same %type declaration, if they have the same value type. Use spaces to separate the symbol names.

You can also declare the value type of a terminal symbol. To do this, use the same <type> construction in a declaration for the terminal symbol. All kinds of token declarations allow <type>.
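
Continuing the %union example above, the declarations might read as follows (exp, term and factor are hypothetical nonterminals that all use the val alternative):

 
%union {
  double  val;
  symrec *tptr;
}
%token <val> NUM                /* a terminal symbol with a value type     */
%type  <val> exp term factor    /* several nonterminals sharing one type   */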



3.6.5 Suppressing Conflict Warnings

Bison normally warns if there are any conflicts in the grammar (see section Shift/Reduce Conflicts), but most real grammars have harmless shift/reduce conflicts which are resolved in a predictable way and would be difficult to eliminate. It is desirable to suppress the warning about these conflicts unless the number of conflicts changes. You can do this with the %expect declaration.

The declaration looks like this:

 
%expect n

Here n is a decimal integer. The declaration says there should be no warning if there are n shift/reduce conflicts and no reduce/reduce conflicts. The usual warning is given if there are either more or fewer conflicts, or if there are any reduce/reduce conflicts.

In general, using %expect involves these steps:

  • Compile your grammar without %expect. Use the `-v' option to get a verbose list of where the conflicts occur. Bison will also print the number of conflicts.

  • Check each of the conflicts to make sure that Bison's default resolution is what you really want. If not, rewrite the grammar and go back to the beginning.

  • Add an %expect declaration, copying the number n from the number which Bison printed.

Now Bison will stop annoying you about the conflicts you have checked, but it will warn you again if changes in the grammar result in additional conflicts.



3.6.6 The Start-Symbol

Bison assumes by default that the start symbol for the grammar is the first nonterminal specified in the grammar specification section. The programmer may override this restriction with the %start declaration as follows:

 
%start symbol



3.6.7 A Pure (Reentrant) Parser

A reentrant program is one which does not alter in the course of execution; in other words, it consists entirely of pure (read-only) code. Reentrancy is important whenever asynchronous execution is possible; for example, a nonreentrant program may not be safe to call from a signal handler. In systems with multiple threads of control, a nonreentrant program must be called only within interlocks.

Normally, Bison generates a parser which is not reentrant. This is suitable for most uses, and it permits compatibility with YACC. (The standard YACC interfaces are inherently nonreentrant, because they use statically allocated variables for communication with yylex, including yylval and yylloc.)

Alternatively, you can generate a pure, reentrant parser. The Bison declaration %pure_parser says that you want the parser to be reentrant. It looks like this:

 
%pure_parser

The result is that the communication variables yylval and yylloc become local variables in yyparse, and a different calling convention is used for the lexical analyzer function yylex. See section Calling Conventions for Pure Parsers, for the details of this. The variable yynerrs also becomes local in yyparse (see section The Error Reporting Function yyerror). The convention for calling yyparse itself is unchanged.

Whether the parser is pure has nothing to do with the grammar rules. You can generate either a pure parser or a nonreentrant parser from any valid grammar.



3.6.8 Bison Declaration Summary

Here is a summary of all Bison declarations:

%union
Declare the collection of data types that semantic values may have (see section The Collection of Value Types).

%token
Declare a terminal symbol (token type name) with no precedence or associativity specified (see section Token Type Names).

%right
Declare a terminal symbol (token type name) that is right-associative (see section Operator Precedence).

%left
Declare a terminal symbol (token type name) that is left-associative (see section Operator Precedence).

%nonassoc
Declare a terminal symbol (token type name) that is nonassociative (using it in a way that would be associative is a syntax error) (see section Operator Precedence).

%type
Declare the type of semantic values for a nonterminal symbol (see section Nonterminal Symbols).

%start
Specify the grammar's start symbol (see section The Start-Symbol).

%expect
Declare the expected number of shift-reduce conflicts (see section Suppressing Conflict Warnings).

%pure_parser
Request a pure (reentrant) parser program (see section A Pure (Reentrant) Parser).

%no_lines
Don't generate any #line preprocessor commands in the parser file. Ordinarily Bison writes these commands in the parser file so that the C compiler and debuggers will associate errors and object code with your source file (the grammar file). This directive causes them to associate errors with the parser file, treating it as an independent source file in its own right.

%raw
The output file `name.h' normally defines the tokens with Yacc-compatible token numbers. If this option is specified, the internal Bison numbers are used instead. (Yacc-compatible numbers start at 257 except for single character tokens; Bison assigns token numbers sequentially for all tokens starting at 3.)

%token_table
Generate an array of token names in the parser file. The name of the array is yytname; yytname[i] is the name of the token whose internal Bison token code number is i. The first three elements of yytname are always "$", "error", and "$illegal"; after these come the symbols defined in the grammar file.

For single-character literal tokens and literal string tokens, the name in the table includes the single-quote or double-quote characters: for example, "'+'" is a single-character literal and "\"<=\"" is a literal string token. All the characters of the literal string token appear verbatim in the string found in the table; even double-quote characters are not escaped. For example, if the token consists of three characters `*"*', its string in yytname contains `"*"*"'. (In C, that would be written as "\"*\"*\"").

When you specify %token_table, Bison also generates definitions for the macros YYNTOKENS, YYNNTS, YYNRULES, and YYNSTATES:

YYNTOKENS
The highest token number, plus one.
YYNNTS
The number of non-terminal symbols.
YYNRULES
The number of grammar rules.
YYNSTATES
The number of parser states (see section 5.5 Parser States).
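
As an illustration, a fragment placed in the additional C code section of such a parser could list the names of all tokens (a sketch relying only on the yytname array and the YYNTOKENS macro described above):

 
int i;
for (i = 0; i < YYNTOKENS; ++i)
    if (yytname[i])
        printf("token %3d: %s\n", i, yytname[i]);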



3.7 Multiple Parsers in the Same Program

Most programs that use Bison parse only one language and therefore contain only one Bison parser. But what if you want to parse more than one language with the same program? Then you need to avoid a name conflict between different definitions of yyparse, yylval, and so on.

The easy way to do this is to use the option `-p prefix' (see section Invoking Bison). This renames the interface functions and variables of the Bison parser to start with prefix instead of `yy'. You can use this to give each parser distinct names that do not conflict.

The precise list of symbols renamed is yyparse, yylex, yyerror, yynerrs, yylval, yychar and yydebug. For example, if you use `-p c', the names become cparse, clex, and so on.

All the other variables and macros associated with Bison are not renamed. These others are not global; there is no conflict if the same name is used in different parsers. For example, YYSTYPE is not renamed, but defining this in different ways in different parsers causes no trouble (see section Data Types of Semantic Values).

The `-p' option works by adding macro definitions to the beginning of the parser source file, defining yyparse as prefixparse, and so on. This effectively substitutes one name for the other in the entire parser file.
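
For example, a program containing two parsers generated with `-p calc' and `-p cfg' (hypothetical prefixes) could drive them like this (a sketch):

 
int calcparse(void);          /* yyparse renamed by `bison -p calc'  */
int cfgparse(void);           /* yyparse renamed by `bison -p cfg'   */

int main()
{
    if (cfgparse() != 0)      /* first read the configuration        */
        return 1;
    return calcparse();       /* then process the calculations       */
}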



Short Table of Contents

Introduction
Conditions for Using Bison
GNU GENERAL PUBLIC LICENSE
1. The Concepts of Bison
2. Examples
3. Bison Grammar Files
4. Parser C-Language Interface
5. The Bison Parser Algorithm
6. Error Recovery
7. Handling Context Dependencies
8. Debugging Your Parser
9. Invoking Bison
A. Bison Symbols
B. Glossary
Index



GNU GENERAL PUBLIC LICENSE

Version 2, June 1991

 
Copyright © 1989, 1991 Free Software Foundation, Inc.
59 Temple Place - Suite 330, Boston, MA 02111-1307, USA

Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.



Preamble

The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.

To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it.

For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.

We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software.

Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations.

Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all.

The precise terms and conditions for copying, distribution and modification follow.

TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

  1. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you".

    Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does.

  2. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program.

    You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee.

  3. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions:

    1. You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change.

    2. You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License.

    3. If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.)

    These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it.

    Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program.

    In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License.

  4. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following:

    1. Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,

    2. Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,

    3. Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.)

    The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable.

    If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code.

  5. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.

  6. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it.

  7. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License.

  8. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program.

    If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances.

    It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice.

    This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.

  9. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.

  10. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

    Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation.

  11. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally.

    NO WARRANTY

  12. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

  13. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

END OF TERMS AND CONDITIONS



How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.

 
one line to give the program's name and a brief idea of what it does.
Copyright (C) 19yy  name of author

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place - Suite 330,
Boston, MA 02111-1307, USA.

Also add information on how to contact you by electronic and paper mail.

If the program is interactive, make it output a short notice like this when it starts in an interactive mode:

 
Gnomovision version 69, Copyright (C) 19yy name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details 
type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program.

You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names:

 
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.

signature of Ty Coon, 1 April 1989
Ty Coon, President of Vice

This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Library General Public License instead of this License.



This document was generated by Frank B. Brokken on January, 28 2005 using texi2html bisonc++-4.13.01/documentation/html/bison_13.html0000644000175000017500000003047412633316117020402 0ustar frankfrank Bison 2.21.5: Table of Symbols

A. Bison Symbols

error
A token name reserved for error recovery. This token may be used in grammar rules so as to allow the Bison parser to recognize an error in the grammar without halting the process. In effect, a sentence containing an error may be recognized as valid. On a parse error, the token error becomes the current look-ahead token. Actions corresponding to error are then executed, and the look-ahead token is reset to the token that originally caused the violation. See section 6. Error Recovery.
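
A typical use (an illustrative sketch, not a rule taken from any particular grammar) discards an erroneous statement and resumes parsing at the next semicolon:

 
stmt:     expr ';'
        | error ';'     { yyerrok; }
        ;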

YYABORT
Macro to pretend that an unrecoverable syntax error has occurred, by making yyparse return 1 immediately. The error reporting function yyerror is not called. See section The Parser Function yyparse.

YYACCEPT
Macro to pretend that a complete utterance of the language has been read, by making yyparse return 0 immediately. See section The Parser Function yyparse.

YYBACKUP
Macro to discard a value from the parser stack and fake a look-ahead token. See section Special Features for Use in Actions.

YYERROR
Macro to pretend that a syntax error has just been detected: call yyerror and then perform normal error recovery if possible (see section 6. Error Recovery), or (if recovery is impossible) make yyparse return 1. See section 6. Error Recovery.

YYERROR_VERBOSE
Macro that you define with #define in the Bison declarations section to request verbose, specific error message strings when yyerror is called.

YYINITDEPTH
Macro for specifying the initial size of the parser stack. See section 5.8 Stack Overflow, and How to Avoid It.

YYLEX_PARAM
Macro for specifying an extra argument (or list of extra arguments) for yyparse to pass to yylex. See section Calling Conventions for Pure Parsers.

YYLTYPE
Macro for the data type of yylloc; a structure with four members. See section Textual Positions of Tokens.

yyltype
Default value for YYLTYPE.

YYMAXDEPTH
Macro for specifying the maximum size of the parser stack. See section 5.8 Stack Overflow, and How to Avoid It.

YYPARSE_PARAM
Macro for specifying the name of a parameter that yyparse should accept. See section Calling Conventions for Pure Parsers.

YYRECOVERING
Macro whose value indicates whether the parser is recovering from a syntax error. See section Special Features for Use in Actions.

YYSTYPE
Macro for the data type of semantic values; int by default. See section Data Types of Semantic Values.

yychar
External integer variable that contains the integer value of the current look-ahead token. (In a pure parser, it is a local variable within yyparse.) Error-recovery rule actions may examine this variable. See section Special Features for Use in Actions.

yyclearin
Macro used in error-recovery rule actions. It clears the previous look-ahead token. See section 6. Error Recovery.

yydebug
External integer variable set to zero by default. If yydebug is given a nonzero value, the parser will output information on input symbols and parser action. See section Debugging Your Parser.

yyerrok
Macro to cause parser to recover immediately to its normal mode after a parse error. See section 6. Error Recovery.

yyerror
User-supplied function to be called by yyparse on error. The function receives one argument, a pointer to a character string containing an error message. See section The Error Reporting Function yyerror.

yylex
User-supplied lexical analyzer function, called with no arguments to get the next token. See section The Lexical Analyzer Function yylex.

yylval
External variable in which yylex should place the semantic value associated with a token. (In a pure parser, it is a local variable within yyparse, and its address is passed to yylex.) See section Semantic Values of Tokens.

yylloc
External variable in which yylex should place the line and column numbers associated with a token. (In a pure parser, it is a local variable within yyparse, and its address is passed to yylex.) You can ignore this variable if you don't use the `@' feature in the grammar actions. See section Textual Positions of Tokens.

yynerrs
Global variable which Bison increments each time there is a parse error. (In a pure parser, it is a local variable within yyparse.) See section The Error Reporting Function yyerror.

yyparse
The parser function produced by Bison; call this function to start parsing. See section The Parser Function yyparse.

%left
Bison declaration to assign left associativity to token(s). See section Operator Precedence.

%no_lines
Bison declaration to avoid generating #line directives in the parser file. See section 3.6.8 Bison Declaration Summary.

%nonassoc
Bison declaration to assign nonassociativity to token(s). See section Operator Precedence.

%prec
Bison declaration to assign a precedence to a specific rule. See section Context-Dependent Precedence.

%pure_parser
Bison declaration to request a pure (reentrant) parser. See section A Pure (Reentrant) Parser.

%raw
Bison declaration to use Bison internal token code numbers in token tables instead of the usual Yacc-compatible token code numbers. See section 3.6.8 Bison Declaration Summary.

%right
Bison declaration to assign right associativity to token(s). See section Operator Precedence.

%start
Bison declaration to specify the start symbol. See section The Start-Symbol.

%token
Bison declaration to declare token(s) without specifying precedence. See section Token Type Names.

%token_table
Bison declaration to include a token name table in the parser file. See section 3.6.8 Bison Declaration Summary.

%type
Bison declaration to declare nonterminals. See section Nonterminal Symbols.

%union
Bison declaration to specify several possible data types for semantic values. See section The Collection of Value Types.
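
For example, a grammar whose semantic values are either integers or strings could declare (an illustrative sketch; the token and nonterminal names are made up):

 
%union {
    int   num;
    char *str;
}
%token <num> INTEGER
%token <str> NAME
%type  <num> expr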

These are the punctuation and delimiters used in Bison input; a minimal grammar showing where they appear follows the list:

`%%'
Delimiter used to separate the grammar rule section from the Bison declarations section or the additional C code section. See section The Overall Layout of a Bison Grammar.

`%{ %}'
All code listed between `%{' and `%}' is copied directly to the output file uninterpreted. Such code forms the "C declarations" section of the input file. See section Outline of a Bison Grammar.

`/*...*/'
Comment delimiters, as in C.

`:'
Separates a rule's result from its components. See section Syntax of Grammar Rules.

`;'
Terminates a rule. See section Syntax of Grammar Rules.

`|'
Separates alternate rules for the same result nonterminal. See section Syntax of Grammar Rules.
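
The following minimal grammar (an illustrative sketch, not an example taken from the Bison distribution) shows where these delimiters appear: the C declarations between `%{' and `%}', the two `%%' separators, and `:', `|' and `;' in the rules section.

 
%{
#include <stdio.h>
#include <ctype.h>
int yylex(void);
void yyerror(char const *msg);
%}

%token NUM

%%

input   : /* empty */
        | input line
        ;

line    : '\n'
        | NUM '\n'      { printf("%d\n", $1); }
        ;

%%

int yylex(void)
{
    int c = getchar();
    if (isdigit(c))
    {
        yylval = c - '0';       /* single-digit numbers only */
        return NUM;
    }
    return c == EOF ? 0 : c;    /* 0 tells the parser: end of input */
}

void yyerror(char const *msg)
{
    fprintf(stderr, "%s\n", msg);
}

int main(void)
{
    return yyparse();
}

Since no %union or YYSTYPE is given, semantic values default to int, so the value of NUM can be printed directly in the action.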



This document was generated by Frank B. Brokken on January, 28 2005 using texi2html bisonc++-4.13.01/documentation/html/bison_toc.html0000644000175000017500000002265312633316117020744 0ustar frankfrank Bison 2.21.5: Table of Contents

Table of Contents

Introduction
Conditions for Using Bison
GNU GENERAL PUBLIC LICENSE
Preamble
How to Apply These Terms to Your New Programs
1. The Concepts of Bison
1.1 Languages and Context-Free Grammars
1.2 From Formal Rules to Bison Input
1.3 Semantic Values
1.4 Semantic Actions
1.5 Bison Output: the Parser File
1.6 Stages in Using Bison
1.7 The Overall Layout of a Bison Grammar
2. Examples
2.1 Reverse Polish Notation Calculator
2.1.1 Declarations for rpcalc
2.1.2 Grammar Rules for rpcalc
2.1.2.1 Explanation of input
2.1.2.2 Explanation of line
2.1.2.3 Explanation of expr
2.1.3 The rpcalc Lexical Analyzer
2.1.4 The Controlling Function
2.1.5 The Error Reporting Routine
2.1.6 Running Bison to Make the Parser
2.1.7 Compiling the Parser File
2.2 Infix Notation Calculator: calc
2.3 Simple Error Recovery
2.4 Multi-Function Calculator: mfcalc
2.4.1 Declarations for mfcalc
2.4.2 Grammar Rules for mfcalc
2.4.3 The mfcalc Symbol Table
2.5 Exercises
3. Bison Grammar Files
3.1 Outline of a Bison Grammar
3.1.1 The C Declarations Section
3.1.2 The Bison Declarations Section
3.1.3 The Grammar Rules Section
3.1.4 The Additional C Code Section
3.2 Symbols, Terminal and Nonterminal
3.3 Syntax of Grammar Rules
3.4 Recursive Rules
3.5 Defining Language Semantics
3.5.1 Data Types of Semantic Values
3.5.2 More Than One Value Type
3.5.3 Actions
3.5.4 Data Types of Values in Actions
3.5.5 Actions in Mid-Rule
3.6 Bison Declarations
3.6.1 Token Type Names
3.6.2 Operator Precedence
3.6.3 The Collection of Value Types
3.6.4 Nonterminal Symbols
3.6.5 Suppressing Conflict Warnings
3.6.6 The Start-Symbol
3.6.7 A Pure (Reentrant) Parser
3.6.8 Bison Declaration Summary
3.7 Multiple Parsers in the Same Program
4. Parser C-Language Interface
4.1 The Parser Function yyparse
4.2 The Lexical Analyzer Function yylex
4.2.1 Calling Convention for yylex
4.2.2 Semantic Values of Tokens
4.2.3 Textual Positions of Tokens
4.2.4 Calling Conventions for Pure Parsers
4.3 The Error Reporting Function yyerror
4.4 Special Features for Use in Actions
5. The Bison Parser Algorithm
5.1 Look-Ahead Tokens
5.2 Shift/Reduce Conflicts
5.3 Operator Precedence
5.3.1 When Precedence is Needed
5.3.2 Specifying Operator Precedence
5.3.3 Precedence Examples
5.3.4 How Precedence Works
5.4 Context-Dependent Precedence
5.5 Parser States
5.6 Reduce/Reduce Conflicts
5.7 Mysterious Reduce/Reduce Conflicts
5.8 Stack Overflow, and How to Avoid It
6. Error Recovery
7. Handling Context Dependencies
7.1 Semantic Info in Token Types
7.2 Lexical Tie-ins
7.3 Lexical Tie-ins and Error Recovery
8. Debugging Your Parser
9. Invoking Bison
9.1 Bison Options
9.2 Option Cross Key
9.3 Invoking Bison under VMS
A. Bison Symbols
B. Glossary
Index


This document was generated by Frank B. Brokken on January, 28 2005 using texi2html bisonc++-4.13.01/documentation/html/bison_15.html0000644000175000017500000006275512633316117020413 0ustar frankfrank Bison 2.21.5: Index

Index


Index Entry Section

$
$$3.5.3 Actions
$n3.5.3 Actions

%
%expect3.6.5 Suppressing Conflict Warnings
%left5.3.2 Specifying Operator Precedence
%nonassoc5.3.2 Specifying Operator Precedence
%prec5.4 Context-Dependent Precedence
%pure_parser3.6.7 A Pure (Reentrant) Parser
%right5.3.2 Specifying Operator Precedence
%start3.6.6 The Start-Symbol
%token3.6.1 Token Type Names
%type3.6.4 Nonterminal Symbols
%union3.6.3 The Collection of Value Types

@
@n4.4 Special Features for Use in Actions

|
|3.3 Syntax of Grammar Rules

A
action3.5.3 Actions
action data types3.5.4 Data Types of Values in Actions
action features summary4.4 Special Features for Use in Actions
actions in mid-rule3.5.5 Actions in Mid-Rule
actions, semantic1.4 Semantic Actions
additional C code section3.1.4 The Additional C Code Section
algorithm of parser5. The Bison Parser Algorithm
associativity5.3.1 When Precedence is Needed

B
Backus-Naur form1.1 Languages and Context-Free Grammars
Bison declaration summary3.6.8 Bison Declaration Summary
Bison declarations3.6 Bison Declarations
Bison declarations (introduction)3.1.2 The Bison Declarations Section
Bison grammar1.2 From Formal Rules to Bison Input
Bison invocation9. Invoking Bison
Bison parser1.5 Bison Output: the Parser File
Bison parser algorithm5. The Bison Parser Algorithm
Bison symbols, table ofA. Bison Symbols
Bison utility1.5 Bison Output: the Parser File
BNF1.1 Languages and Context-Free Grammars

C
C code, section for additional3.1.4 The Additional C Code Section
C declarations section3.1.1 The C Declarations Section
C-language interface4. Parser C-Language Interface
calc2.2 Infix Notation Calculator: calc
calculator, infix notation2.2 Infix Notation Calculator: calc
calculator, multi-function2.4 Multi-Function Calculator: mfcalc
calculator, simple2.1 Reverse Polish Notation Calculator
character token3.2 Symbols, Terminal and Nonterminal
compiling the parser2.1.7 Compiling the Parser File
conflicts5.2 Shift/Reduce Conflicts
conflicts, reduce/reduce5.6 Reduce/Reduce Conflicts
conflicts, suppressing warnings of3.6.5 Suppressing Conflict Warnings
context-dependent precedence5.4 Context-Dependent Precedence
context-free grammar1.1 Languages and Context-Free Grammars
controlling function2.1.4 The Controlling Function

D
dangling else5.2 Shift/Reduce Conflicts
data types in actions3.5.4 Data Types of Values in Actions
data types of semantic values3.5.1 Data Types of Semantic Values
debugging8. Debugging Your Parser
declaration summary3.6.8 Bison Declaration Summary
declarations, Bison3.6 Bison Declarations
declarations, Bison (introduction)3.1.2 The Bison Declarations Section
declarations, C3.1.1 The C Declarations Section
declaring literal string tokens3.6.1 Token Type Names
declaring operator precedence3.6.2 Operator Precedence
declaring the start symbol3.6.6 The Start-Symbol
declaring token type names3.6.1 Token Type Names
declaring value types3.6.3 The Collection of Value Types
declaring value types, nonterminals3.6.4 Nonterminal Symbols
default action3.5.3 Actions
default data type3.5.1 Data Types of Semantic Values
default stack limit5.8 Stack Overflow, and How to Avoid It
default start symbol3.6.6 The Start-Symbol
defining language semantics3.5 Defining Language Semantics

E
else, dangling5.2 Shift/Reduce Conflicts
error6. Error Recovery
error recovery6. Error Recovery
error recovery, simple2.3 Simple Error Recovery
error reporting function4.3 The Error Reporting Function yyerror
error reporting routine2.1.5 The Error Reporting Routine
examples, simple2. Examples
exercises2.5 Exercises

F
file format1.7 The Overall Layout of a Bison Grammar
finite-state machine5.5 Parser States
formal grammar1.2 From Formal Rules to Bison Input
format of grammar file1.7 The Overall Layout of a Bison Grammar

G
glossaryB. Glossary
grammar file1.7 The Overall Layout of a Bison Grammar
grammar rule syntax3.3 Syntax of Grammar Rules
grammar rules section3.1.3 The Grammar Rules Section
grammar, Bison1.2 From Formal Rules to Bison Input
grammar, context-free1.1 Languages and Context-Free Grammars
grouping, syntactic1.1 Languages and Context-Free Grammars

I
infix notation calculator2.2 Infix Notation Calculator: calc
interface4. Parser C-Language Interface
introductionIntroduction
invoking Bison9. Invoking Bison
invoking Bison under VMS9.3 Invoking Bison under VMS

L
LALR(1)5.7 Mysterious Reduce/Reduce Conflicts
language semantics, defining3.5 Defining Language Semantics
layout of Bison grammar1.7 The Overall Layout of a Bison Grammar
left recursion3.4 Recursive Rules
lexical analyzer4.2 The Lexical Analyzer Function yylex
lexical analyzer, purpose1.5 Bison Output: the Parser File
lexical analyzer, writing2.1.3 The rpcalc Lexical Analyzer
lexical tie-in7.2 Lexical Tie-ins
literal string token3.2 Symbols, Terminal and Nonterminal
literal token3.2 Symbols, Terminal and Nonterminal
look-ahead token5.1 Look-Ahead Tokens
LR(1)5.7 Mysterious Reduce/Reduce Conflicts



This document was generated by Frank B. Brokken on January, 28 2005 using texi2html bisonc++-4.13.01/documentation/html/bison_16.html0000644000175000017500000006030412633316117020400 0ustar frankfrank Bison 2.21.5: Index: M -- Y

Index: M -- Y


Index Entry Section

M
main function in simple example2.1.4 The Controlling Function
mfcalc2.4 Multi-Function Calculator: mfcalc
mid-rule actions3.5.5 Actions in Mid-Rule
multi-character literal3.2 Symbols, Terminal and Nonterminal
multi-function calculator2.4 Multi-Function Calculator: mfcalc
mutual recursion3.4 Recursive Rules

N
nonterminal symbol3.2 Symbols, Terminal and Nonterminal

O
operator precedence5.3 Operator Precedence
operator precedence, declaring3.6.2 Operator Precedence
options for invoking Bison9. Invoking Bison
overflow of parser stack5.8 Stack Overflow, and How to Avoid It

P
parse error4.3 The Error Reporting Function yyerror
parser1.5 Bison Output: the Parser File
parser stack5. The Bison Parser Algorithm
parser stack overflow5.8 Stack Overflow, and How to Avoid It
parser state5.5 Parser States
polish notation calculator2.1 Reverse Polish Notation Calculator
precedence declarations3.6.2 Operator Precedence
precedence of operators5.3 Operator Precedence
precedence, context-dependent5.4 Context-Dependent Precedence
precedence, unary operator5.4 Context-Dependent Precedence
preventing warnings about conflicts3.6.5 Suppressing Conflict Warnings
pure parser3.6.7 A Pure (Reentrant) Parser

R
recovery from errors6. Error Recovery
recursive rule3.4 Recursive Rules
reduce/reduce conflict5.6 Reduce/Reduce Conflicts
reduction5. The Bison Parser Algorithm
reentrant parser3.6.7 A Pure (Reentrant) Parser
reverse polish notation2.1 Reverse Polish Notation Calculator
right recursion3.4 Recursive Rules
rpcalc2.1 Reverse Polish Notation Calculator
rule syntax3.3 Syntax of Grammar Rules
rules section for grammar3.1.3 The Grammar Rules Section
running Bison (introduction)2.1.6 Running Bison to Make the Parser

S
semantic actions1.4 Semantic Actions
semantic value1.3 Semantic Values
semantic value type3.5.1 Data Types of Semantic Values
shift/reduce conflicts5.2 Shift/Reduce Conflicts
shifting5. The Bison Parser Algorithm
simple examples2. Examples
single-character literal3.2 Symbols, Terminal and Nonterminal
stack overflow5.8 Stack Overflow, and How to Avoid It
stack, parser5. The Bison Parser Algorithm
stages in using Bison1.6 Stages in Using Bison
start symbol1.1 Languages and Context-Free Grammars
start symbol, declaring3.6.6 The Start-Symbol
state (of parser)5.5 Parser States
string token3.2 Symbols, Terminal and Nonterminal
summary, action features4.4 Special Features for Use in Actions
summary, Bison declaration3.6.8 Bison Declaration Summary
suppressing conflict warnings3.6.5 Suppressing Conflict Warnings
symbol3.2 Symbols, Terminal and Nonterminal
symbol table example2.4.3 The mfcalc Symbol Table
symbols (abstract)1.1 Languages and Context-Free Grammars
symbols in Bison, table ofA. Bison Symbols
syntactic grouping1.1 Languages and Context-Free Grammars
syntax error4.3 The Error Reporting Function yyerror
syntax of grammar rules3.3 Syntax of Grammar Rules

T
terminal symbol3.2 Symbols, Terminal and Nonterminal
token1.1 Languages and Context-Free Grammars
token type3.2 Symbols, Terminal and Nonterminal
token type names, declaring3.6.1 Token Type Names
tracing the parser8. Debugging Your Parser

U
unary operator precedence5.4 Context-Dependent Precedence
using Bison1.6 Stages in Using Bison

V
value type, semantic3.5.1 Data Types of Semantic Values
value types, declaring3.6.3 The Collection of Value Types
value types, nonterminals, declaring3.6.4 Nonterminal Symbols
value, semantic1.3 Semantic Values
VMS9.3 Invoking Bison under VMS

W
warnings, preventing3.6.5 Suppressing Conflict Warnings
writing a lexical analyzer2.1.3 The rpcalc Lexical Analyzer

Y
YYABORT4.1 The Parser Function yyparse
YYACCEPT4.1 The Parser Function yyparse
YYBACKUP4.4 Special Features for Use in Actions
yychar5.1 Look-Ahead Tokens
yyclearin6. Error Recovery
YYDEBUG8. Debugging Your Parser
yydebug8. Debugging Your Parser
YYEMPTY4.4 Special Features for Use in Actions
yyerrok6. Error Recovery
YYERROR4.4 Special Features for Use in Actions
yyerror4.3 The Error Reporting Function yyerror
YYERROR_VERBOSE4.3 The Error Reporting Function yyerror
YYINITDEPTH5.8 Stack Overflow, and How to Avoid It
yylex4.2 The Lexical Analyzer Function yylex
YYLEX_PARAM4.2.4 Calling Conventions for Pure Parsers
yylloc4.2.3 Textual Positions of Tokens
YYLTYPE4.2.3 Textual Positions of Tokens
yylval4.2.2 Semantic Values of Tokens
YYMAXDEPTH5.8 Stack Overflow, and How to Avoid It
yynerrs4.3 The Error Reporting Function yyerror
yyparse4.1 The Parser Function yyparse
YYPARSE_PARAM4.2.4 Calling Conventions for Pure Parsers
YYPRINT8. Debugging Your Parser
YYRECOVERING6. Error Recovery




This document was generated by Frank B. Brokken on January, 28 2005 using texi2html bisonc++-4.13.01/documentation/html/bison_10.html0000644000175000017500000003506412633316117020377 0ustar frankfrank Bison 2.21.5: Context Dependency

7. Handling Context Dependencies

The Bison paradigm is to parse tokens first, then group them into larger syntactic units. In many languages, the meaning of a token is affected by its context. Although this violates the Bison paradigm, certain techniques (known as kludges) may enable you to write Bison parsers for such languages.

7.1 Semantic Info in Token Types  Token parsing can depend on the semantic context.
7.2 Lexical Tie-ins  Token parsing can depend on the syntactic context.
7.3 Lexical Tie-ins and Error Recovery  Lexical tie-ins have implications for how error recovery rules must be written.

(Actually, "kludge" means any technique that gets its job done but is neither clean nor robust.)



7.1 Semantic Info in Token Types

The C language has a context dependency: the way an identifier is used depends on what its current meaning is. For example, consider this:

 
foo (x);

This looks like a function call statement, but if foo is a typedef name, then this is actually a declaration of x. How can a Bison parser for C decide how to parse this input?

The method used in GNU C is to have two different token types, IDENTIFIER and TYPENAME. When yylex finds an identifier, it looks up the current declaration of the identifier in order to decide which token type to return: TYPENAME if the identifier is declared as a typedef, IDENTIFIER otherwise.
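
In outline, a lexical analyzer using this method might look as follows. This is only a sketch: symbol_is_typedef and save_name are hypothetical helpers (real ones would consult a symbol table maintained by the grammar's declaration actions), and TYPENAME, IDENTIFIER and the sval member of yylval are assumed to be declared in the grammar file.

 
#include <stdio.h>
#include <ctype.h>

extern int   symbol_is_typedef(char const *name);   /* hypothetical helper */
extern char *save_name(char const *name);           /* hypothetical helper */

int yylex(void)
{
    int c;

    while (isspace(c = getchar()))          /* skip white space */
        ;

    if (isalpha(c) || c == '_')             /* identifier or typedef name */
    {
        char name[128];
        int  len = 0;

        do
        {
            if (len < 127)
                name[len++] = c;
            c = getchar();
        }
        while (isalnum(c) || c == '_');
        ungetc(c, stdin);
        name[len] = '\0';

        yylval.sval = save_name(name);
        return symbol_is_typedef(name) ? TYPENAME : IDENTIFIER;
    }

    return c == EOF ? 0 : c;                /* single-character tokens */
}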

The grammar rules can then express the context dependency by the choice of token type to recognize. IDENTIFIER is accepted as an expression, but TYPENAME is not. TYPENAME can start a declaration, but IDENTIFIER cannot. In contexts where the meaning of the identifier is not significant, such as in declarations that can shadow a typedef name, either TYPENAME or IDENTIFIER is accepted--there is one rule for each of the two token types.

This technique is simple to use if the decision of which kinds of identifiers to allow is made at a place close to where the identifier is parsed. But in C this is not always so: C allows a declaration to redeclare a typedef name provided an explicit type has been specified earlier:

 
typedef int foo, bar, lose;
static foo (bar);        /* redeclare bar as static variable */
static int foo (lose);   /* redeclare foo as function */

Unfortunately, the name being declared is separated from the declaration construct itself by a complicated syntactic structure--the "declarator".

As a result, part of the Bison parser for C needs to be duplicated, with all the nonterminal names changed: once for parsing a declaration in which a typedef name can be redefined, and once for parsing a declaration in which that can't be done. Here is a part of the duplication, with actions omitted for brevity:

 
initdcl:
          declarator maybeasm '='
          init
        | declarator maybeasm
        ;

notype_initdcl:
          notype_declarator maybeasm '='
          init
        | notype_declarator maybeasm
        ;

Here initdcl can redeclare a typedef name, but notype_initdcl cannot. The distinction between declarator and notype_declarator is the same sort of thing.

There is some similarity between this technique and a lexical tie-in (described next), in that information which alters the lexical analysis is changed during parsing by other parts of the program. The difference is that here the information is global, and is used for other purposes in the program. A true lexical tie-in has a special-purpose flag controlled by the syntactic context.



7.2 Lexical Tie-ins

One way to handle context-dependency is the lexical tie-in: a flag which is set by Bison actions, whose purpose is to alter the way tokens are parsed.

For example, suppose we have a language vaguely like C, but with a special construct `hex (hex-expr)'. After the keyword hex comes an expression in parentheses in which all integers are hexadecimal. In particular, the token `a1b' must be treated as an integer rather than as an identifier if it appears in that context. Here is how you can do it:

 
%{
int hexflag;
%}
%%
...
expr:   IDENTIFIER
        | constant
        | HEX '('
                { hexflag = 1; }
          expr ')'
                { hexflag = 0;
                   $$ = $4; }
        | expr '+' expr
                { $$ = make_sum ($1, $3); }
        ...
        ;

constant:
          INTEGER
        | STRING
        ;

Here we assume that yylex looks at the value of hexflag; when it is nonzero, all integers are parsed in hexadecimal, and tokens starting with letters are parsed as integers if possible.

The declaration of hexflag shown in the C declarations section of the parser file is needed to make it accessible to the actions (see section The C Declarations Section). You must also write the code in yylex to obey the flag.
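
The corresponding code in yylex might look like this. Again only a sketch: lookup_identifier is a hypothetical helper, HEX, INTEGER and IDENTIFIER are the token names used in the grammar above, and semantic values are assumed to be plain int.

 
#include <stdio.h>
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

extern int hexflag;                /* set and cleared by the parser actions */
extern int lookup_identifier(char const *name);      /* hypothetical helper */

int yylex(void)
{
    int c;

    while (isspace(c = getchar()))
        ;

    if (isalnum(c))                 /* a keyword, number, or identifier */
    {
        char  buf[64];
        int   len = 0;
        char *end;
        long  value;

        do
        {
            if (len < 63)
                buf[len++] = c;
            c = getchar();
        }
        while (isalnum(c));
        ungetc(c, stdin);
        buf[len] = '\0';

        if (strcmp(buf, "hex") == 0)
            return HEX;

        value = strtol(buf, &end, hexflag ? 16 : 10);
        if (*end == '\0')           /* the whole word is a valid number */
        {
            yylval = value;
            return INTEGER;
        }

        yylval = lookup_identifier(buf);
        return IDENTIFIER;
    }

    return c == EOF ? 0 : c;
}

Only the base passed to strtol depends on hexflag; everything else is ordinary lexical analysis.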



7.3 Lexical Tie-ins and Error Recovery

Lexical tie-ins make strict demands on any error recovery rules you have. See section 6. Error Recovery.

The reason for this is that the purpose of an error recovery rule is to abort the parsing of one construct and resume in some larger construct. For example, in C-like languages, a typical error recovery rule is to skip tokens until the next semicolon, and then start a new statement, like this:

 
stmt:   expr ';'
        | IF '(' expr ')' stmt { ... }
        ...
        | error ';'
                { hexflag = 0; }
        ;

If there is a syntax error in the middle of a `hex (expr)' construct, this error rule will apply, and then the action for the completed `hex (expr)' will never run. So hexflag would remain set for the entire rest of the input, or until the next hex keyword, causing identifiers to be misinterpreted as integers.

To avoid this problem the error recovery rule itself clears hexflag.

There may also be an error recovery rule that works within expressions. For example, there could be a rule which applies within parentheses and skips to the close-parenthesis:

 
expr:   ...
        | '(' expr ')'
                { $$ = $2; }
        | '(' error ')'
        ...

If this rule acts within the hex construct, it is not going to abort that construct (since it applies to an inner level of parentheses within the construct). Therefore, it should not clear the flag: the rest of the hex construct should be parsed with the flag still in effect.

What if there is an error recovery rule which might abort out of the hex construct or might not, depending on circumstances? There is no way you can write the action to determine whether a hex construct is being aborted or not. So if you are using a lexical tie-in, you had better make sure your error recovery rules are not of this kind. Each rule must be such that you can be sure that it always will, or always won't, have to clear the flag.



This document was generated by Frank B. Brokken on January, 28 2005 using texi2html bisonc++-4.13.01/element/0000755000175000017500000000000012633316117013703 5ustar frankfrankbisonc++-4.13.01/element/destructor.cc0000644000175000017500000000006012633316117016404 0ustar frankfrank#include "element.ih" Element::~Element() { } bisonc++-4.13.01/element/element.h0000644000175000017500000000266012633316117015511 0ustar frankfrank#ifndef _INCLUDED_ELEMENT_ #define _INCLUDED_ELEMENT_ #include #include // Placeholder in FirstSet for Symbols, to prevent circular class // dependencies // // Also defines basic display functionality: // Display-forms of symbols: // Symbol Type operator<< literal() value() // display() // ------------------------------------------------------------------ // Terminal (printable) 'x' 'x' 123 // (if so provided) // Terminal (nonprintable) '\xdd' '\xdd' 123 // (if so provided) // Terminal (Symbolic) `NAME' NAME 281 // NonTerm `name' name 12 // ------------------------------------------------------------------ // original // Terminal::insert inserts the symbol followed by (= value) class Element { friend std::ostream &operator<<(std::ostream &out, Element const *el); public: virtual ~Element(); size_t value() const; private: virtual size_t v_value() const = 0; virtual std::ostream &insert(std::ostream &out) const = 0; }; inline size_t Element::value() const { return v_value(); } inline std::ostream &operator<<(std::ostream &out, Element const *element) { return element->insert(out); } #endif bisonc++-4.13.01/element/element.ih0000644000175000017500000000005412633316117015655 0ustar frankfrank#include "element.h" using namespace std; bisonc++-4.13.01/element/frame0000644000175000017500000000004512633316117014717 0ustar frankfrank#include "element.ih" Element:: { } bisonc++-4.13.01/enumsolution/0000755000175000017500000000000012633316117015013 5ustar frankfrankbisonc++-4.13.01/enumsolution/enumsolution.h0000644000175000017500000000026512633316117017730 0ustar frankfrank#ifndef _INCLUDED_ENUMSOLUTION_ #define _INCLUDED_ENUMSOLUTION_ namespace Enum { enum Solution { UNDECIDED, SHIFT, REDUCE, }; } #endif bisonc++-4.13.01/firstset/0000755000175000017500000000000012633316117014115 5ustar frankfrankbisonc++-4.13.01/firstset/operatorplusis1.cc0000644000175000017500000000024012633316117017574 0ustar frankfrank#include "firstset.ih" FirstSet &FirstSet::operator+=(FirstSet const &other) { *this += other.set(); d_epsilon |= other.d_epsilon; return *this; } bisonc++-4.13.01/firstset/firstset1.cc0000644000175000017500000000023012633316117016343 0ustar frankfrank#include "firstset.ih" FirstSet::FirstSet(Element const *terminal) : std::set(&terminal, &terminal + 1), d_epsilon(false) {} bisonc++-4.13.01/firstset/operatorplusis2.cc0000644000175000017500000000027312633316117017603 0ustar frankfrank#include "firstset.ih" FirstSet &FirstSet::operator+=(std::set const &terminalSet) { Baseclass::insert(terminalSet.begin(), terminalSet.end()); return *this; } bisonc++-4.13.01/firstset/oinsert.cc0000644000175000017500000000046212633316117016111 0ustar frankfrank#include "firstset.ih" ostream &FirstSet::insert(ostream &out) const { out << "{ "; // passing Element const * values copy(begin(), end(), ostream_iterator(out, " ")); if (d_epsilon) out << " "; out << "}"; return out; } bisonc++-4.13.01/firstset/firstset.ih0000644000175000017500000000017612633316117016306 0ustar frankfrank#include "firstset.h" #include #include #include #include using namespace std; 
bisonc++-4.13.01/firstset/frame0000644000175000017500000000005712633316117015134 0ustar frankfrank#include "firstset.ih" FirstSet::() const { } bisonc++-4.13.01/firstset/firstset.h0000644000175000017500000000364312633316117016137 0ustar frankfrank#ifndef _INCLUDED_FIRSTSET_ #define _INCLUDED_FIRSTSET_ #include #include #include "../element/element.h" class FirstSet: private std::set { typedef std::set Inherit; friend std::ostream &operator<<(std::ostream &out, FirstSet const &fset); bool d_epsilon; // true if epsilon (the empty set indicator) // is in {First} protected: typedef std::set Baseclass; FirstSet(Element const **begin, Element const **end); public: using Inherit::find; using Inherit::begin; using Inherit::end; using Inherit::size; FirstSet(); FirstSet(Element const *terminal); FirstSet &operator+=(FirstSet const &other); FirstSet &operator+=(std::set const &terminalSet); bool empty() const; bool hasEpsilon() const; bool operator==(FirstSet const &other) const; size_t setSize() const; void addEpsilon(); void rmEpsilon(); std::set const &set() const; private: std::ostream &insert(std::ostream &out) const; }; inline std::set const &FirstSet::set() const { return *this; } inline FirstSet::FirstSet(Element const **begin, Element const **end) : Baseclass(begin, end), d_epsilon(false) {} inline FirstSet::FirstSet() : d_epsilon(false) {} inline size_t FirstSet::setSize() const { return size() + d_epsilon; } inline void FirstSet::rmEpsilon() { d_epsilon = false; } inline void FirstSet::addEpsilon() { d_epsilon = true; } inline bool FirstSet::hasEpsilon() const { return d_epsilon; } inline bool FirstSet::empty() const { return !d_epsilon && Baseclass::empty(); } inline std::ostream &operator<<(std::ostream &out, FirstSet const &firstSet) { return firstSet.insert(out); } #endif bisonc++-4.13.01/generator/0000755000175000017500000000000012633316117014240 5ustar frankfrankbisonc++-4.13.01/generator/ifinsertstype.cc0000644000175000017500000000017512633316117017462 0ustar frankfrank#include "generator.ih" void Generator::ifInsertStype(bool &accept) const { accept = d_arg.option(0, "insert-stype"); } bisonc++-4.13.01/generator/ltypestack.cc0000644000175000017500000000032212633316117016727 0ustar frankfrank#include "generator.ih" void Generator::ltypeStack(ostream &out) const { if (!d_options.lspNeeded()) return; key(out); out << " std::vector d_locationStack__;\n"; } bisonc++-4.13.01/generator/defaultactionreturn.cc0000644000175000017500000000053612633316117020635 0ustar frankfrank#include "generator.ih" void Generator::defaultActionReturn(ostream &out) const { if (d_arg.option('N')) return; key(out); out << " // save default non-nested block $$\n" " if (int size = s_productionInfo[production].d_size)\n" " d_val__ = d_vsp__[1 - size];\n"; } bisonc++-4.13.01/generator/atend.cc0000644000175000017500000000013212633316117015636 0ustar frankfrank#include "generator.ih" void Generator::atEnd(bool &accept) const { accept = true; } bisonc++-4.13.01/generator/implementationheader.cc0000644000175000017500000000143612633316117020751 0ustar frankfrank#include "generator.ih" // All members of the parser class should include the implementation header as // the only #included file. 
The implementation header should perform all // required preprocessor actions for the compilation of the Parser's class // members // The implementation header header is not rewritten by // bisonc++ once it's available void Generator::implementationHeader() const { string const &implementationHeader = d_options.implementationHeader(); if (d_stat.set(implementationHeader)) // do not overwrite an existing return; // implementation header ofstream out; ifstream in; Exception::open(in, d_options.implementationSkeleton()); Exception::open(out, implementationHeader); filter(in, out); } bisonc++-4.13.01/generator/threading.cc0000644000175000017500000000020612633316117016512 0ustar frankfrank#include "generator.ih" void Generator::threading(ostream &out) const { key(out); insert(out, 4, "threading.in"); } bisonc++-4.13.01/generator/baseclassheader.cc0000644000175000017500000000173512633316117017666 0ustar frankfrank#include "generator.ih" // The base class header contains some of the member functions hat are // inherited by the parser class itself. It also defines the tokens to be // returned by the lexical scanner. // Writing a base class header may be prevented by the --no-baseclass-header // option. Otherwise, it's rewritten each time bisonc++ is called to process a // grammar. Providing the --no-baseclass-header option should not be // necessary, as all additional functionality should be defined in the // parser's class header. void Generator::baseClassHeader() const { if ( d_arg.option(0, "no-baseclass-header") || ( d_arg.option(0, "dont-rewrite-baseclass-header") && d_stat.set(d_options.baseClassHeader()) ) ) return; ofstream out; ifstream in; Exception::open(in, d_options.baseClassSkeleton()); Exception::open(out, d_options.baseClassHeader()); filter(in, out); } bisonc++-4.13.01/generator/debuginit.cc0000644000175000017500000000022712633316117016522 0ustar frankfrank#include "generator.ih" void Generator::debugInit(ostream &out) const { key(out); out << "d_debug__(" << boolalpha << d_debug << "),\n"; } bisonc++-4.13.01/generator/TODO0000644000175000017500000000006012633316117014724 0ustar frankfrank@else @ Replace unordered_map by linear_map bisonc++-4.13.01/generator/bolat.cc0000644000175000017500000000060112633316117015645 0ustar frankfrank#include "generator.ih" void Generator::bolAt(ostream &out, string &line, istream &in, bool &accept) const { auto iter = find(line, 0, s_atBol); if (iter != s_atBol.end()) (this->*iter->function)(accept); else wmsg << "Ignoring unsupported `" << line << "' in .in file" << endl; } bisonc++-4.13.01/generator/ltypepop.cc0000644000175000017500000000030712633316117016423 0ustar frankfrank#include "generator.ih" void Generator::ltypePop(ostream &out) const { if (!d_options.lspNeeded()) return; key(out); out << "d_lsp__ = &d_locationStack__[d_stackIdx__];\n"; } bisonc++-4.13.01/generator/debug.cc0000644000175000017500000000064312633316117015640 0ustar frankfrank#include "generator.ih" void Generator::debug(ostream &out) const { if (!d_debug) return; key(out); out << "if (d_debug__)\n" << setw(d_indent + 4) << "" << flush; if (*d_line.rbegin() != '+') out << "s_out__ << " << d_line << " << \"\\n\" << dflush__;\n"; else { d_line.resize(d_line.length() - 1); out << "s_out__ << " << d_line << ";\n"; } } bisonc++-4.13.01/generator/scannerobject.cc0000644000175000017500000000032712633316117017371 0ustar frankfrank#include "generator.ih" void Generator::scannerObject(ostream &out) const { if (d_options.scannerInclude().empty()) return; key(out); out << 
d_options.scannerClassName() << " d_scanner;\n"; } bisonc++-4.13.01/generator/insert.cc0000644000175000017500000000110012633316117016043 0ustar frankfrank#include "generator.ih" void Generator::insert(ostream &out) const { istringstream istr(d_line); istr >> d_key >> d_indent; if (!istr) { d_indent = 0; istr.clear(); } istr >> d_key; // extract the insertion target getline(istr, d_line); // and store the remainder of the line MapConstIter iter = s_insert.find(d_key); if (iter != s_insert.end()) (this->*iter->second)(out); else wmsg << "Ignoring unsupported `$insert " << d_key << " ...' in skeleton file" << endl; } bisonc++-4.13.01/generator/polymorphic.cc0000644000175000017500000000101512633316117017111 0ustar frankfrank#include "generator.ih" void Generator::polymorphic(ostream &out) const { if (not d_options.polymorphic()) return; key(out); out << // Tags "enum " << (d_options.strongTags() ? "class " : "") << "Tag__\n" "{\n"; for (auto &poly: d_polymorphic) out << " " << poly.first << ",\n"; out << "};\n" "\n"; ifstream in; Exception::open(in, d_options.polymorphicSkeleton()); filter(in, out, false); } bisonc++-4.13.01/generator/polymorphicspecializations.cc0000644000175000017500000000115412633316117022237 0ustar frankfrank#include "generator.ih" void Generator::polymorphicSpecializations(ostream &out) const { key(out); for (auto &poly: d_polymorphic) out << " template <>\n" " struct TagOf<" << poly.second << ">\n" " {\n" " static Tag__ const tag = Tag__::" << poly.first << ";\n" " };\n" "\n"; for (auto &poly: d_polymorphic) out << " template <>\n" " struct TypeOf\n" " {\n" " typedef " << poly.second << " type;\n" " };\n" "\n"; } bisonc++-4.13.01/generator/atnamespacedclassname.cc0000644000175000017500000000017312633316117021064 0ustar frankfrank#include "generator.ih" std::string const &Generator::atNameSpacedClassname() const { return d_nameSpacedClassname; } bisonc++-4.13.01/generator/tokens.cc0000644000175000017500000000070012633316117016047 0ustar frankfrank#include "generator.ih" void Generator::tokens(ostream &out) const { Terminal::ConstVector tokens; for (auto terminal: d_rules.terminals()) selectSymbolic(terminal, tokens); key(out); if (!tokens.size()) { out << "// No symbolic tokens were defined\n"; return; } sort(tokens.begin(), tokens.end(), Terminal::compareValues); d_writer.useStream(out); d_writer.insert(tokens); } bisonc++-4.13.01/generator/generator.h0000644000175000017500000001304112633316117016376 0ustar frankfrank#ifndef _INCLUDED_GENERATOR_ #define _INCLUDED_GENERATOR_ #include #include #include #include #include #include #include "../writer/writer.h" class Terminal; class Rules; class Options; class Generator { typedef void (Generator::*Inserter)(std::ostream &) const; typedef std::unordered_map Map; typedef Map::value_type MapValue; typedef Map::const_iterator MapConstIter; FBB::Arg &d_arg; mutable FBB::Stat d_stat; Rules const &d_rules; Options &d_options; std::string d_baseClassScope; std::string const &d_nameSpace; std::string const &d_matchedTextFunction; std::string const &d_tokenFunction; std::string d_nameSpacedClassname; mutable std::string d_key; // extracted at $insert statements mutable size_t d_indent; mutable std::string d_line; bool d_debug; bool d_printTokens; std::unordered_map const &d_polymorphic; mutable Writer d_writer; // maintains its own const-ness static Map s_insert; static char const *s_atFlag; // \@ flag in skeletons struct At; typedef std::vector AtVector; struct AtBool; typedef std::vector AtBoolVector; static AtBoolVector s_atBol; // 
no typo: Bol = Begin of line static AtVector s_at; public: Generator(Rules const &rules, std::unordered_map const &polymorphic); bool conflicts() const; void baseClassHeader() const; void classHeader() const; void implementationHeader() const; void parseFunction() const; private: static std::string filename(std::string const &path); void filter(std::istream &in, std::ostream &out, bool header = true) const; void insert(std::ostream &out) const; void key(std::ostream &out) const; // show which $insert is // called, just before the // generated code bool errExisting(std::string const &fileName, std::string const &option, std::string const ®ex) const; bool grep(std::string const &fileName, std::string const ®ex) const; void actionCases(std::ostream &out) const; void baseClass(std::ostream &out) const; void classH(std::ostream &out) const; void classIH(std::ostream &out) const; void debug(std::ostream &out) const; void debugIncludes(std::ostream &out) const; void debugFunctions(std::ostream &out) const; void debugInit(std::ostream &out) const; void debugDecl(std::ostream &out) const; void debugLookup(std::ostream &out) const; void defaultActionReturn(std::ostream &out) const; void errorVerbose(std::ostream &out) const; void lex(std::ostream &out) const; void ltype(std::ostream &out) const; void ltypeData(std::ostream &out) const; void ltypePop(std::ostream &out) const; void ltypePush(std::ostream &out) const; void ltypeResize(std::ostream &out) const; void ltypeStack(std::ostream &out) const; void namespaceClose(std::ostream &out) const; void namespaceOpen(std::ostream &out) const; void namespaceUse(std::ostream &out) const; void polymorphic(std::ostream &out) const; void polymorphicInline(std::ostream &out) const; void polymorphicSpecializations(std::ostream &out) const; void preIncludes(std::ostream &out) const; void print(std::ostream &out) const; void requiredTokens(std::ostream &out) const; void scannerH(std::ostream &out) const; void scannerObject(std::ostream &out) const; void staticData(std::ostream &out) const; void stype(std::ostream &out) const; void threading(std::ostream &out) const; void tokens(std::ostream &out) const; void ifInsertStype(bool &accept) const; void ifPrintTokens(bool &accept) const; void ifLtype(bool &accept) const; void ifThreadSafe(bool &accept) const; void atElse(bool &accept) const; void atEnd(bool &accept) const; std::string const &atTokenFunction() const; std::string const &atMatchedTextFunction() const; std::string const &atLtype() const; std::string const &atNameSpacedClassname() const; std::string const &atClassname() const; void replaceBaseFlag(std::string &line) const; void replaceAtKey(std::string &line, size_t pos) const; static void selectSymbolic(Terminal const *terminal, Terminal::ConstVector &symbolicTokens); static void replace(std::string &str, char ch, std::string const &replacement); void insert(std::ostream &out, size_t indent, char const *skel) const; void bolAt(std::ostream &out, std::string &line, std::istream &in, bool &accept) const; template typename std::vector::const_iterator find( std::string const &line, size_t pos, std::vector const &atVector ) const; }; #endif bisonc++-4.13.01/generator/namespaceopen.cc0000644000175000017500000000033312633316117017364 0ustar frankfrank#include "generator.ih" void Generator::namespaceOpen(std::ostream &out) const { if (d_nameSpace.size()) { key(out); out << "namespace " << d_nameSpace << "\n" "{\n"; } } bisonc++-4.13.01/generator/ltyperesize.cc0000644000175000017500000000030112633316117017120 
0ustar frankfrank#include "generator.ih" void Generator::ltypeResize(ostream &out) const { if (!d_options.lspNeeded()) return; key(out); out << "d_locationStack__.resize(newSize);\n"; } bisonc++-4.13.01/generator/classih.cc0000644000175000017500000000027212633316117016176 0ustar frankfrank#include "generator.ih" void Generator::classIH(ostream &out) const { key(out); out << "#include \"" << filename(d_options.implementationHeader()) << "\"\n"; } bisonc++-4.13.01/generator/namespaceuse.cc0000644000175000017500000000064112633316117017221 0ustar frankfrank#include "generator.ih" void Generator::namespaceUse(std::ostream &out) const { if (d_nameSpace.empty()) return; key(out); out << " // UN-comment the next using-declaration if you want to use\n" " // symbols from the namespace " << d_nameSpace << " without specifying " << d_nameSpace << "::\n" "//using namespace " << d_nameSpace << ";\n"; } bisonc++-4.13.01/generator/filename.cc0000644000175000017500000000032312633316117016325 0ustar frankfrank#include "generator.ih" string Generator::filename(std::string const &path) { size_t idx = path.rfind('/'); string ret(idx == string::npos ? path : path.substr(idx + 1)); return ret; } bisonc++-4.13.01/generator/debugdecl.cc0000644000175000017500000000027012633316117016464 0ustar frankfrank#include "generator.ih" void Generator::debugDecl(std::ostream &out) const { if (!d_debug && !d_printTokens) return; key(out); insert(out, 8, "debugdecl.in"); } bisonc++-4.13.01/generator/ltypedata.cc0000644000175000017500000000026012633316117016534 0ustar frankfrank#include "generator.ih" void Generator::ltypeData(ostream &out) const { if (!d_options.lspNeeded()) return; key(out); insert(out, 8, "ltypedata.in"); } bisonc++-4.13.01/generator/ifthreadsafe.cc0000644000175000017500000000017312633316117017175 0ustar frankfrank#include "generator.ih" void Generator::ifThreadSafe(bool &accept) const { accept = d_arg.option(0, "thread-safe"); } bisonc++-4.13.01/generator/replace.cc0000644000175000017500000000047512633316117016170 0ustar frankfrank#include "generator.ih" void Generator::replace(string &str, char ch, string const &replacement) { size_t pos = 0; while (true) { pos = str.find(pos, ch); if (pos == string::npos) return; str.replace(pos, 1, replacement); pos += replacement.length(); } } bisonc++-4.13.01/generator/errorverbose.cc0000644000175000017500000000027212633316117017267 0ustar frankfrank#include "generator.ih" void Generator::errorVerbose(ostream &out) const { if (!d_arg.option(0, "error-verbose")) return; key(out); out << "errorVerbose__();\n"; } bisonc++-4.13.01/generator/parsefunction.cc0000644000175000017500000000140612633316117017430 0ustar frankfrank#include "generator.ih" // The parser's parse() member defines the S/R parsing algorithm as well as // the parsing tables. // Writing the parse() member may be prevented by the --no-parse-member // option. Otherwise, it's rewritten each time bisonc++ is called to process a // grammar. Providing the --no-parse-member option should not be // necessary, as it contains the essential information about the // grammar. Calling bisonc++ with the --no-parse-member basically renders the // call useless. 
void Generator::parseFunction() const { if (d_arg.option(0, "no-parse-member")) return; ofstream out; ifstream in; Exception::open(in, d_options.parseSkeleton()); Exception::open(out, d_options.parseSource()); filter(in, out); } bisonc++-4.13.01/generator/selectsymbolic.cc0000644000175000017500000000064712633316117017577 0ustar frankfrank#include "generator.ih" void Generator::selectSymbolic(Terminal const *terminal, Terminal::ConstVector &symbolicTokens) { if ( ( terminal->isSymbolic() || (terminal->isUsed() && terminal->isUndetermined()) ) && terminal->value() > Rules::eofTerminal()->value() ) symbolicTokens.push_back(terminal); } bisonc++-4.13.01/generator/debuglookup.cc0000644000175000017500000000025212633316117017066 0ustar frankfrank#include "generator.ih" void Generator::debugLookup(std::ostream &out) const { if (!d_debug) return; key(out); insert(out, 4, "debuglookup.in"); } bisonc++-4.13.01/generator/ltype.cc0000644000175000017500000000025312633316117015704 0ustar frankfrank#include "generator.ih" void Generator::ltype(ostream &out) const { if (!d_options.lspNeeded()) return; key(out); insert(out, 4, "ltype.in"); } bisonc++-4.13.01/generator/classh.cc0000644000175000017500000000023712633316117016026 0ustar frankfrank#include "generator.ih" void Generator::classH(ostream &out) const { key(out); out << "#include \"" << filename(d_options.classHeader()) << "\"\n"; } bisonc++-4.13.01/generator/data.cc0000644000175000017500000000557412633316117015473 0ustar frankfrank#include "generator.ih" Generator::Map Generator::s_insert = { {"actioncases", &Generator::actionCases}, {"lex", &Generator::lex}, {"scanner.h", &Generator::scannerH}, {"scannerobject", &Generator::scannerObject}, {"baseclass", &Generator::baseClass}, {"class.h", &Generator::classH}, {"class.ih", &Generator::classIH}, {"debug", &Generator::debug}, {"debugdecl", &Generator::debugDecl}, {"debuginit", &Generator::debugInit}, {"debugfunctions", &Generator::debugFunctions}, {"debugincludes", &Generator::debugIncludes}, {"debuglookup", &Generator::debugLookup}, {"defaultactionreturn", &Generator::defaultActionReturn}, {"errorverbose", &Generator::errorVerbose}, {"namespace-open", &Generator::namespaceOpen}, {"namespace-close", &Generator::namespaceClose}, {"namespace-use", &Generator::namespaceUse}, {"polymorphic", &Generator::polymorphic}, {"polymorphicInline", &Generator::polymorphicInline}, {"polymorphicSpecializations", &Generator::polymorphicSpecializations}, {"preincludes", &Generator::preIncludes}, {"print", &Generator::print}, {"requiredtokens", &Generator::requiredTokens}, {"staticdata", &Generator::staticData}, {"threading", &Generator::threading}, {"tokens", &Generator::tokens}, {"LTYPE", &Generator::ltype}, {"LTYPEdata", &Generator::ltypeData}, {"LTYPEpop", &Generator::ltypePop}, {"LTYPEpush", &Generator::ltypePush}, {"LTYPEresize", &Generator::ltypeResize}, {"LTYPEstack", &Generator::ltypeStack}, {"STYPE", &Generator::stype}, }; vector Generator::s_atBol = { AtBool("@insert-stype", &Generator::ifInsertStype), AtBool("@printtokens", &Generator::ifPrintTokens), AtBool("@ltype", &Generator::ifLtype), AtBool("@thread-safe", &Generator::ifThreadSafe), AtBool("@else", &Generator::atElse), AtBool("@end", &Generator::atEnd), }; char const *Generator::s_atFlag = "\\@"; vector Generator::s_at = { At("\\@tokenfunction", &Generator::atTokenFunction), At("\\@matchedtextfunction", &Generator::atMatchedTextFunction), At("\\@ltype", &Generator::atLtype), At("\\@$", &Generator::atNameSpacedClassname), At("\\@", 
&Generator::atClassname), }; bisonc++-4.13.01/generator/atclassname.cc0000644000175000017500000000016112633316117017040 0ustar frankfrank#include "generator.ih" std::string const &Generator::atClassname() const { return d_options.className(); } bisonc++-4.13.01/generator/insert2.cc0000644000175000017500000000103712633316117016136 0ustar frankfrank#include "generator.ih" void Generator::insert(ostream &out, size_t indent, char const *skel) const { ifstream in; Exception::open(in, d_options.skeletonDirectory() + skel); Indent::setWidth(indent); string line; bool accept = true; while (getline(in, line)) { if (line.find('@') == 0) // @ immediately at BOL bolAt(out, line, in, accept); else if (accept) { replaceBaseFlag(line); out << FBB::indent << line << '\n'; } } } bisonc++-4.13.01/generator/polymorphicinline.cc0000644000175000017500000000042312633316117020312 0ustar frankfrank#include "generator.ih" void Generator::polymorphicInline(ostream &out) const { if (not d_options.polymorphic()) return; key(out); ifstream in; Exception::open(in, d_options.polymorphicInlineSkeleton()); filter(in, out, false); } bisonc++-4.13.01/generator/conflicts.cc0000644000175000017500000000522112633316117016533 0ustar frankfrank#include "generator.ih" bool Generator::conflicts() const { bool ret = false; emsg.noLineNr(); emsg.setLineTag(""); string const &classHeader = d_options.classHeader(); if (d_stat.set(classHeader)) { // class-name must match ret = errExisting(classHeader, "class-name", "^class " + d_options.className() + "\\b") or ret; ret = errExisting(classHeader, "baseclass-header", "^#include \"" + d_options.baseclassHeaderName() + '"') or ret; // if a namespace was provided: it must be present if (not d_options.nameSpace().empty()) ret = errExisting(classHeader, "namespace", "^namespace " + d_options.nameSpace() + "\\b") or ret; if (d_options.specified("scanner")) { // the 'scanner' include spec. 
must be present ret = errExisting(classHeader, "scanner", "^#include " + d_options.scannerInclude()) or ret; // the 'scanner-class-name must be present if (d_options.specified("scanner-class-name")) ret = errExisting(classHeader, "scanner-class-name", "^[[:space:]]*" + d_options.scannerClassName() + " d_scanner;") or ret; } else if (d_options.specified("scanner-class-name")) wmsg << '`' << classHeader << "': option/directive `scanner-class-name' ignored: " " option `scanner' not specified" << endl; } string const &implementationHeader = d_options.implementationHeader(); if (d_stat.set(implementationHeader)) { ret = errExisting(implementationHeader, "class-header", "^#include \"" + d_options.classHeader() + '"') or ret; ret = errExisting(implementationHeader, "class-name", "\\b" + d_options.className() + "::") or ret; if (not d_options.nameSpace().empty()) ret = errExisting(implementationHeader, "namespace", "^namespace " + d_options.nameSpace() + "\\b") or ret; string pattern = "\\b" + d_options.scannerTokenFunction() + "\\b"; replace(pattern, '(', "\\("); replace(pattern, ')', "\\)"); ret = errExisting(implementationHeader, "scanner-token-function", pattern) or ret; } return ret; } bisonc++-4.13.01/generator/grep.cc0000644000175000017500000000044512633316117015507 0ustar frankfrank#include "generator.ih" bool Generator::grep(string const &fileName, string const ®ex) const { ifstream in(fileName); Pattern pattern(regex); string line; while (getline(in, line)) { if (pattern << line) return true; } return false; } bisonc++-4.13.01/generator/generator.ih0000644000175000017500000000275212633316117016556 0ustar frankfrank#include "generator.h" #include #include #include #include #include #include #include #include #include #include "../options/options.h" #include "../rules/rules.h" #include "../terminal/terminal.h" #include "../production/production.h" namespace Global { void plainWarnings(); } extern char version[]; using namespace std; using namespace FBB; struct Generator::At { char const *key; size_t size; std::string const &(Generator::*function)() const; At(char const *keyArg = "", std::string const &(Generator::*fun)() const = 0) : key(keyArg), size(strlen(keyArg)), function(fun) {} }; struct Generator::AtBool { char const *key; size_t size; void (Generator::*function)(bool &accept) const; AtBool(char const *keyArg = "", void (Generator::*fun)(bool &) const = 0) : key(keyArg), size(strlen(keyArg)), function(fun) {} }; template typename vector::const_iterator Generator::find( string const &line, size_t pos, vector const &atVector ) const { for ( auto iter = atVector.begin(), end = atVector.end(); iter != end; ++iter ) { if (line.find(iter->key, pos) == pos) return iter; } return atVector.end(); } bisonc++-4.13.01/generator/debugfunctions.cc0000644000175000017500000000066412633316117017574 0ustar frankfrank#include "generator.ih" void Generator::debugFunctions(std::ostream &out) const { bool verbose = d_arg.option(0, "error-verbose"); if (not d_debug && not verbose && not d_printTokens) return; key(out); if (d_debug) insert(out, 0, "debugfunctions1.in"); if (d_debug || d_printTokens) insert(out, 0, "debugfunctions2.in"); if (verbose) insert(out, 0, "debugfunctions3.in"); } bisonc++-4.13.01/generator/atltype.cc0000644000175000017500000000015112633316117016226 0ustar frankfrank#include "generator.ih" std::string const &Generator::atLtype() const { return d_options.ltype(); } bisonc++-4.13.01/generator/actioncases.cc0000644000175000017500000000110112633316117017034 0ustar frankfrank#include 
"generator.ih" void Generator::actionCases(ostream &out) const { key(out); out << '\n'; if (d_arg.option('D')) // no decoration of the parse tree { out << setw(d_indent) << "" << "// inserting actioncases suppressed by option " "--no-decoration\n"; return; } vector const &productions = d_rules.productions(); for (auto prod: productions) Production::insertAction(prod, out, d_options.lines(), d_indent); } bisonc++-4.13.01/generator/stype.cc0000644000175000017500000000032612633316117015714 0ustar frankfrank#include "generator.ih" void Generator::stype(ostream &out) const { key(out); if (!d_options.stype().empty()) out << d_options.stype() << '\n'; else out << "typedef int STYPE__;\n"; } bisonc++-4.13.01/generator/requiredtokens.cc0000644000175000017500000000025312633316117017613 0ustar frankfrank#include "generator.ih" void Generator::requiredTokens(ostream &out) const { key(out); out << "d_requiredTokens__(" << d_options.requiredTokens() << "),\n"; } bisonc++-4.13.01/generator/frame0000644000175000017500000000006112633316117015252 0ustar frankfrank#include "generator.ih" Generator::() const { } bisonc++-4.13.01/generator/preincludes.cc0000644000175000017500000000063412633316117017067 0ustar frankfrank#include "generator.ih" void Generator::preIncludes(std::ostream &out) const { bool preInclude = not d_options.preInclude().empty(); bool polymorphic = d_options.polymorphic(); if (not preInclude && not polymorphic) return; key(out); if (polymorphic) out << "#include \n"; if (preInclude) out << "#include " << d_options.preInclude() << '\n'; } bisonc++-4.13.01/generator/generator1.cc0000644000175000017500000000125612633316117016622 0ustar frankfrank#include "generator.ih" Generator::Generator(Rules const &rules, unordered_map const &polymorphic) : d_arg(Arg::instance()), d_rules(rules), d_options(Options::instance()), d_baseClassScope(d_options.className() + "Base::"), d_nameSpace(d_options.nameSpace()), d_matchedTextFunction(d_options.scannerMatchedTextFunction()), d_tokenFunction(d_options.scannerTokenFunction()), d_nameSpacedClassname(d_nameSpace + d_options.className()), d_debug(d_options.debug()), d_printTokens(d_options.printTokens()), d_polymorphic(polymorphic), d_writer(d_baseClassScope, rules) { Global::plainWarnings(); } bisonc++-4.13.01/generator/replaceatkey.cc0000644000175000017500000000043612633316117017223 0ustar frankfrank#include "generator.ih" void Generator::replaceAtKey(string &line, size_t pos) const { for (auto &atKey: s_at) { if (line.find(atKey.key, pos) == pos) { line.replace(pos, atKey.size, (this->*atKey.function)()); return; } } } bisonc++-4.13.01/generator/ifltype.cc0000644000175000017500000000016512633316117016225 0ustar frankfrank#include "generator.ih" void Generator::ifLtype(bool &accept) const { accept = not d_options.ltype().empty(); } bisonc++-4.13.01/generator/attokenfunction.cc0000644000175000017500000000015712633316117017765 0ustar frankfrank#include "generator.ih" std::string const &Generator::atTokenFunction() const { return d_tokenFunction; } bisonc++-4.13.01/generator/classheader.cc0000644000175000017500000000112212633316117017021 0ustar frankfrank#include "generator.ih" // New members and other facilites may be added to the parser's class header // after its initial generation. 
// A class header is not rewritten by bisonc++ once it's available void Generator::classHeader() const { string const &classHeader = d_options.classHeader(); if (d_stat.set(classHeader)) // do not overwrite an existing return; // class header file ofstream out; ifstream in; Exception::open(in, d_options.classSkeleton()); Exception::open(out, classHeader); filter(in, out); } bisonc++-4.13.01/generator/atmatchedtextfunction.cc0000644000175000017500000000017312633316117021155 0ustar frankfrank#include "generator.ih" std::string const &Generator::atMatchedTextFunction() const { return d_matchedTextFunction; } bisonc++-4.13.01/generator/scannerh.cc0000644000175000017500000000032412633316117016347 0ustar frankfrank#include "generator.ih" void Generator::scannerH(ostream &out) const { if (d_options.scannerInclude().empty()) return; key(out); out << "#include " << d_options.scannerInclude() << '\n'; } bisonc++-4.13.01/generator/errexisting.cc0000644000175000017500000000065712633316117017122 0ustar frankfrank#include "generator.ih" bool Generator::errExisting(string const &fileName, string const &option, string const &pattern) const { if (not d_options.specified(option)) return false; if (not grep(fileName, pattern)) { emsg << '`' << fileName << "' exists, option/directive `" << option << "' ignored" << endl; return true; } return false; } bisonc++-4.13.01/generator/ifprinttokens.cc0000644000175000017500000000017412633316117017450 0ustar frankfrank#include "generator.ih" void Generator::ifPrintTokens(bool &accept) const { accept = d_arg.option(0, "printtokens"); } bisonc++-4.13.01/generator/lex.cc0000644000175000017500000000027512633316117015343 0ustar frankfrank#include "generator.ih" void Generator::lex(ostream &out) const { key(out); if (d_printTokens || not d_options.implementationHeader().empty()) insert(out, 0, "lex.in"); } bisonc++-4.13.01/generator/atelse.cc0000644000175000017500000000014112633316117016020 0ustar frankfrank#include "generator.ih" void Generator::atElse(bool &accept) const { accept = not accept; } bisonc++-4.13.01/generator/replacebaseflag.cc0000644000175000017500000000123512633316117017650 0ustar frankfrank#include "generator.ih" void Generator::replaceBaseFlag(string &line) const { // string const &className = d_options.className(); size_t pos = line.length(); while ((pos = line.rfind(s_atFlag, pos)) != string::npos) // found \@ { auto iter = find(line, pos, s_at); if (iter != s_at.end()) line.replace(pos, iter->size, (this->*iter->function)()); } // if (line.find(s_namespaceBaseFlag) == pos) // line.replace(pos, s_namespaceBaseFlagSize, // d_options.nameSpace() + className); // else // line.replace(pos, s_baseFlagSize, className); //FBB // } } bisonc++-4.13.01/generator/baseclass.cc0000644000175000017500000000024612633316117016511 0ustar frankfrank#include "generator.ih" void Generator::baseClass(ostream &out) const { key(out); out << "#include \"" << filename(d_options.baseClassHeader()) << "\"\n"; } bisonc++-4.13.01/generator/ltypepush.cc0000644000175000017500000000032512633316117016604 0ustar frankfrank#include "generator.ih" void Generator::ltypePush(ostream &out) const { if (!d_options.lspNeeded()) return; key(out); out << "*(d_lsp__ = &d_locationStack__[d_stackIdx__]) = d_loc__;\n"; } bisonc++-4.13.01/generator/filter.cc0000644000175000017500000000107612633316117016040 0ustar frankfrank#include "generator.ih" namespace { DateTime dtime(DateTime::LOCALTIME); } void Generator::filter(istream &in, ostream &out, bool header) const { 
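    // filter() copies the skeleton stream `in' to `out': when `header' is
    // true a generation stamp is written first; every skeleton line that
    // starts with "$insert" is expanded by insert(), and all other lines
    // are copied after \@-key substitution by replaceBaseFlag()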
Terminal::inserter(&Terminal::plainName); if (header) out << "// Generated by Bisonc++ V" << ::version << " on " << dtime.rfc2822() << '\n' << '\n'; while (getline(in, d_line)) { if (d_line.find("$insert") == 0) { insert(out); continue; } replaceBaseFlag(d_line); out << d_line << '\n'; } } bisonc++-4.13.01/generator/staticdata.cc0000644000175000017500000000051712633316117016673 0ustar frankfrank#include "generator.ih" void Generator::staticData(ostream &out) const { d_writer.useStream(out); key(out); d_writer.productions(); d_writer.srTables(); d_writer.statesArray(); if (d_debug || d_printTokens) d_writer.symbolicNames(); out << "} // anonymous namespace ends\n" "\n"; } bisonc++-4.13.01/generator/namespaceclose.cc0000644000175000017500000000024412633316117017531 0ustar frankfrank#include "generator.ih" void Generator::namespaceClose(std::ostream &out) const { if (d_nameSpace.empty()) return; key(out); out << "}\n"; } bisonc++-4.13.01/generator/debugincludes.cc0000644000175000017500000000037512633316117017371 0ustar frankfrank#include "generator.ih" void Generator::debugIncludes(ostream &out) const { bool verbose = d_arg.option(0, "error-verbose"); if (!d_debug && !verbose && !d_printTokens) return; key(out); insert(out, 0, "debugincludes.in"); } bisonc++-4.13.01/generator/key.cc0000644000175000017500000000026512633316117015342 0ustar frankfrank#include "generator.ih" void Generator::key(ostream &out) const { out << setw(d_indent) << "" << "// $insert " << d_key << '\n' << setw(d_indent) << "" << flush; } bisonc++-4.13.01/generator/print.cc0000644000175000017500000000034012633316117015700 0ustar frankfrank#include "generator.ih" void Generator::print(ostream &out) const { key(out); // _UNDETERMINED_ is also used in writer/symbolicnames.cc if (d_printTokens) insert(out, 4, "print.in"); } bisonc++-4.13.01/grammar/0000755000175000017500000000000012633316117013700 5ustar frankfrankbisonc++-4.13.01/grammar/grammar.h0000644000175000017500000000120612633316117015476 0ustar frankfrank#ifndef _INCLUDED_GRAMMAR_ #define _INCLUDED_GRAMMAR_ #include #include "../symbol/symbol.h" #include "../production/production.h" class Grammar { typedef std::set SymbolSet; SymbolSet d_derivable; // used when testing whether the start // symbol derives any sentences SymbolSet d_inspecting; // (same, see derivable.cc) public: void deriveSentence(); private: bool derivable(Symbol const *symbol); bool isDerivable(Production const *prod); bool becomesDerivable(Production const *prod); }; #endif bisonc++-4.13.01/grammar/isderivable.cc0000644000175000017500000000122312633316117016476 0ustar frankfrank#include "grammar.ih" bool Grammar::isDerivable(Production const *prod) { return find_if( prod->begin(), prod->end(), [&](Symbol const *symbol) { // not removable if: return // currently testing d_inspecting.find(symbol) != d_inspecting.end() || // or a non-removable non-terminal ( symbol->isNonTerminal() && d_derivable.find(symbol) == d_derivable.end()); } ) == prod->end(); } bisonc++-4.13.01/grammar/derivable.cc0000644000175000017500000000240012633316117016140 0ustar frankfrank#include "grammar.ih" bool Grammar::derivable(Symbol const *symbol) { if (d_inspecting.find(symbol) != d_inspecting.end()) return false; if ( symbol->isTerminal() || d_derivable.find(symbol) != d_derivable.end() // symbol is derivable ) return true; d_inspecting.insert(symbol); Production::Vector const &productions = // get all productions NonTerminal::downcast(symbol)->productions(); // of `symbol' // if there is already a derivable production // 
then `symbol' is derivable. bool ret = find_if( productions.begin(), productions.end(), [this](Production const *prod) { return isDerivable(prod); } ) != productions.end() || find_if( productions.begin(), productions.end(), [&](Production const *prod) { return becomesDerivable(prod); } ) != productions.end(); if (ret) d_derivable.insert(symbol); d_inspecting.erase(symbol); return ret; } bisonc++-4.13.01/grammar/derivesentence.cc0000644000175000017500000000060712633316117017215 0ustar frankfrank#include "grammar.ih" // Sentences are derived from the states following the Shift-Reduce algorithm // trying all alternative routes until the final state is somehow reached. void Grammar::deriveSentence() { if (!derivable(Rules::startSymbol())) fmsg << "Grammar's start symbol `" << Rules::startSymbol() << "' does not derive any sentence" << endl; } bisonc++-4.13.01/grammar/grammar.ih0000644000175000017500000000021612633316117015647 0ustar frankfrank#include "grammar.h" #include #include #include "../rules/rules.h" using namespace std; using namespace FBB; bisonc++-4.13.01/grammar/becomesderivable.cc0000644000175000017500000000071212633316117017502 0ustar frankfrank#include "grammar.ih" bool Grammar::becomesDerivable(Production const *prod) { // get each nonterminal, underivable symbol // try to change it into a derivable symbol // return the success of this operation return find_if( prod->begin(), prod->end(), [&](Symbol const *symbol) { return not derivable(symbol); } ) == prod->end(); } bisonc++-4.13.01/grammar/frame0000644000175000017500000000005512633316117014715 0ustar frankfrank#include "grammar.ih" Grammar::() const { } bisonc++-4.13.01/grammars/0000755000175000017500000000000012633316117014063 5ustar frankfrankbisonc++-4.13.01/grammars/conflict0000644000175000017500000000007712633316117015613 0ustar frankfrank%token i %% E: i | i | E '+' E | E '*' E ; bisonc++-4.13.01/grammars/expressions0000644000175000017500000000110412633316117016364 0ustar frankfrank%token NR IDENT %left '+' '-' %left '*' '/' %right UMIN %% lines: lines line { cout << "lines line\n"; } | { cout << "lines \n"; } ; line: content '\n' { cout << "line\n"; } ; content: expr { cout << "content - expr\n"; } | error { cout << "content - error\n" } | { cout << "content - empty\n"; } ; expr: NR | IDENT | '(' expr ')' | '-' expr %prec UMIN | expr '+' expr | expr '-' expr | expr '*' expr | expr '/' expr ; bisonc++-4.13.01/grammars/small0000644000175000017500000000022312633316117015113 0ustar frankfrank%token NR IDENT %% lines: lines line | ; line: content '\n' ; content: expr | error | ; expr: NR | IDENT ; bisonc++-4.13.01/grammars/grammar4.210000644000175000017500000000002112633316117015732 0ustar frankfrankS: CC C: cC C: d bisonc++-4.13.01/grammars/grammar4.110000644000175000017500000000005512633316117015740 0ustar frankfrankE: TD D: +TD D: T: FS S: *FS S: F: (E) F: i bisonc++-4.13.01/grammars/precedence0000644000175000017500000000030312633316117016077 0ustar frankfrank%token NR %left '+' '-' %left '*' '/' %right UMIN %% expr: expr '+' expr | expr '-' expr | expr '*' expr | expr '/' expr | '(' expr ')' | NR | '-' expr %prec UMIN ; bisonc++-4.13.01/grammars/grammar0000644000175000017500000000022712633316117015435 0ustar frankfrank%scanner scanner/scanner.h %token i %% S: L '=' R | R { cout << "Hello world\n"; } ; L: '*' R | i ; R: L ; bisonc++-4.13.01/grammars/input0000644000175000017500000000026612633316117015151 0ustar frankfrank%token NR %left '+' // %left '*' %% expr: expr '+' expr { $$ = $1 + $3; } //| // expr '*' expr // { // 
$$ = $1 * $3; // } | // {} NR ; bisonc++-4.13.01/grammars/grammar4.200000644000175000017500000000012112633316117015732 0ustar frankfrank%token i %% S: L '=' R | R ; L: '*' R | i ; R: L ;bisonc++-4.13.01/grammars/exprplus0000644000175000017500000000003012633316117015661 0ustar frankfrank%e 2 E: E-E E: -E E: n bisonc++-4.13.01/grammars/grammar4.190000644000175000017500000000004412633316117015746 0ustar frankfrankE: E+T E: T T: T*F T: F F: (E) F: i bisonc++-4.13.01/grammars/grammar.p2380000644000175000017500000000015012633316117016123 0ustar frankfrank%% S: 'a' A 'd' | 'b' B 'd' | 'a' B 'e' | 'b' A 'e' ; A: 'c' ; B: 'c' ; bisonc++-4.13.01/grammars/empty0000644000175000017500000000004412633316117015142 0ustar frankfrank%% S: S I | ; I: 'a' | ; bisonc++-4.13.01/grammars/ifelse0000644000175000017500000000006612633316117015257 0ustar frankfrank%token i e a %% S: i S e S | i S | a ; bisonc++-4.13.01/icmake/0000755000175000017500000000000012634751026013506 5ustar frankfrankbisonc++-4.13.01/icmake/installer0000755000175000017500000000050712634737620015437 0ustar frankfrank#!/bin/bash if [ $# -eq 0 ] ; then echo destination path, ending in /, must be provided exit 0 fi for src in `find -mindepth 1 -type d` # create missing target dirs do [ ! -e $1$src ] && mkdir -p $1$src done for file in `find -type f -or -type l` do cp -d --preserve=timestamps $file $1$file done bisonc++-4.13.01/icmake/setopt0000644000175000017500000000034112633316117014742 0ustar frankfrankstring setOpt(string install_im, string envvar) { list optvar; string ret; optvar = getenv(envvar); if (optvar[0] == "1") ret = optvar[1]; else ret = install_im; return ret; } bisonc++-4.13.01/icmake/loginstall0000644000175000017500000000156512633316117015605 0ustar frankfrank// source and dest, absolute or reachable from g_cwd, should exist. // files and links in source matching dest (if empty: all) are copied to dest // and are logged in g_log // Before they are logged, dest is created void logInstall(string src, string pattern, string dest) { list entries; int idx; chdir(g_cwd); md(dest); src += "/"; dest += "/"; if (listlen(makelist(O_DIR, src)) == 0) { printf("Warning: ", src, " not found: can't install ", src, pattern, " at ", dest, "\n"); return; } entries = findAll("f", src, pattern); for (idx = listlen(entries); idx--; ) run("cp " + src + entries[idx] + " " + dest); chdir(g_cwd); entries = findAll("l", src, pattern); for (idx = listlen(entries); idx--; ) run("cp " CPOPTS " " + src + entries[idx] + " " + dest); } bisonc++-4.13.01/icmake/logzip0000644000175000017500000000165612633316117014742 0ustar frankfrank// names may be a series of files in src, not a wildcard. // if it's empty then all files in src are used. // the files are gzipped and logged in dest. 
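// e.g., the install script below calls
//      logZip("tmp/man", "bisonc++.1", target);
// which gzips tmp/man/bisonc++.1 and copies the resulting bisonc++.1.gz
// into the directory `target'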
// src and dest do not have to end in / void logZip(string src, string names, string dest) { list files; int idx; string file; chdir(g_cwd); md(dest); dest += "/"; if (src != "") { if (listlen(makelist(O_DIR, src)) == 0) { printf("Warning: ", src, " not found: can't install ", src, names, " at ", dest, "\n"); return; } chdir(src); } if (names == "") files = makelist("*"); else files = strtok(names, " "); for (idx = listlen(files); idx--; ) { file = files[idx]; run("gzip -n -9 < " + file + " > " + file + ".gz"); } run("tar cf - *.gz | (cd " + g_cwd + "; cd " + dest + "; tar xf -)"); run("rm *.gz"); } bisonc++-4.13.01/icmake/clean0000644000175000017500000000110512633316117014505 0ustar frankfrankvoid clean(int dist) { run("rm -rf " "build-stamp configure-stamp " "options/SKEL " "tmp/*.o" + " o */o release.yo tmp/lib*.a " "parser/grammar.output" ); if (dist) run("rm -rf tmp bisonc++.ih.gch */*.ih.gch" ); chdir("documentation"); run("rm -rf " "man/*.1 " "man/*.3* " "man/*.html " "manual/*.html " "manual/invoking/usage " "manual/invoking/usage.txt " "usage/usage " ); exit(0); } bisonc++-4.13.01/icmake/logfile0000644000175000017500000000025712633316117015053 0ustar frankfrankvoid logFile(string srcdir, string src, string destdir, string dest) { chdir(g_cwd); md(destdir); run("cp " + srcdir + "/" + src + " " + destdir + "/" + dest); } bisonc++-4.13.01/icmake/findall0000644000175000017500000000107412633316117015041 0ustar frankfrank// assuming we're in g_cwd, all entries of type 'type' matching source/pattern // are returned w/o final \n list findAll(string type, string source, string pattern) { string cmd; list entries; list ret; int idx; chdir(source); cmd = "find ./ -mindepth 1 -maxdepth 1 -type " + type; if (pattern != "") pattern = "-name '" + pattern + "'"; entries = backtick(cmd + " " + pattern + " -printf \"%f\\n\""); for (idx = listlen(entries); idx--; ) ret += (list)cutEoln(entries[idx]); chdir(g_cwd); return ret; } bisonc++-4.13.01/icmake/manpage0000644000175000017500000000045512633316117015042 0ustar frankfrank#define MANPAGE "../../tmp/man/" ${PROGRAM} ".1" #define MANHTML "../../tmp/manhtml/" ${PROGRAM} "man.html" void manpage() { md("tmp/man tmp/manhtml"); chdir("documentation/man"); run("yodl2man -o " MANPAGE " " PROGRAM); run("yodl2html -o " MANHTML " " PROGRAM); exit(0); } bisonc++-4.13.01/icmake/logrecursive0000644000175000017500000000111112633316117016131 0ustar frankfrank// log recursively all files and directories in src as entries in dest // dest is created if necessary void logRecursive(string src, string dest) { list dirs; string next; int idx; chdir(g_cwd); if (!exists(src)) { printf("skipping unavailable directory `", src, "'\n"); return; } dirs = findAll("d", src, ""); // find all subdirs for (idx = listlen(dirs); idx--; ) // visit all subdirs { next = "/" + dirs[idx]; logRecursive(src + next, dest + next); } logInstall(src, "", dest); } bisonc++-4.13.01/icmake/cuteoln0000644000175000017500000000011312634751026015075 0ustar frankfrankstring cutEoln(string text) { return resize(text, strlen(text) - 1); } bisonc++-4.13.01/icmake/uninstall0000644000175000017500000000045612633316117015444 0ustar frankfrankvoid uninstall(string logfile) { int idx; list entry; string dir; list line; if (!exists(logfile)) { printf("installation log file " + logfile + " not found\n"); exit(0); } run("icmake/remove " + logfile + " " + (string)g_echo); exit(0); } bisonc++-4.13.01/icmake/pathfile0000644000175000017500000000055212633316117015224 0ustar frankfranklist path_file(string path) { list 
ret; int len; int idx; for (len = strlen(path), idx = len; idx--; ) { if (path[idx] == "/") { ret = (list)substr(path, 0, idx) + (list)substr(path, idx + 1, len); return ret; } } ret = (list)"" + (list)path; return ret; } bisonc++-4.13.01/icmake/destinstall0000644000175000017500000000031412634747621015763 0ustar frankfrankvoid destInstall(string dest, string base) { chdir(g_cwd + base); // go to the base directory run(g_cwd + "icmake/installer " + dest + "/ "); // install the files } bisonc++-4.13.01/icmake/github0000644000175000017500000000025212633316117014707 0ustar frankfrankvoid github() { run("cp -r release.yo tmp/manhtml/bisonc++man.html tmp/manual " "../../wip"); run("cp changelog ../../wip/changelog.txt"); exit(0); } bisonc++-4.13.01/icmake/manual0000644000175000017500000000133312633316117014703 0ustar frankfrank#define HTMLDEST "../../tmp/manual" void manual() { list files; string file; string cwd; int idx; string compiler; compiler = setOpt(CXX, "CXX"); cwd = chdir("."); md("tmp/manual/examples"); md("tmp/manual/essence"); chdir("documentation"); if (!exists("usage/usage")) { chdir("usage"); run(compiler + " -o usage usage.cc"); run("./usage > ../manual/invoking/usage.txt"); chdir(".."); } chdir("manual"); run("yodl2html -l3 bisonc++.yo"); run("mv *.html " HTMLDEST); run("cp -r grammar/poly " HTMLDEST); run("cp -r grammar/essence " HTMLDEST); run("cp -r examples/rpn/* " HTMLDEST "/examples"); exit(0); } bisonc++-4.13.01/icmake/md0000644000175000017500000000074012633316117014027 0ustar frankfrank// md: target should be a series of blank-delimited directories to be created // If an element is a whildcard, the directory will always be created, // using mkdir -p. // // uses: run() void md(string target) { int idx; list paths; string dir; if (!exists(target)) run("mkdir -p " + target); else if (((int)stat(target)[0] & S_IFDIR) == 0) { printf(target + " exists, but is not a directory\n"); exit(1); } } bisonc++-4.13.01/icmake/log0000755000175000017500000000063212633316117014213 0ustar frankfrank#!/bin/bash find tmp/install -type f -exec md5sum "{}" \; | sed 's|tmp/install|'$1'|' > $2 find tmp/install -type l -exec printf "link %s\n" "{}" \; | sed 's|tmp/install|'$1'|' >> $2 find tmp/install -type d -exec printf "dir %s\n" "{}" \; | sed 's|tmp/install|'$1'|' >> $2 bisonc++-4.13.01/icmake/remove0000755000175000017500000000117012633316117014725 0ustar frankfrank#!/bin/bash g_echo=$2 rm_f() { [ $g_echo -ne 0 ] && echo rm $1 rm -f $1 } rm_dir() { [ $g_echo -ne 0 ] && echo rmdir $1 rmdir --ignore-fail-on-non-empty -p $1 } IFS=" " for line in `cat $1` do field1=`echo $line | awk '{printf $1}'` field2=`echo $line | awk '{printf $2}'` if [ $field1 == "link" ] ; then rm_f $field2 elif [ $field1 == "dir" ] ; then rm_dir $field2 elif [ -e "$field2" ] ; then if [ "$field1" != "`md5sum $field2 | awk '{printf $1}'`" ] ; then echo $field2 changed, not removed else rm_f $field2 fi fi done rm_f $1 bisonc++-4.13.01/icmake/precompileheaders0000644000175000017500000000133612633316117017124 0ustar frankfrankstring g_compiler; int g_gch = 1; void _precompile(string class) { string classIH; classIH = class + ".ih"; if (classIH younger class + ".ih.gch") run(g_compiler + " -x c++-header " + classIH); } void precompileHeaders() { int idx; list classes; string class; if (!g_gch) return; classes = makelist(O_SUBDIR, "*"); g_compiler = setOpt(CXX, "CXX") + " " + setOpt(CXXFLAGS, "CXXFLAGS") + " "; _precompile("main"); // precompile the main program .ih file for (idx = listlen(classes); idx--; ) { class = 
classes[idx]; chdir(class); _precompile(class); chdir(g_cwd); } } bisonc++-4.13.01/icmake/special0000644000175000017500000000061012633316117015043 0ustar frankfrankvoid special() { if ("INSTALL.im" newer "options/SKEL") run("echo \"#define _Skel_ \\\"" SKEL + "\\\"\" > options/SKEL"); if (! exists("release.yo") || "VERSION" newer "release.yo") { system("touch version.cc"); run("gcc -E VERSION.h | grep -v '#' | sed 's/\\\"//g' > " "release.yo"); } } bisonc++-4.13.01/icmake/run0000644000175000017500000000015112633316117014227 0ustar frankfrankvoid run(string cmd) { if (g_echo == OFF) cmd += "> /dev/null 2>&1"; system(0, cmd); } bisonc++-4.13.01/icmake/install0000644000175000017500000000741112634750723015105 0ustar frankfrank void install(string request, string dest) { string target; int components = 0; list pathsplit; string base; base = "tmp/install/"; md(base); if (request == "x") components = 63; else { if (strfind(request, "a") != -1) // additional documentation components |= 1; if (strfind(request, "b") != -1) // binary program components |= 2; if (strfind(request, "d") != -1) // standard documentation components |= 4; if (strfind(request, "m") != -1) // man-pages components |= 8; if (strfind(request, "s") != -1) // skeleton components |= 16; if (strfind(request, "u") != -1) // user guide components |= 32; } chdir(g_cwd); // determine the destination path if (dest == "") dest = "/"; else md(dest); dest = cutEoln(backtick("readlink -f " + dest)[0]); if (g_logPath != "") backtick("icmake/log " + dest + " " + g_logPath); if (components & 1) { printf("\n installing additional documentation\n"); target = base + "a/" ADD + "/bison-docs/"; printf(" installing original bison's docs below `", target + "'\n"); logRecursive("documentation/html", target); target = base + "a/" ADD + "/examples/"; printf(" installing examples below `", target + "'\n"); logRecursive("documentation/examples", target); logRecursive("documentation/man/calculator", target + "/calculator/"); printf(" installing regression tests below `", target + "'\n"); logRecursive("documentation/regression", target + "/regression/"); printf(" installing polymorphic semval demo at `", target + "poly/'\n"); logRecursive("tmp/manual/poly", target + "poly/"); printf(" installing polymorphic impl. 
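// install() interprets `request' as a set of component letters (cf. the
// INSTALL instructions): a = additional documentation (1), b = binary
// program (2), d = standard documentation (4), m = man-pages (8),
// s = skeleton files (16), u = user guide (32); "x" selects all
// components (63).  `dest' is an optional base directory below which the
// components are installed; when empty, "/" is used.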
demo at `", target + "/essence/'\n"); logRecursive("tmp/manual/essence", target + "/essence/"); chdir(base); // go to the base directory destInstall(dest, base + "a"); } if (components & 2) { target = base + "b/" BINARY; pathsplit = path_file(target); printf(" installing the executable `", target, "'\n"); logFile("tmp/bin", "binary", pathsplit[0], pathsplit[1]); destInstall(dest, base + "b"); } if (components & (4 | 8)) { target = base + "d/" DOC "/"; if (components & 4) { printf(" installing the changelog at `", target, "\n"); logZip("", "changelog", target ); destInstall(dest, base + "d"); } if (components & 8) { printf(" installing the html-manual pages at `", target, "\n"); logInstall("tmp/manhtml", "", target); destInstall(dest, base + "d"); } } if (components & 8) { target = base + "m/" MAN "/"; printf(" installing the manual pages below `", target, "'\n"); logZip("tmp/man", "bisonc++.1", target); destInstall(dest, base + "m"); } if (components & 16) { target = base + "s/" SKEL "/"; printf(" installing skeleton files at `" + target + "'\n"); logInstall("skeletons", "", target); destInstall(dest, base + "s"); } if (components & 32) { target = base + "u/" UGUIDE "/"; printf(" installing the user-guide at `", target, "'\n"); logInstall("tmp/manual", "", target); destInstall(dest, base + "u"); } printf("\n Installation completed\n"); exit(0); } bisonc++-4.13.01/icmake/adddir0000644000175000017500000000112512633316117014654 0ustar frankfranklist addDir(list dir, string entry) { list ret; int idx; int keep = 1; string elem; for (idx = listlen(dir); idx--; ) { elem = dir[idx]; if (strfind(entry, elem) != -1) // entry contains dir, ignore dir ret += (list)entry; else if (strfind(elem, entry) != -1) // dir contains entry { ret += (list)elem; keep = 0; } else // new unique entry, keep dir[idx] ret += (list)elem; } if (keep) ret += (list)entry; return ret; } bisonc++-4.13.01/icmake/backtick0000644000175000017500000000016012633316117015176 0ustar frankfranklist backtick(string arg) { list ret; echo(OFF); ret = `arg`; echo(g_echo); return ret; } bisonc++-4.13.01/icmake/logfiles0000644000175000017500000000034212633316117015231 0ustar frankfrankvoid logFiles(string source, string dest) { list files; int idx; echo(OFF); files = `"(chdir " + source + ";find ./ -type f)"`; for (idx = listlen(files); idx--; ) log(dest + "/" + files[idx]); } bisonc++-4.13.01/icmconf0000644000175000017500000000220312633316117013610 0ustar frankfrank#include "INSTALL.im" // Inspect the following #defines. Change them to taste. 
If you don't // need a particular option, change its value into an empty string // For more information about this file: 'man 7 icmconf' #define MAIN "bisonc++.cc" #define REFRESH #define LIBRARY "modules" #define SHAREDREQ "" //#define CLS #define USE_ALL "a" #define SOURCES "*.cc" #define ADD_LIBRARIES "bobcat" #define ADD_LIBRARY_PATHS "" #define SCANNER_DIR "scanner" #define SCANGEN "flexc++" #define SCANFLAGS "" #define SCANSPEC "lexer" //#define SCANFILES "" #define SCANOUT "lex.cc" #define PARSER_DIR "parser" #define PARSGEN "bisonc++" #define PARSFLAGS "-V" #define PARSSPEC "grammar" #define PARSFILES "inc/*" #define PARSOUT "parse.cc" #define USE_ECHO ON #define TMP_DIR "tmp" #define OBJ_EXT ".o" #define USE_VERSION #define COMPILER "" #define COMPILER_OPTIONS "" #define LINKER_OPTIONS "" bisonc++-4.13.01/item/0000755000175000017500000000000012633316117013210 5ustar frankfrankbisonc++-4.13.01/item/transitsto.cc0000644000175000017500000000060312633316117015730 0ustar frankfrank#include "item.ih" // return true if the current item is the predecessor of // the next item. So, their rules are identical, but // the current item's dot position is one lower than the // the next item's dot position. bool Item::transitsTo(Item const &next) const { return d_dot + 1 == next.d_dot && d_production == next.d_production; } bisonc++-4.13.01/item/item3.cc0000644000175000017500000000022612633316117014540 0ustar frankfrank#include "item.ih" Item::Item(Production const *production, size_t dot) : d_production(production), d_dot(dot) { d_production->used(); } bisonc++-4.13.01/item/item.h0000644000175000017500000001142212633316117014317 0ustar frankfrank#ifndef _INCLUDED_ITEM_ #define _INCLUDED_ITEM_ #include #include "../production/production.h" #include "../lookaheadset/lookaheadset.h" // An Item represents a point in a production rule. The point is indicated by // its 'dot', which represents the position until where that particular // production rule has been recognized. class Item { friend std::ostream &operator<<(std::ostream &out, Item const &item); Production const *d_production; // Pointer to the production rule // associated with this item size_t d_dot; // The position in the production // rule until where the rule has // been recognized. Dot position 0 // is only found in the start-rule // and augmented grammar rule and // represents the position where // nothing has as yet been // recognized. static std::ostream &(Item::*s_insertPtr)(std::ostream &out) const; public: typedef std::vector Vector; typedef std::vector::const_iterator ConstIter; Item(); Item(Production const *start); // initial item, starts at the // start-production. 
Dot = 0, // Lookahead = EOF Item(Item const *item, size_t dot); Item(Production const *prod, size_t dot); //, LookaheadSet const &laSet); // see State::beforeDot() to read why this function is only called // when d_dot > 0 Symbol const *dotSymbol() const; // symbol at the dot (must Symbol const *dotSymbolOr0() const; // symbol at the dot or 0 Symbol const *symbolBeforeDot() const; // symbol before the dot Item incDot() const; // true: returned FirstSet // contained (now removed) // epsilon bool firstBeyondDot(FirstSet *firstSet) const; Production const *production() const; // *rhs() const Symbol const *lhs() const; bool empty() const; // if no size bool hasRightOfDot(Symbol const &symbol) const; bool operator==(Item const &other) const; bool isReducible() const; // if dot at end bool transitsTo(Item const &other) const; size_t dot() const; size_t productionSize() const; static void inserter(std::ostream &(Item::*insertPtr) (std::ostream &out) const); std::ostream &plainItem(std::ostream &out) const; std::ostream &pNrDotItem(std::ostream &out) const; Symbol const *beyondDotIsNonTerminal() const; // 0 if not, otherwise // the N terminal private: std::ostream &insert(std::ostream &out, Production const *prod) const; }; inline Symbol const *Item::dotSymbol() const // symbol at the dot (must { // exist!) return &(*d_production)[d_dot]; } inline Symbol const *Item::dotSymbolOr0() const // symbol at the dot or 0 { // if dot at end of production return d_dot == d_production->size() ? 0 : dotSymbol(); } inline Symbol const *Item::symbolBeforeDot() const // symbol before the dot { return &(*d_production)[d_dot - 1]; } inline size_t Item::dot() const { return d_dot; } inline Item Item::incDot() const { return Item(this, d_dot + 1); } inline Production const *Item::production() const { return d_production; } inline Symbol const *Item::lhs() const { return d_production->lhs(); } inline bool Item::isReducible() const // if dot at end { return d_dot == productionSize(); } inline bool Item::empty() const // if no size { return productionSize() == 0; } inline size_t Item::productionSize() const { return d_production->size(); } inline void Item::inserter(std::ostream &(Item::*insertPtr) (std::ostream &out) const) { s_insertPtr = insertPtr; } inline std::ostream &operator<<(std::ostream &out, Item const &item) { return (item.*Item::s_insertPtr)(out); // Set by static void inserter(Item::*insertPtr) // to 'plainItem' or 'pNrDotItem' } #endif bisonc++-4.13.01/item/pnrdotitem.cc0000644000175000017500000000036612633316117015711 0ustar frankfrank#include "item.ih" std::ostream &Item::pNrDotItem(std::ostream &out) const { Production const *prod = production(); if (!prod) return out; out << "[P" << prod->nr() << " " << dot() << "] "; return insert(out, prod); } bisonc++-4.13.01/item/insert.cc0000644000175000017500000000073212633316117015025 0ustar frankfrank#include "item.ih" ostream &Item::insert(ostream &out, Production const *prod) const { Terminal::inserter(&Terminal::plainName); NonTerminal::inserter(&NonTerminal::plainName); out << prod->lhs()->name() << " -> "; copy(prod->begin(), prod->begin() + dot(), ostream_iterator(out, " ")); out << " . 
"; copy(prod->begin() + dot(), prod->end(), ostream_iterator(out, " ")); return out; } bisonc++-4.13.01/item/beyonddotisnonterminal.cc0000644000175000017500000000041612633316117020312 0ustar frankfrank#include "item.ih" Symbol const *Item::beyondDotIsNonTerminal() const { if (d_dot < d_production->size()) { Symbol const &symbol = (*d_production)[d_dot]; if (symbol.isNonTerminal()) return &symbol; } return 0; } bisonc++-4.13.01/item/item1.cc0000644000175000017500000000035512633316117014541 0ustar frankfrank#include "item.ih" Item::Item(Production const *start) : d_production(start), d_dot(0) // In this item we haven't as yet seen any part of the // production rule. { d_production->used(); } bisonc++-4.13.01/item/firstbeyonddot.cc0000644000175000017500000000126612633316117016563 0ustar frankfrank#include "item.ih" bool Item::firstBeyondDot(FirstSet *firstSet) const { // this MUST be dot < size for (size_t dot = d_dot + 1, size = productionSize(); dot < size; ++dot) { *firstSet += (*d_production)[dot].firstSet(); if (!firstSet->hasEpsilon()) // no more elements if no epsilon in return false; // the set, return `no epsilon' firstSet->rmEpsilon(); } // at this point we've seen all elements beyond dot, and the last one // contained epsilon, so more first-elements can be added, and we return // true. An empty set will also return true. return true; } bisonc++-4.13.01/item/data.cc0000644000175000017500000000014012633316117014423 0ustar frankfrank#include "item.ih" ostream &(Item::*Item::s_insertPtr)(ostream &out) const = &Item::plainItem; bisonc++-4.13.01/item/plainitem.cc0000644000175000017500000000032312633316117015477 0ustar frankfrank#include "item.ih" ostream &Item::plainItem(ostream &out) const { Production const *prod = production(); if (!prod) return out; out << prod->nr() << ": "; return insert(out, prod); } bisonc++-4.13.01/item/item0.cc0000644000175000017500000000026712633316117014542 0ustar frankfrank#include "item.ih" Item::Item() : d_production(0), d_dot(0) // In this item we haven't as yet seen any part of the // production rule. {} bisonc++-4.13.01/item/frame0000644000175000017500000000003712633316117014225 0ustar frankfrank#include "item.ih" Item:: { } bisonc++-4.13.01/item/item2.cc0000644000175000017500000000016712633316117014543 0ustar frankfrank#include "item.ih" Item::Item(Item const *item, size_t dot) : d_production(item->d_production), d_dot(dot) {} bisonc++-4.13.01/item/operatorequal.cc0000644000175000017500000000041112633316117016376 0ustar frankfrank#include "item.ih" // `smaller' for the rule ptr means: pointing to an earlier // element in the rules-map. bool Item::operator==(Item const &other) const { return d_dot == other.d_dot && d_production == other.d_production; } bisonc++-4.13.01/item/item.ih0000644000175000017500000000030512633316117014466 0ustar frankfrank#include "item.h" #include #include #include #include #include "../firstset/firstset.h" #include "../nonterminal/nonterminal.h" using namespace std; bisonc++-4.13.01/item/hasrightofdot.cc0000644000175000017500000000044512633316117016367 0ustar frankfrank#include "item.ih" bool Item::hasRightOfDot(Symbol const &symbol) const { return d_dot < d_production->size() // dot position before the end && // and d_production->rhs(d_dot) == &symbol; // symbol is at the . 
position } bisonc++-4.13.01/lookaheadset/0000755000175000017500000000000012633316117014715 5ustar frankfrankbisonc++-4.13.01/lookaheadset/lookaheadset3.cc0000644000175000017500000000020312633316117017745 0ustar frankfrank#include "lookaheadset.ih" LookaheadSet::LookaheadSet(LookaheadSet const &other) : FirstSet(other), d_EOF(other.d_EOF) {} bisonc++-4.13.01/lookaheadset/lookaheadset1.cc0000644000175000017500000000013312633316117017745 0ustar frankfrank#include "lookaheadset.ih" LookaheadSet::LookaheadSet(EndStatus eof) : d_EOF(eof) {} bisonc++-4.13.01/lookaheadset/operatorgreaterequal.cc0000644000175000017500000000056212633316117021464 0ustar frankfrank#include "lookaheadset.ih" // Return true if all elements in `other' are already in *this bool LookaheadSet::operator>=(LookaheadSet const &other) const { return (hasEpsilon() || !other.hasEpsilon()) && (d_EOF == e_withEOF || other.d_EOF == e_withoutEOF) && includes(begin(), end(), other.begin(), other.end()); } bisonc++-4.13.01/lookaheadset/operatorinsert.cc0000644000175000017500000000021112633316117020276 0ustar frankfrank#include "lookaheadset.ih" ostream &operator<<(ostream &out, LookaheadSet const &lookaheadSet) { return lookaheadSet.insert(out); } bisonc++-4.13.01/lookaheadset/lookaheadset2.cc0000644000175000017500000000020612633316117017747 0ustar frankfrank#include "lookaheadset.ih" LookaheadSet::LookaheadSet(FirstSet const &firstSet) : FirstSet(firstSet), d_EOF(e_withoutEOF) {} bisonc++-4.13.01/lookaheadset/insert.cc0000644000175000017500000000040312633316117016525 0ustar frankfrank#include "lookaheadset.ih" ostream &LookaheadSet::insert(ostream &out) const { out << "{ "; copy(begin(), end(), ostream_iterator(out, " ")); if (d_EOF == e_withEOF) out << " "; out << "}"; return out; } bisonc++-4.13.01/lookaheadset/operatorsubis.cc0000644000175000017500000000064712633316117020134 0ustar frankfrank#include "lookaheadset.ih" LookaheadSet &LookaheadSet::operator-=(LookaheadSet const &other) { std::set difference; set_difference(begin(), end(), other.begin(), other.end(), inserter(difference, difference.begin())); *reinterpret_cast *>(this) = difference; if (other.d_EOF == e_withEOF) d_EOF = e_withoutEOF; return *this; } bisonc++-4.13.01/lookaheadset/lookaheadset.ih0000644000175000017500000000017012633316117017700 0ustar frankfrank#include "lookaheadset.h" #include #include #include "../rules/rules.h" using namespace std; bisonc++-4.13.01/lookaheadset/operatorsubis2.cc0000644000175000017500000000037312633316117020212 0ustar frankfrank#include "lookaheadset.ih" LookaheadSet &LookaheadSet::operator-=(Symbol const *symbol) { if (symbol == Rules::eofTerminal()) d_EOF = e_withoutEOF; else reinterpret_cast(this)->erase(symbol); return *this; } bisonc++-4.13.01/lookaheadset/operatorplusis2.cc0000644000175000017500000000025012633316117020376 0ustar frankfrank#include "lookaheadset.ih" LookaheadSet &LookaheadSet::operator+=(FirstSet const &firstSet) { *reinterpret_cast(this) += firstSet; return *this; } bisonc++-4.13.01/lookaheadset/lookaheadset.h0000644000175000017500000000514312633316117017534 0ustar frankfrank#ifndef _INCLUDED_LOOKAHEADSET_ #define _INCLUDED_LOOKAHEADSET_ #include "../firstset/firstset.h" #include "../terminal/terminal.h" class LookaheadSet: private FirstSet { // 'Elements const ptrs' are known to be 'Terminal const ptrs' public: using FirstSet::begin; // members from FirstSet made using FirstSet::end; // available for LookaheadSet enum EndStatus { e_withoutEOF, e_withEOF, }; private: EndStatus d_EOF; // end-marker in the 
lookahead set public: FirstSet &firstSet(); LookaheadSet(EndStatus eof = e_withoutEOF); LookaheadSet(FirstSet const &firstSet); LookaheadSet(LookaheadSet const &other); LookaheadSet &operator-=(LookaheadSet const &other); LookaheadSet &operator+=(LookaheadSet const &other); LookaheadSet &operator+=(FirstSet const &other); LookaheadSet &operator-=(Symbol const *symbol); // true if *this contains other, // i.e., true if other has NO elements not already // present in the current LookaheadSet bool operator>=(LookaheadSet const &other) const; // true if `symbol' is found in *this bool operator>=(Symbol const *symbol) const; bool operator<(LookaheadSet const &other) const; bool operator==(LookaheadSet const &other) const; std::ostream &insert(std::ostream &out) const; LookaheadSet intersection(LookaheadSet const &other) const; bool hasEOF() const; void rmEOF(); bool empty() const; size_t fullSize() const; private: LookaheadSet(Element const **begin, Element const **end); }; inline LookaheadSet::LookaheadSet(Element const **begin, Element const **end) : FirstSet(begin, end) {} inline FirstSet &LookaheadSet::firstSet() { return *this; } inline bool LookaheadSet::operator>=(Symbol const *symbol) const { return find(symbol) != end(); } inline bool LookaheadSet::operator<(LookaheadSet const &other) const { return not (*this >= other); } inline bool LookaheadSet::hasEOF() const { return d_EOF == e_withEOF; } inline void LookaheadSet::rmEOF() { d_EOF = e_withoutEOF; } inline bool LookaheadSet::empty() const { return d_EOF == e_withoutEOF && FirstSet::empty(); } inline size_t LookaheadSet::fullSize() const { return size() + (d_EOF == e_withEOF); } std::ostream &operator<<(std::ostream &out, LookaheadSet const &LookaheadSet); #endif bisonc++-4.13.01/lookaheadset/intersection.cc0000644000175000017500000000102512633316117017730 0ustar frankfrank#include "lookaheadset.ih" LookaheadSet LookaheadSet::intersection(LookaheadSet const &other) const { Element const *setResult[max(size(), other.size())]; LookaheadSet ret( setResult, set_intersection ( begin(), end(), other.begin(), other.end(), setResult ) ); ret.d_EOF = hasEOF() && other.hasEOF() ? 
e_withEOF : e_withoutEOF; return ret; } bisonc++-4.13.01/lookaheadset/operatorplusis.cc0000644000175000017500000000034512633316117020321 0ustar frankfrank#include "lookaheadset.ih" LookaheadSet &LookaheadSet::operator+=(LookaheadSet const &other) { *reinterpret_cast(this) += other; if (other.d_EOF == e_withEOF) d_EOF = e_withEOF; return *this; } bisonc++-4.13.01/next/0000755000175000017500000000000012633316117013230 5ustar frankfrankbisonc++-4.13.01/next/transitionkernel.cc0000644000175000017500000000111712633316117017132 0ustar frankfrank#include "next.ih" ostream &Next::transitionKernel(ostream &out) const { checkRemoved(out); Terminal::inserter(&Terminal::plainName); NonTerminal::inserter(&NonTerminal::plainName); // out << "On " << pSymbol() << " to state " << out << " On "; Symbol const *ps = pSymbol(); if (ps) out << ps; else out << "????"; out << " to state " << d_next << " with ("; // static_cast(d_next) << " with ("; copy(d_kernel.begin(), d_kernel.end(), ostream_iterator(out, " ")); return out << ")"; } bisonc++-4.13.01/next/buildkernel.cc0000644000175000017500000000037612633316117016045 0ustar frankfrank#include "next.ih" void Next::buildKernel(Item::Vector *kernel, StateItem::Vector const &stateItem) { for (size_t idx = 0; idx < d_kernel.size(); ++idx) kernel->push_back(stateItem[d_kernel[idx]].item().incDot()); } bisonc++-4.13.01/next/checkremoved.cc0000644000175000017500000000060512633316117016177 0ustar frankfrank#include "next.ih" void Next::checkRemoved(ostream &out) const { if (d_symbol != 0) out << ": "; else // symbols may be removed by the SRConflict { // resolution process. if (d_forced) out << " (AUTO REMOVED by S/R resolution): "; else out << " (removed by precedence): "; } } bisonc++-4.13.01/next/transition.cc0000644000175000017500000000166112633316117015735 0ustar frankfrank#include "next.ih" // ----------------------------------------- // d_removed // ----------------------------- // d_symbol 0 not 0 // ----------------------------------------- // 0 item item removed by // removed SR resolution // in favor of // Reduce // not 0 item active does not happen // ----------------------------------------- std::ostream &Next::transition(std::ostream &out) const { checkRemoved(out); out << " On "; Symbol const *ps = pSymbol(); if (ps) out << ps; else out << "????"; out << " to state " << static_cast(d_next); return out; // // return out << " On " ////<< pSymbol() //<< " to state " << // static_cast(d_next); } bisonc++-4.13.01/next/next.ih0000644000175000017500000000022012633316117014522 0ustar frankfrank#include "next.h" #include #include #include #include "../nonterminal/nonterminal.h" using namespace std; bisonc++-4.13.01/next/next.h0000644000175000017500000001034412633316117014361 0ustar frankfrank#ifndef _INCLUDED_NEXT_ #define _INCLUDED_NEXT_ #include #include #include #include "../enumsolution/enumsolution.h" #include "../statetype/statetype.h" #include "../symbol/symbol.h" #include "../item/item.h" #include "../stateitem/stateitem.h" // The state to transit to on a given terminal symbol. Refer to // README.states-and-conflicts for a more detailed description of the class // Next. class Next { public: typedef Enum::Solution Solution; typedef std::vector Vector; typedef Vector::iterator Iterator; typedef Vector::const_iterator ConstIter; private: friend std::ostream &operator<<(std::ostream &out, Next const &next); Symbol const *d_symbol; // on this symbol we transit to state // d_next. 
Symbol const *d_removed; // if d_symbol == 0 and d_removed // isn't then this transition has been // removed, and d_removed holds the // original d_symbol value bool d_forced; // forced removal of shift transition // due to undefined precedence of a // production rule size_t d_next; // the index of the state to transit // to on d_symbol. std::vector d_kernel; // d_kernel[idx] contains the index of // an item in the `parent' state // (i.e., the state transiting to // state d_next on d_symbol) to // d_next's (kernel) item idx. So, the // number of kernel items in state // d_next is equal to d_kernel.size(). StateType d_stateType; // the type of the state static std::ostream &(Next::*s_insertPtr)(std::ostream &out) const; public: Next(); Next(Symbol const *symbol, size_t stateItemOffset); Solution solveByAssociation() const; Solution solveByPrecedence(Symbol const *productionSymbol) const; Symbol const *symbol() const; size_t next() const; std::vector const &kernel() const; void buildKernel(Item::Vector *kernel, StateItem::Vector const &stateItem); void setNext(size_t next); std::ostream &transition(std::ostream &out) const; std::ostream &transitionKernel(std::ostream &out) const; void checkRemoved(std::ostream &out) const; Symbol const *pSymbol() const; static size_t addToKernel(Vector &next, Symbol const *symbol, size_t stateItemOffset); bool hasSymbol(Symbol const *symbol) const; bool inLAset(LookaheadSet const &laSet) const; static void removeShift(RmShift const &rmShift, Vector &nextVector, size_t *nRemoved); static void inserter(std::ostream &(Next::*insertPtr) (std::ostream &out) const); }; inline std::vector const &Next::kernel() const { return d_kernel; } inline Symbol const *Next::symbol() const { return d_symbol; } inline bool Next::hasSymbol(Symbol const *symbol) const { return d_symbol == symbol; } inline Symbol const *Next::pSymbol() const { return d_symbol ? 
d_symbol : d_removed; } inline size_t Next::next() const { return d_next; } inline void Next::setNext(size_t next) { d_next = next; } inline bool Next::inLAset(LookaheadSet const &laSet) const { return laSet >= d_symbol; } inline void Next::inserter(std::ostream &(Next::*insertPtr) (std::ostream &out) const) { s_insertPtr = insertPtr; } inline std::ostream &operator<<(std::ostream &out, Next const &next) { return (next.*Next::s_insertPtr)(out); // Set by static void inserter(Next::*insertPtr) // to 'transition' or 'transitionKernel' } #endif bisonc++-4.13.01/next/next2.cc0000644000175000017500000000036312633316117014601 0ustar frankfrank#include "next.ih" Next::Next(Symbol const *symbol, size_t stateItemOffset) : d_symbol(symbol), d_removed(symbol), d_forced(false), d_next(string::npos), d_kernel(1, stateItemOffset), d_stateType(StateType::NORMAL) {} bisonc++-4.13.01/next/solvebyassociation.cc0000644000175000017500000000067112633316117017463 0ustar frankfrank#include "next.ih" Next::Solution Next::solveByAssociation() const { switch (Terminal::downcast(d_symbol)->association()) { default: return Solution::UNDECIDED; case Terminal::NONASSOC: // left or nonassoc: reduce first case Terminal::LEFT: return Solution::REDUCE; case Terminal::RIGHT: // right assoc.: shift first return Solution::SHIFT; } } bisonc++-4.13.01/next/data.cc0000644000175000017500000000014112633316117014444 0ustar frankfrank#include "next.ih" ostream &(Next::*Next::s_insertPtr)(ostream &out) const = &Next::transition; bisonc++-4.13.01/next/addtokernel.cc0000644000175000017500000000070612633316117016036 0ustar frankfrank#include "next.ih" size_t Next::addToKernel(Next::Vector &next, Symbol const *symbol, size_t stateItemOffset) { Iterator it = find_if( next.begin(), next.end(), [&](Next const &next) { return next.d_symbol == symbol; } ); it->d_kernel.push_back(stateItemOffset); return it - next.begin(); } bisonc++-4.13.01/next/frame0000644000175000017500000000003712633316117014245 0ustar frankfrank#include "next.ih" Next:: { } bisonc++-4.13.01/next/next1.cc0000644000175000017500000000023312633316117014574 0ustar frankfrank#include "next.ih" Next::Next() : d_symbol(0), d_removed(0), d_forced(false), d_next(string::npos), d_stateType(StateType::NORMAL) {} bisonc++-4.13.01/next/solvebyprecedence.cc0000644000175000017500000000072712633316117017246 0ustar frankfrank#include "next.ih" Next::Solution Next::solveByPrecedence(Symbol const *productionSymbol) const { switch (Terminal::comparePrecedence(d_symbol, productionSymbol)) { default: // EQUAL: return Solution::UNDECIDED; case Terminal::SMALLER: // shift precedence < prod. prec. return Solution::REDUCE; case Terminal::LARGER: // shift precedence > prod. prec. return Solution::SHIFT; } } bisonc++-4.13.01/next/removeshift.cc0000644000175000017500000000072312633316117016074 0ustar frankfrank#include "next.ih" void Next::removeShift(RmShift const &rmShift, Vector &nextVector, size_t *nRemoved) { Next &next = nextVector[rmShift.idx()]; if (next.d_symbol) // removeShift may be called multiple times. { // This prevents d_removed from being overwritten next.d_removed = next.d_symbol; next.d_forced = rmShift.forced(); next.d_symbol = 0; ++*nRemoved; } } bisonc++-4.13.01/nonterminal/0000755000175000017500000000000012633316117014600 5ustar frankfrankbisonc++-4.13.01/nonterminal/setfirst.cc0000644000175000017500000000367412633316117016764 0ustar frankfrank#include "nonterminal.ih" // For an empty production, the N gets . 
For non-empty // productions: add those element's first symbols, and stop if an // element has no empty production. If, at the end, the final element // has an empty production, add as well void NonTerminal::setFirst(NonTerminal *nonTerminal) { FirstSet &firstSet = nonTerminal->d_first; Production::Vector &prodVect = nonTerminal->d_production; if (!prodVect.size()) // empty production firstSet.addEpsilon(); // add epsilon to FirstSet else { bool hasEpsilon = false; // include epsilon at the end? for // visit all elements of a production ( Production::Vector::const_iterator it = prodVect.begin(); it != prodVect.end(); ++it ) { Production const &production = **it; bool epsilon = true; // epsilon in this rule's elements for ( auto symPtrIt = production.begin(); symPtrIt != production.end(); ++symPtrIt ) { firstSet += (*symPtrIt)->firstSet(); // add the element's firstSet // element without if (!(*symPtrIt)->firstSet().hasEpsilon()) { epsilon = false; break; // and done, this productionrule } } hasEpsilon |= epsilon; // once a production rule includes // epsilon, hasEpsilon remains true } if (hasEpsilon) firstSet.addEpsilon(); else firstSet.rmEpsilon(); } s_counter += firstSet.setSize(); } bisonc++-4.13.01/nonterminal/destructor.cc0000644000175000017500000000007312633316117017305 0ustar frankfrank#include "nonterminal.ih" NonTerminal::~NonTerminal() { } bisonc++-4.13.01/nonterminal/nonterminal1.cc0000644000175000017500000000021312633316117017512 0ustar frankfrank#include "nonterminal.ih" NonTerminal::NonTerminal(string const &name, string const &stype, Type type) : Symbol(name, type, stype) {} bisonc++-4.13.01/nonterminal/unused.cc0000644000175000017500000000055612633316117016420 0ustar frankfrank#include "nonterminal.ih" void NonTerminal::unused(NonTerminal const *nonTerminal) { if (!nonTerminal->isUsed()) { if (!s_unused) { Global::plainWarnings(); wmsg << "Non-terminal symbol(s) not used in productions:" << endl; s_unused = true; } wmsg << " " << nonTerminal << endl; } } bisonc++-4.13.01/nonterminal/nonterminal.ih0000644000175000017500000000027112633316117017450 0ustar frankfrank#include "nonterminal.h" #include #include #include namespace Global { void plainWarnings(); } using namespace std; using namespace FBB; bisonc++-4.13.01/nonterminal/data.cc0000644000175000017500000000044712633316117016025 0ustar frankfrank#include "nonterminal.ih" size_t NonTerminal::s_counter; size_t NonTerminal::s_number; bool NonTerminal::s_unused; bool NonTerminal::s_undefined; ostream &(NonTerminal::*NonTerminal::s_insertPtr)(ostream &out) const = &NonTerminal::plainName; bisonc++-4.13.01/nonterminal/undefined.cc0000644000175000017500000000046512633316117017055 0ustar frankfrank#include "nonterminal.ih" void NonTerminal::undefined(NonTerminal const *nonTerminal) { if (nonTerminal->isUsed() && !nonTerminal->nProductions()) { s_undefined = true; emsg << "No production rules for non-terminal `" << nonTerminal->name() << '\'' << endl; } } bisonc++-4.13.01/nonterminal/frame0000644000175000017500000000005512633316117015615 0ustar frankfrank#include "nonterminal.ih" NonTerminal:: { } bisonc++-4.13.01/nonterminal/nonterminal.h0000644000175000017500000001200012633316117017270 0ustar frankfrank#ifndef _INCLUDED_NONTERMINAL_ #define _INCLUDED_NONTERMINAL_ #include #include #include #include "../symbol/symbol.h" #include "../production/production.h" #include "../firstset/firstset.h" class NonTerminal: public Symbol { public: typedef std::vector Vector; private: Production::Vector d_production; // production rules in a 
vector // of ptrs to Production objects FirstSet d_first; // set of terminals that can be // encountered at this NonTerminal size_t d_nr; // the NonTerminal's number static size_t s_counter; // counts the number of symbols in first // sets. May be reset to 0 by // resetCounter() static size_t s_number; // incremented at each call of setNr() static bool s_unused; // prevents multiple unused warnings static bool s_undefined; // set to true once at least one // nonterminal is not used. static std::ostream &(NonTerminal::*s_insertPtr)(std::ostream &out) const; // pointer to the insertion function to be // used. public: NonTerminal(std::string const &name, std::string const &stype = "", Type type = NON_TERMINAL); ~NonTerminal(); Production::Vector &productions(); Production::Vector const &productions() const; size_t firstSize() const; size_t nProductions() const; std::set const &firstTerminals() const; void addEpsilon() ; void addProduction(Production *next); static NonTerminal *downcast(Symbol *sp); static NonTerminal const *downcast(Symbol const *sp); static size_t counter(); static void resetCounter(); static void setFirst(NonTerminal *nonTerminal); static void setFirstNr(size_t nr); static void setNonTerminal(NonTerminal *nonTerminal); static void setNr(NonTerminal *np); static void undefined(NonTerminal const *nonTerminal); static void unused(NonTerminal const *nonTerminal); static bool notUsed(); static bool notDefined(); static void inserter(std::ostream &(NonTerminal::*insertPtr) (std::ostream &out) const); // plain name std::ostream &plainName(std::ostream &out) const; std::ostream &nameAndFirstset(std::ostream &out) const; // the N's value std::ostream &value(std::ostream &out) const; using Symbol::value; private: virtual std::ostream &insert(std::ostream &out) const; virtual size_t v_value() const; virtual FirstSet const &v_firstSet() const; std::ostream &insName(std::ostream &out) const; }; inline bool NonTerminal::notUsed() { return s_unused; } inline bool NonTerminal::notDefined() { return s_undefined; } inline std::ostream &NonTerminal::plainName(std::ostream &out) const { return out << name(); } inline std::ostream &NonTerminal::nameAndFirstset(std::ostream &out) const { return insName(out) << d_first; } inline std::ostream &NonTerminal::value(std::ostream &out) const { return out << std::setw(3) << v_value(); } inline void NonTerminal::setFirstNr(size_t nr) { s_number = nr; } inline void NonTerminal::setNr(NonTerminal *np) { np->d_nr = s_number++; } inline size_t NonTerminal::counter() { return s_counter; } inline void NonTerminal::resetCounter() { s_counter = 0; } inline void NonTerminal::setNonTerminal(NonTerminal *nonTerminal) { nonTerminal->setType(NON_TERMINAL); } inline size_t NonTerminal::firstSize() const { return d_first.setSize(); } inline std::set const &NonTerminal::firstTerminals() const { return d_first.set(); } inline void NonTerminal::addProduction(Production *next) { d_production.push_back(next); } inline size_t NonTerminal::nProductions() const { return d_production.size(); } inline void NonTerminal::addEpsilon() { d_first.addEpsilon(); } inline Production::Vector &NonTerminal::productions() { return d_production; } inline Production::Vector const &NonTerminal::productions() const { return d_production; } inline NonTerminal *NonTerminal::downcast(Symbol *sp) { return dynamic_cast(sp); } inline NonTerminal const *NonTerminal::downcast(Symbol const *sp) { return dynamic_cast(sp); } inline void NonTerminal::inserter(std::ostream &(NonTerminal::*insertPtr) 
(std::ostream &out) const) { s_insertPtr = insertPtr; } // operator<< is already available through Element #endif bisonc++-4.13.01/nonterminal/v.cc0000644000175000017500000000042112633316117015351 0ustar frankfrank#include "nonterminal.ih" size_t NonTerminal::v_value() const { return d_nr; } FirstSet const &NonTerminal::v_firstSet() const { return d_first; } std::ostream &NonTerminal::insert(std::ostream &out) const { return (this->*NonTerminal::s_insertPtr)(out); } bisonc++-4.13.01/nonterminal/insname.cc0000644000175000017500000000037012633316117016541 0ustar frankfrank#include "nonterminal.ih" std::ostream &NonTerminal::insName(std::ostream &out) const { std::string const &nName = name(); return out << " " << nName << left << setw(max(10 - static_cast(nName.length()), 1)) << ": "; } bisonc++-4.13.01/options/0000755000175000017500000000000012634776615013763 5ustar frankfrankbisonc++-4.13.01/options/setaccessorvariables.cc0000644000175000017500000000026012633316117020461 0ustar frankfrank#include "options.ih" void Options::setAccessorVariables() { setBooleans(); setBasicStrings(); setPathStrings(); setQuotedStrings(); setSkeletons(); } bisonc++-4.13.01/options/options.h0000644000175000017500000002650012633316117015614 0ustar frankfrank#ifndef INCLUDED_OPTIONS_ #define INCLUDED_OPTIONS_ #include #include namespace FBB { class Arg; } class Options { enum PathType { FILENAME, PATHNAME }; FBB::Arg &d_arg; std::string const *d_matched; std::string d_fileName; // the name of the current file size_t d_lineNr; // the current line nr. bool d_debug; bool d_errorVerbose; bool d_flex; bool d_lines; bool d_lspNeeded; bool d_printTokens; bool d_polymorphic; bool d_strongTags; size_t d_requiredTokens; std::set d_warnOptions; // contains the names of options // for which Generator may warn // if specified for already // existing .h or .ih files std::string d_baseClassHeader; std::string d_baseClassSkeleton; std::string d_polymorphicInline; std::string d_polymorphicInlineSkeleton; std::string d_polymorphicSkeleton; std::string d_classHeader; std::string d_className; std::string d_classSkeleton; std::string d_genericFilename; std::string d_implementationHeader; std::string d_implementationSkeleton; std::string d_locationDecl; std::string d_nameSpace; std::string d_parsefunSkeleton; std::string d_parsefunSource; std::string d_preInclude; std::string d_scannerInclude; std::string d_scannerMatchedTextFunction; std::string d_scannerTokenFunction; std::string d_scannerClassName; std::string d_skeletonDirectory; std::string d_stackDecl; std::string d_targetDirectory; std::string d_verboseName; static char s_defaultBaseClassSkeleton[]; static char s_defaultPolymorphicInlineSkeleton[]; static char s_defaultPolymorphicSkeleton[]; static char s_defaultClassName[]; static char s_defaultClassSkeleton[]; static char s_defaultImplementationSkeleton[]; static char s_defaultParsefunSkeleton[]; static char s_defaultParsefunSource[]; static char s_defaultSkeletonDirectory[]; static char s_defaultTargetDirectory[]; static char s_defaultScannerClassName[]; static char s_defaultScannerMatchedTextFunction[]; static char s_defaultScannerTokenFunction[]; static char s_YYText[]; static char s_yylex[]; static Options *s_options; public: static Options &instance(); Options(Options const &other) = delete; void setMatched(std::string const &matched); void setAccessorVariables(); bool specified(std::string const &option) const; void setBaseClassHeader(); void setBaseClassSkeleton(); void setClassHeader(); void setClassName(); void 
setDebug(); void setErrorVerbose(); void setFlex(); void setGenericFilename(); void setImplementationHeader(); void setLocationDecl(std::string const &block); void setLspNeeded(); void setLtype(); void setNamespace(); void setParsefunSource(); void setPolymorphicDecl(); void setPolymorphicInlineSkeleton(); void setPolymorphicSkeleton(); void setPrintTokens(); void setPreInclude(); void setRequiredTokens(size_t nRequiredTokens); void setScannerClassName(); void setScannerInclude(); void setScannerMatchedTextFunction(); void setScannerTokenFunction(); void setSkeletonDirectory(); void setStype(); void setTargetDirectory(); void setUnionDecl(std::string const &block); void setVerbosity(); // Prepare Msg for verbose output void unsetLines(); void unsetStrongTags(); void showFilenames() const; bool printTokens() const; bool debug() const; bool errorVerbose() const; bool lines() const; bool lspNeeded() const; bool polymorphic() const; bool strongTags() const; size_t requiredTokens() const; std::string const &baseClassSkeleton() const; std::string const &baseClassHeader() const; std::string baseclassHeaderName() const; std::string const &classHeader() const; std::string const &className() const; std::string const &classSkeleton() const; std::string const &implementationHeader() const; std::string const &implementationSkeleton() const; std::string const <ype() const; std::string const &nameSpace() const; std::string const &parseSkeleton() const; std::string const &parseSource() const; std::string const &preInclude() const; std::string const &polymorphicInlineSkeleton() const; std::string const &polymorphicSkeleton() const; std::string const &scannerClassName() const; std::string const &scannerInclude() const; std::string const &scannerMatchedTextFunction() const; std::string const &scannerTokenFunction() const; std::string const &skeletonDirectory() const; std::string const &stype() const; static std::string undelimit(std::string const &str); private: Options(); // called by setAccessorVariables() void setBooleans(); void setBasicStrings(); void setOpt(std::string *destVar, char const *opt, std::string const &defaultSpec); void setQuotedStrings(); void setPathStrings(); // called by setAccessorVariables, // called by parser.cleanup(). // inspected Option values // may NOT have directory separators. 
void setSkeletons(); // undelimit and if append append / if missing void cleanDir(std::string &dir, bool append); void addIncludeQuotes(std::string &target); std::string const &accept(PathType pathType, char const *declTxt); void assign(std::string *target, PathType pathType, char const *declTxt); void setPath(std::string *dest, int optChar, std::string const &defaultFilename, char const *defaultSuffix, char const *optionName); bool isFirstStypeDefinition() const; }; inline bool Options::debug() const { return d_debug; } inline std::string const &Options::baseClassHeader() const { return d_baseClassHeader; } inline std::string const &Options::skeletonDirectory() const { return d_skeletonDirectory; } inline std::string const &Options::baseClassSkeleton() const { return d_baseClassSkeleton; } inline std::string const &Options::classHeader() const { return d_classHeader; } inline std::string const &Options::className() const { return d_className; } inline std::string const &Options::classSkeleton() const { return d_classSkeleton; } inline bool Options::errorVerbose() const { return d_errorVerbose; } inline std::string const &Options::implementationHeader() const { return d_implementationHeader; } inline std::string const &Options::implementationSkeleton() const { return d_implementationSkeleton; } inline bool Options::lines() const { return d_lines; } inline bool Options::lspNeeded() const { return d_lspNeeded; } inline bool Options::polymorphic() const { return d_polymorphic; } inline std::string const &Options::ltype() const { return d_locationDecl; } inline std::string const &Options::nameSpace() const { return d_nameSpace; } inline std::string const &Options::parseSkeleton() const { return d_parsefunSkeleton; } inline std::string const &Options::parseSource() const { return d_parsefunSource; } inline std::string const &Options::polymorphicInlineSkeleton() const { return d_polymorphicInlineSkeleton; } inline std::string const &Options::polymorphicSkeleton() const { return d_polymorphicSkeleton; } inline std::string const &Options::preInclude() const { return d_preInclude; } inline bool Options::printTokens() const { return d_printTokens; } inline size_t Options::requiredTokens() const { return d_requiredTokens; } inline std::string const &Options::scannerClassName() const { return d_scannerClassName; } inline std::string const &Options::scannerInclude() const { return d_scannerInclude; } inline std::string const &Options::scannerMatchedTextFunction() const { return d_scannerMatchedTextFunction; } inline std::string const &Options::scannerTokenFunction() const { return d_scannerTokenFunction; } inline void Options::setBaseClassHeader() { assign(&d_baseClassHeader, FILENAME, "baseclass-header"); } inline void Options::setBaseClassSkeleton() { assign(&d_baseClassSkeleton, PATHNAME, "baseclass-skeleton"); } inline void Options::setClassHeader() { assign(&d_classHeader, FILENAME, "class-header"); } inline void Options::setClassName() { assign(&d_className, FILENAME, "class-name"); } inline void Options::setErrorVerbose() { d_errorVerbose = true; } inline void Options::setDebug() { d_debug = true; } inline void Options::setGenericFilename() { assign(&d_genericFilename, FILENAME, "filenames"); } inline void Options::setFlex() { d_flex = true; } inline void Options::setImplementationHeader() { assign(&d_implementationHeader, FILENAME, "implementation-header"); } inline void Options::unsetLines() { d_lines = false; } inline void Options::setLspNeeded() { d_lspNeeded = true; } inline void 
Options::setMatched(std::string const &matched) { d_matched = &matched; } inline void Options::setNamespace() { assign(&d_nameSpace, FILENAME, "namespace"); } inline void Options::setParsefunSource() { assign(&d_parsefunSource, FILENAME, "parsefun-source"); } inline void Options::setPolymorphicInlineSkeleton() { assign(&d_polymorphicInlineSkeleton, PATHNAME, "polymorphic-inline-skeleton"); } inline void Options::setPolymorphicSkeleton() { assign(&d_polymorphicSkeleton, PATHNAME, "polymorphic-skeleton"); } inline void Options::setPreInclude() { assign(&d_preInclude, PATHNAME, "baseclass-preinclude"); } inline void Options::setScannerClassName() { assign(&d_scannerClassName, FILENAME, "scanner-class-name"); } inline void Options::setScannerInclude() { assign(&d_scannerInclude, PATHNAME, "scanner"); } inline void Options::setScannerMatchedTextFunction() { assign(&d_scannerMatchedTextFunction, FILENAME, "scanner-matched-text-function"); } inline void Options::setScannerTokenFunction() { assign(&d_scannerTokenFunction, FILENAME, "scanner-token-function"); } inline void Options::setSkeletonDirectory() { assign(&d_skeletonDirectory, PATHNAME, "skeleton-directory (-S)"); } inline void Options::setTargetDirectory() { assign(&d_targetDirectory, PATHNAME, "target-directory"); } inline std::string const &Options::stype() const { return d_stackDecl; } inline bool Options::strongTags() const { return d_strongTags; } inline void Options::unsetStrongTags() { d_strongTags = false; } inline bool Options::specified(std::string const &option) const { return d_warnOptions.find(option) != d_warnOptions.end(); } #endif bisonc++-4.13.01/options/baseclassheadername.cc0000644000175000017500000000043012633316117020223 0ustar frankfrank#include "options.ih" std::string Options::baseclassHeaderName() const { size_t pos = d_baseClassHeader.rfind('/'); return pos == string::npos ? 
d_baseClassHeader : d_baseClassHeader.substr(pos + 1); } bisonc++-4.13.01/options/instance.cc0000644000175000017500000000021312633316117016054 0ustar frankfrank#include "options.ih" Options &Options::instance() { if (s_options == 0) s_options = new Options(); return *s_options; } bisonc++-4.13.01/options/setltype.cc0000644000175000017500000000057712633316117016136 0ustar frankfrank#include "options.ih" void Options::setLtype() { if (d_locationDecl.size()) emsg << "%location-struct or %ltype multiply declared" << endl; else if (d_matched->find(';') != string::npos) emsg << "`;' in %ltype type-definition `" << *d_matched << '\'' << endl; else d_locationDecl = "typedef " + *d_matched += " LTYPE__;\n"; } bisonc++-4.13.01/options/options1.cc0000644000175000017500000000050212633316117016025 0ustar frankfrank#include "options.ih" Options::Options() : d_arg(Arg::instance()), d_debug(false), d_errorVerbose(false), d_flex(false), d_lines(true), d_lspNeeded(false), d_printTokens(false), d_polymorphic(false), d_strongTags(true), d_requiredTokens(0), d_verboseName("(not requested)") {} bisonc++-4.13.01/options/setquotedstrings.cc0000644000175000017500000000033112633316117017700 0ustar frankfrank#include "options.ih" void Options::setQuotedStrings() { d_arg.option(&d_preInclude, 'H'); addIncludeQuotes(d_preInclude); d_arg.option(&d_scannerInclude, 's'); addIncludeQuotes(d_scannerInclude); } bisonc++-4.13.01/options/options.ih0000644000175000017500000000023612633316117015763 0ustar frankfrank#include "options.h" #include #include #include #include using namespace std; using namespace FBB; bisonc++-4.13.01/options/isfirststypedef.cc0000644000175000017500000000035712633316117017510 0ustar frankfrank#include "options.ih" bool Options::isFirstStypeDefinition() const { if (d_stackDecl.empty()) return true; emsg << "Only one of %polymorphic, %stype, or %union can be specified" << endl; return false; } bisonc++-4.13.01/options/setrequiredtokens.cc0000644000175000017500000000034712633316117020040 0ustar frankfrank#include "options.ih" void Options::setRequiredTokens(size_t nRequiredTokens) { if (d_requiredTokens != 0) emsg << "%required-tokens multiply specified " << endl; else d_requiredTokens = nRequiredTokens; } bisonc++-4.13.01/options/accept.cc0000644000175000017500000000041412633316117015512 0ustar frankfrank#include "options.ih" std::string const &Options::accept(PathType pathType, char const *declTxt) { if (pathType == FILENAME && d_matched->find('/') != string::npos) emsg << '`' << declTxt << "' directive: no path names" << endl; return *d_matched; } bisonc++-4.13.01/options/setlocationdecl.cc0000644000175000017500000000060412633316117017430 0ustar frankfrank#include "options.ih" // copy the location declaration into `d_locationDecl' as the // definition of LSTYPE. 
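// (Editor's illustration, not part of the original source: assuming a grammar
// containing a directive such as `%location-struct { int line; int column; }',
// the assignment below stores the text
//     struct LTYPE__
//     { int line; int column; };
// in d_locationDecl and sets d_lspNeeded, so the generated parser maintains a
// location-value stack. The member names are only an example.)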
void Options::setLocationDecl(std::string const &block) { if (!d_locationDecl.empty()) emsg << "%location-struct or %ltype multiply specified" << endl; else { d_locationDecl = "struct LTYPE__\n" + block += ";\n"; d_lspNeeded = true; } } bisonc++-4.13.01/options/addincludequotes.cc0000644000175000017500000000044112633316117017610 0ustar frankfrank#include "options.ih" void Options::addIncludeQuotes(string &target) { if ( target.size() // target specified && target.find_first_of("<\"") != 0 // but no initial quotes ) target.insert(0, 1, '"') += '"'; } bisonc++-4.13.01/options/data.cc0000644000175000017500000000235112633316117015166 0ustar frankfrank// Recompile this file if the skeleton locations in INSTALL.im change #include "options.ih" #include "SKEL" char Options::s_defaultSkeletonDirectory[] = _Skel_; char Options::s_defaultClassName[] = "Parser"; char Options::s_defaultParsefunSource[] = "parse.cc"; char Options::s_defaultBaseClassSkeleton[] = "bisonc++base.h"; char Options::s_defaultClassSkeleton[] = "bisonc++.h"; char Options::s_defaultImplementationSkeleton[] = "bisonc++.ih"; char Options::s_defaultParsefunSkeleton[] = "bisonc++.cc"; char Options::s_defaultPolymorphicSkeleton[] = "bisonc++polymorphic"; char Options::s_defaultPolymorphicInlineSkeleton[] = "bisonc++polymorphic.inline"; // the defaults are flexc++-defaults. // use --flex or %flex or explicit options to use flex defaults char Options::s_defaultScannerClassName[] = "Scanner"; char Options::s_defaultScannerMatchedTextFunction[] = "d_scanner.matched()"; char Options::s_YYText[] = "d_scanner.YYText()"; char Options::s_defaultScannerTokenFunction[] = "d_scanner.lex()"; char Options::s_yylex[] = "d_scanner.yylex()"; Options *Options::s_options = 0; bisonc++-4.13.01/options/setopt.cc0000644000175000017500000000054712633316117015600 0ustar frankfrank#include "options.ih" void Options::setOpt(string *destVar, char const *option, string const &defaultSpec) { string str; d_arg.option(&str, option); if (not str.empty()) { d_warnOptions.insert(option); *destVar = undelimit(str); } else if (destVar->empty()) *destVar = defaultSpec; } bisonc++-4.13.01/options/setpathstrings.cc0000644000175000017500000000100612633316117017333 0ustar frankfrank#include "options.ih" void Options::setPathStrings() { setPath(&d_baseClassHeader, 'b', d_genericFilename, "base.h", "baseclass-header"); setPath(&d_classHeader, 'c', d_genericFilename, ".h", "class-header"); setPath(&d_implementationHeader, 'i', d_genericFilename, ".ih", "implementation-header"); setPath(&d_parsefunSource, 'p', s_defaultParsefunSource, "", "parsefun-source"); } bisonc++-4.13.01/options/cleandir.cc0000644000175000017500000000024312633316117016034 0ustar frankfrank#include "options.ih" void Options::cleanDir(string &dir, bool append) { dir = undelimit(dir); if (append && *dir.rbegin() != '/') dir += '/'; } bisonc++-4.13.01/options/setverbosity.cc0000644000175000017500000000064412633316117017022 0ustar frankfrank#include "options.ih" void Options::setVerbosity() { if ( d_arg.option('V') || d_arg.option(0, "construction") || d_arg.option(0, "own-debug") ) { // determine the output filename (d_verboseName = d_arg[0]) += ".output"; imsg.reset(d_verboseName); } else imsg.off(); // suppress verbose messages } bisonc++-4.13.01/options/showfilenames.cc0000644000175000017500000000163012633316117017120 0ustar frankfrank#include "options.ih" void Options::showFilenames() const { if (!d_arg.option(0, "show-filenames")) return; cout << "\n" "SKELETONS AND FILENAMES:\n" " Base class skeleton:\n" 
"\t`" << d_baseClassSkeleton << "'\n" " Class skeleton:\n" "\t`" << d_classSkeleton << "'\n" " Implementation header skeleton:\n" "\t`" << d_implementationSkeleton << "'\n" " Parser implementation skeleton:\n" "\t`" << d_parsefunSkeleton << "'\n" "\n" " Base class header: `" << d_baseClassHeader << "'\n" " Class header: `" << d_classHeader << "'\n" " Implementation header: `" << d_implementationHeader << "'\n" " Parser Implementation: `" << d_parsefunSource << "'\n" " Verbose grammar description: `" << d_verboseName << "'\n" "\n"; } bisonc++-4.13.01/options/setpath2.4.040000644000175000017500000001225212633316117016010 0ustar frankfrank#include "options.ih" // options overrule directives // path specifications overrule target specifications // paths in directives and a targetdir option results in a warning // -t: --target-directory options was specified // %t: %target-directory directive was specified // -d: --destination option was specified // -dp: --destination option using a path (/) was specified // %d: %destination directive was specified // %dp: %destination directive using a path (/) was specified // D: default destination value // d: destination directive value // o: destination option value // t: target-directory value // -: combination cannot occur (e.g., both -d and -dp) // W: provide warning about ignored t // No destination option: // // no target spec: // // -t %t -d -dp %d %dp use: action: // 0 0 0 0 D 1 // 0 0 0 1 d 2 // 0 0 1 0 d 2 // 0 0 1 1 - // // target spec: // // -t %t -d -dp %d %dp use: action: // 0 1 0 0 t + D 1 // 0 1 0 1 d 2 // 0 1 1 0 t + d 2 // 0 1 1 1 - // // -t %t -d -dp %d %dp use: action: // 1 0 0 0 t + D 1 // 1 0 0 1 d (W) 3 // 1 0 1 0 t + d 2 // 1 0 1 1 - // // -t %t -d -dp %d %dp use: action: // 1 1 0 0 t + D 1 // 1 1 0 1 d (W) 3 // 1 1 1 0 t + d 2 // 1 1 1 1 - // // Destination option was used: // // -t %t -d -dp %d %dp use: action: // 0 0 0 1 0 0 o 5 // 0 0 0 1 0 1 o 5 // 0 0 0 1 1 0 o 5 // 0 0 0 1 1 1 - // // -t %t -d -dp %d %dp use: action: // 0 0 1 0 0 0 o 5 // 0 0 1 0 0 1 o 5 // 0 0 1 0 1 0 o 5 // 0 0 1 0 1 1 - // // -t %t -d -dp %d %dp use: action: // 0 0 1 1 0 0 - // 0 0 1 1 0 1 - // 0 0 1 1 1 0 - // 0 0 1 1 1 1 - // // -t %t -d -dp %d %dp use: action: // 0 1 0 1 0 0 o 5 // 0 1 0 1 0 1 o 5 // 0 1 0 1 1 0 o 5 // 0 1 0 1 1 1 - // // -t %t -d -dp %d %dp use: action: // 0 1 1 0 0 0 o 5 // 0 1 1 0 0 1 o 5 // 0 1 1 0 1 0 o 5 // 0 1 1 0 1 1 - // // -t %t -d -dp %d %dp use: action: // 0 1 1 1 0 0 - // 0 1 1 1 0 1 - // 0 1 1 1 1 0 - // 0 1 1 1 1 1 - // // -t %t -d -dp %d %dp use: action: // 1 0 0 1 0 0 o 5 // 1 0 0 1 0 1 o 5 // 1 0 0 1 1 0 o 5 // 1 0 0 1 1 1 - // // -t %t -d -dp %d %dp use: action: // 1 0 1 0 0 0 t + o 4 // 1 0 1 0 0 1 t + o 4 // 1 0 1 0 1 0 t + o 4 // 1 0 1 0 1 1 - // // -t %t -d -dp %d %dp use: action: // 1 0 1 1 0 0 - // 1 0 1 1 0 1 - // 1 0 1 1 1 0 - // 1 0 1 1 1 1 - // // -t %t -d -dp %d %dp use: action: // 1 1 0 1 0 0 o 5 // 1 1 0 1 0 1 o 5 // 1 1 0 1 1 0 o 5 // 1 1 0 1 1 1 - // // -t %t -d -dp %d %dp use: action: // 1 1 1 0 0 0 t + o 4 // 1 1 1 0 0 1 t + o 4 // 1 1 1 0 1 0 t + o 4 // 1 1 1 0 1 1 - // // -t %t -d -dp %d %dp use: action: // 1 1 1 1 0 0 - // 1 1 1 1 0 1 - // 1 1 1 1 1 0 - // 1 1 1 1 1 1 - void Options::setPath(string *dest, int optChar, bool targetDirOption, char const *optionName, string const &defaultFilename, char const *suffix) { if (not d_arg.option(dest, optChar)) // no dest. 
option { // path specification used if (dest->find('/') != string::npos) { if (targetDirOption) // #3 wmsg << "--target-directory ignored for %" << optionName << ' ' << *dest << endl; } else *dest = d_targetDirectory + // may be empty ( dest->empty() ? // no dest. specified defaultFilename + suffix // #1 : *dest // #2 ); } else // targetDirOption + dest. option w/o path provided: { if (targetDirOption && dest->find('/') == string::npos) *dest = d_targetDirectory + *dest; // # 4 } // otherwise: // # 5 cleanDir(*dest, false); } bisonc++-4.13.01/options/setuniondecl.cc0000644000175000017500000000041312633316117016746 0ustar frankfrank#include "options.ih" // copy the union declaration into ostringstream `union_decl' as the // definition of LSTYPE. void Options::setUnionDecl(std::string const &block) { if (isFirstStypeDefinition()) d_stackDecl = "union STYPE__\n" + block + ";\n"; } bisonc++-4.13.01/options/setpath2.cc0000644000175000017500000000361612633316117016014 0ustar frankfrank#include "options.ih" // 1 2 3 4 // --target %target --dest %dest action: (D: default) // ------------------------------------------------------ // 0 0 1 0 2 t+d (t may be empty) // 0 0 1 1 2 // 0 1 1 0 2 // 0 1 1 1 2 // 1 0 1 0 2 // 1 0 1 1 2 // 1 1 1 0 2 // 1 1 1 1 2 // 2 // 0 0 0 1 2 // 0 1 0 1 2 // 1 0 0 1 2 // 1 1 0 1 2 // // 1 0 0 0 1 t+D (t may be empty) // 1 1 0 0 1 // 0 1 0 0 1 // 0 0 0 0 1 // ------------------------------------------------------- void Options::setPath(string *dest, int optChar, string const &defaultFilename, char const *defaultSuffix, char const *optionName) { if ( d_arg.option(dest, optChar) // try to get the option && dest->find('/') != string::npos ) emsg << "`--" << optionName << "' option: no path names" << endl; if (dest->empty()) // no value in dest then use a default *dest = defaultFilename + defaultSuffix; // filename and suffix // d_warnOptions.insert(optionName); *dest = d_targetDirectory + *dest; // prefix the target (may be empty) } bisonc++-4.13.01/options/undelimit.cc0000644000175000017500000000067112633316117016252 0ustar frankfrank#include "options.ih" string Options::undelimit(std::string const &str) { string ret = String::unescape( // no initial " or < then use str string("<\"").find(str[0]) == string::npos ? str : // else remove the 1st and last char str.substr(1, str.size() - 2) ); return ret; } bisonc++-4.13.01/options/setpolymorphicdecl.cc0000644000175000017500000000040312633316117020162 0ustar frankfrank#include "options.ih" // define the polymorphic semantic value type, embedded in a shared_ptr void Options::setPolymorphicDecl() { if (isFirstStypeDefinition()) d_stackDecl = " typedef Meta__::SType STYPE__;\n"; d_polymorphic = true; } bisonc++-4.13.01/options/setstype.cc0000644000175000017500000000050312633316117016132 0ustar frankfrank#include "options.ih" void Options::setStype() { if (not isFirstStypeDefinition()) return; if (d_matched->find(';') == string::npos) d_stackDecl = "typedef " + *d_matched + " STYPE__;\n"; else emsg << "`;' in %stype type-definition `" << *d_matched << '\'' << endl; } bisonc++-4.13.01/options/frame0000644000175000017500000000004512633316117014761 0ustar frankfrank#include "options.ih" Options:: { } bisonc++-4.13.01/options/setbasicstrings.cc0000644000175000017500000000335312633316117017467 0ustar frankfrank#include "options.ih" void Options::setBasicStrings() { // classname/namespace can only be followed by IDENTIFIERs, // so no undelimit is required. 
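// (Editor's note, not part of the original source: the options handled below
// share one pattern: when a command-line option is present its long name is
// inserted into d_warnOptions -- per options.h the Generator may then warn if
// such an option is specified while the generated .h/.ih files already
// exist -- and when neither an option nor a directive provided a value, the
// still-empty data member falls back to its compiled-in default.)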
if (d_arg.option(&d_nameSpace, 'n')) d_warnOptions.insert("namespace"); if (d_arg.option(&d_className, "class-name")) d_warnOptions.insert("class-name"); else if (d_className.empty()) d_className = s_defaultClassName; if (d_arg.option(&d_scannerClassName, "scanner-class-name")) d_warnOptions.insert("scanner-class-name"); else if (d_scannerClassName.empty()) d_scannerClassName = s_defaultScannerClassName; setOpt(&d_scannerTokenFunction, "scanner-token-function", d_flex ? s_yylex : s_defaultScannerTokenFunction); setOpt(&d_scannerMatchedTextFunction, "scanner-matched-text-function", d_flex ? s_YYText : s_defaultScannerMatchedTextFunction); string nTokens; if (d_arg.option(&nTokens, "required-tokens")) d_requiredTokens = stoul(nTokens); d_arg.option(&d_genericFilename, 'f'); if (d_genericFilename.empty()) d_genericFilename = d_className; d_arg.option(&d_skeletonDirectory, 'S'); if (d_skeletonDirectory.empty()) d_skeletonDirectory = s_defaultSkeletonDirectory; cleanDir(d_skeletonDirectory, true); d_arg.option(&d_targetDirectory, "target-directory"); if (not d_targetDirectory.empty()) cleanDir(d_targetDirectory, true); } bisonc++-4.13.01/options/setbooleans.cc0000644000175000017500000000067212633316117016577 0ustar frankfrank#include "options.ih" void Options::setBooleans() { if (d_arg.option(0, "debug")) d_debug = true; if (d_arg.option('t')) { d_warnOptions.insert("print-tokens"); d_printTokens = true; } if (d_arg.option(0, "error-verbose")) d_errorVerbose = true; if (d_arg.option(0, "flex")) d_flex = d_scannerInclude.size(); if (d_arg.option(0, "no-lines")) d_lines = false; } bisonc++-4.13.01/options/assign.cc0000644000175000017500000000054412633316117015543 0ustar frankfrank#include "options.ih" void Options::assign(std::string *target, PathType pathType, char const *declTxt) { if (target->empty()) { d_warnOptions.insert(declTxt); *target = accept(pathType, declTxt); } else emsg << "%" << declTxt << " multiply specified" << endl; } bisonc++-4.13.01/options/setprinttokens.cc0000644000175000017500000000020012633316117017340 0ustar frankfrank#include "options.ih" void Options::setPrintTokens() { d_printTokens = true; d_warnOptions.insert("print-tokens"); } bisonc++-4.13.01/options/setskeletons.cc0000644000175000017500000000173612633316117017006 0ustar frankfrank#include "options.ih" void Options::setSkeletons() { if (!d_arg.option(&d_baseClassSkeleton, 'B')) d_baseClassSkeleton = d_skeletonDirectory + s_defaultBaseClassSkeleton; if (!d_arg.option(&d_classSkeleton, 'C')) d_classSkeleton = d_skeletonDirectory + s_defaultClassSkeleton; if (!d_arg.option(&d_implementationSkeleton, 'I')) d_implementationSkeleton = d_skeletonDirectory + s_defaultImplementationSkeleton; if (!d_arg.option(&d_parsefunSkeleton, 'P')) d_parsefunSkeleton = d_skeletonDirectory + s_defaultParsefunSkeleton; if (!d_arg.option(&d_polymorphicSkeleton, 'M')) d_polymorphicSkeleton = d_skeletonDirectory + s_defaultPolymorphicSkeleton; if (!d_arg.option(&d_polymorphicInlineSkeleton, 'm')) d_polymorphicInlineSkeleton = d_skeletonDirectory + s_defaultPolymorphicInlineSkeleton; } bisonc++-4.13.01/parseconstruction0000755000175000017500000000133412633316117015766 0ustar frankfrank#!/bin/bash if [ $# == 0 ] ; then echo ls documentation/regression echo echo provide directory name under documentation/regression to process echo the parser directory is copied to tmp/bin and parse.cc and echo Parserbase.h generated with the current bisonc++ are compared to echo those generated with tmp/bin/binary echo exit 0 fi rm -rf tmp/bin/parser cp -r 
documentation/regression/$1/parser tmp/bin cd tmp/bin/parser echo bisonc++ -i parser.ih grammar mv parse.cc p.cc mv *base.h base.h echo ../binary grammar echo echo 'diff of parse.cc < by ./binary, > by bisonc++' read diff parse.cc p.cc echo echo 'diff of *base.h < by ./binary, > by bisonc++' read diff *serbase.h base.h bisonc++-4.13.01/parser/0000755000175000017500000000000012633316117013546 5ustar frankfrankbisonc++-4.13.01/parser/usesymbol.cc0000644000175000017500000000050612633316117016100 0ustar frankfrank#include "parser.ih" Symbol *Parser::useSymbol() { if (Symbol *sp = d_symtab.lookup(d_matched)) return sp; NonTerminal *np = new NonTerminal(d_matched); d_symtab.insert ( Symtab::value_type ( d_matched, d_rules.insert(np) ) ); return np; } bisonc++-4.13.01/parser/defineterminal.cc0000644000175000017500000000572012633316117017047 0ustar frankfrank#include "parser.ih" // Only with %type a symbol may already exist. // // Symbols are defined by %token, %left etc. as terminals only. // // At a %type declaration, their types can be set, but only if their types // haven't been set yet. // // A %type can also define an undetermined nonterminal: eventually // it must be defined as a nonterminal by a rule. // // If a %type is followed by a %token etc. then those latter directives may // change the kind to symbol into terminal, but not their type. // // This function is called by the `symbol' rule // Unknown symbol: // // Define the terminal as UNDETERMINED with %type, otherwise as `type' // // known symbol: // --------------------------------------------------------------------- // Second: // --------------------------------------------------------- // %token %type (-> d_typeDirective) // --------------------------------------------------------------------- // First: // (%type -> UNDETERMINED) // --------------------------------------------------------------------- // %token mult.def'd OK, unless token's %type is specified // and different from %type. If equal: warn // // %type OK, same as OK, same as upper right // upper right // --------------------------------------------------------------------- void Parser::defineTerminal(string const &name, Symbol::Type type) { if (Symbol *sp = d_symtab.lookup(name)) // known symbol? { // known symbol: upper left alternative if (not sp->isUndetermined() && not d_typeDirective) { multiplyDefined(sp); return; } if (sp->sType().empty()) sp->setStype(d_field); else if (sp->sType() == d_field) wmsg << '`' << name << "' type repeatedly specified as <" << d_field << ">" << endl; else // type clash emsg << "can't redefine type <" << sp->sType() << "> of `" << name << "' to <" << d_field << ">" << endl; } else // new symbol: insert it as (provisional or definitive) terminal d_symtab.insert ( Symtab::value_type ( name, d_rules.insert ( new Terminal(name, d_typeDirective ? Symbol::UNDETERMINED : type, type == Symbol::CHAR_TERMINAL ? 
d_scanner.number() : Terminal::DEFAULT, d_association, d_field), d_matched ) ) ); } bisonc++-4.13.01/parser/returnsingle.cc0000644000175000017500000000024512633316117016577 0ustar frankfrank#include "parser.ih" void Parser::returnSingle(AtDollar const &atd) const { if (atd.callsMember()) return; semTag("field", atd, &Parser::noID); } bisonc++-4.13.01/parser/nexthiddenname.cc0000644000175000017500000000032612633316117017051 0ustar frankfrank#include "parser.ih" string Parser::nextHiddenName() { s_hiddenName.clear(); s_hiddenName.str(""); s_hiddenName << "#" << setfill('0') << setw(4) << ++s_nHidden; return s_hiddenName.str(); } bisonc++-4.13.01/parser/extractindex.cc0000644000175000017500000000025112633316117016555 0ustar frankfrank#include "parser.ih" size_t Parser::extractIndex(int *idx, size_t pos) { istringstream is(d_scanner.block().substr(pos)); is >> *idx; return is.tellg(); } bisonc++-4.13.01/parser/definenonterminal.cc0000644000175000017500000000062412633316117017560 0ustar frankfrank#include "parser.ih" Symbol *Parser::defineNonTerminal(string const &name, string const &stype) { if (Symbol *sp = d_symtab.lookup(name)) { multiplyDefined(sp); return 0; } NonTerminal *np = new NonTerminal(name, stype); d_symtab.insert ( Symtab::value_type ( name, d_rules.insert(np) ) ); return np; } bisonc++-4.13.01/parser/parser.h0000644000175000017500000001642612633316117015224 0ustar frankfrank#ifndef Parser_h_included #define Parser_h_included // $insert baseclass #include "parserbase.h" // $insert scanner.h #include "../scanner/scanner.h" #include class NonTerminal; class Terminal; class Symbol; class Options; class AtDollar; namespace FBB { class Mstream; } #undef Parser class Parser: public ParserBase { // actions to taken given tokens returned by the lexical scanner typedef std::unordered_map ActionMap; typedef ActionMap::iterator Iterator; typedef ActionMap::value_type Value; enum SemType { SINGLE, UNION, POLYMORPHIC }; enum SemTag { NONE, AUTO, EXPLICIT, }; // data members that are self-explanatory are not explicitly // described here. FBB::Arg &d_arg; Options &d_options; // $insert scannerobject Scanner d_scanner; std::string const &d_matched; Rules &d_rules; Symtab d_symtab; std::string d_expect; std::string d_field; // %union field in specs. bool d_typeDirective; // true following %type SemType d_semType; // see set{union,polymorphic}decl.cc bool d_negativeDollarIndices; Terminal::Association d_association; // associations between type-identifiers and type-definitions of // polymorphic semantic values std::unordered_map d_polymorphic; static size_t s_nHidden; // number of hidden nonterminals static std::ostringstream s_hiddenName; static char s_semanticValue[]; // name of the semantic value variable // used by the generated parser static char s_semanticValueStack[]; // name of the semantic value stack // used by the generated parser static char s_locationValueStack[]; // name of the location value stack // used by the generated parser static char s_locationValue[]; // name of the location value variable // used by the generated parser (@0) static char const s_stype__[]; // generic semantic value for POLYMORPHIC public: Parser(Rules &rules); int parse(); void cleanup(); // do cleanup following parse(); std::unordered_map const &polymorphic() const; private: void addPolymorphic(std::string const &tag, std::string const &typeSpec); void addIncludeQuotes(std::string *target); // ensure ".." 
or <..> // around target name void checkEmptyBlocktype(); void checkFirstType(); bool noID(std::string const &) const; bool idOK(std::string const &) const; bool findTag(std::string const &tag) const; void returnSingle(AtDollar const &atd) const; std::string returnUnion(AtDollar const &atd) const; std::string returnPolymorphic(AtDollar const &atd) const; SemTag semTag(char const *label, AtDollar const &atd, bool (Parser::*testID)(std::string const &) const) const; Symbol *defineNonTerminal(std::string const &name, std::string const &stype); void definePathname(std::string *target); void defineTerminal(std::string const &name, Symbol::Type type); void defineTokenName(std::string const &name, bool hasValue); void expectRules(); void setExpectedConflicts(); void setLocationDecl(); void setNegativeDollarIndices(); void setPrecedence(int type); void setUnionDecl(); size_t extractIndex(int *idx, size_t pos); size_t extractType(std::string *type, size_t pos, Block &block); // handles a location-value stack // reference (@) in a received action // block void handleAtSign(Block &block, AtDollar const &atd, int nElements); // handles a semantic-value stack // reference ($) in a received action // block bool handleDollar(Block &block, AtDollar const &atd, int nElements); STYPE__ handleProductionElements(STYPE__ &first, STYPE__ const &second); void handleProductionElement(STYPE__ &last); void installAction(Block &block); int indexToOffset(int idx, int nElements) const; void multiplyDefined(Symbol const *sp); void nestedBlock(Block &block); // define inner block as pseudo N std::string nextHiddenName(); // may generate error or warning: void negativeIndex(AtDollar const &atd) const; void warnAutoOverride(AtDollar const &atd) const; void warnAutoIgnored(char const *typeOrField, AtDollar const &atd) const; void warnUntaggedValue(AtDollar const &atd) const; // generating emsgs: bool errIndexTooLarge(AtDollar const &atd, int nElements) const; void errNoSemantic(char const *label, AtDollar const &atd, std::string const &id) const; void setStart(); void setPolymorphicDecl(); void openRule(std::string const &ruleName); void predefine(Terminal const *terminal); // Used by Parser() to // pre-enter into d_symtab NonTerminal *requireNonTerminal(std::string const &name); bool substituteBlock(int nElements, Block &block); // saves the default $1 value // at the beginning of a mid-rule // action block (in substituteBlock) void saveDollar1(Block &block, int offset); Symbol *useSymbol(); Terminal *useTerminal(); void error(char const *msg); // called on (syntax) errors int lex(); // returns the next token from the // lexical scanner. 
void print(); // use, e.g., d_token, d_loc // support functions for parse(): void executeAction(int ruleNr); void errorRecovery(); int lookup(bool recovery); void nextToken(); void print__(); void exceptionHandler__(std::exception const &exc); // used in, e.g., handleDollar // to obtain # elements for // end- or mid-rule actions static int nComponents(int nElements); }; inline std::unordered_map const &Parser::polymorphic() const { return d_polymorphic; } #endif bisonc++-4.13.01/parser/warnautoignored.cc0000644000175000017500000000112312633316117017262 0ustar frankfrank#include "parser.ih" void Parser::warnAutoIgnored(char const *typeOrField, AtDollar const &atd) const { if (atd.nr() < 0) negativeIndex(atd); else { string const &autoType = d_rules.sType(); if (autoType.length()) { wmsg.setLineNr(atd.lineNr()); wmsg << "rule " << &d_rules.lastProduction() << ":\n" "\t\t`" << atd.text() << "' suppresses auto " << typeOrField << " `" << autoType << "' of `" << d_rules.name() << "'." << endl; } } } bisonc++-4.13.01/parser/errnosemantic.cc0000644000175000017500000000054012633316117016725 0ustar frankfrank#include "parser.ih" void Parser::errNoSemantic(char const *label, AtDollar const &atd, string const &id) const { emsg.setLineNr(atd.lineNr()); emsg << "rule " << &d_rules.lastProduction() << ":\n" "\t\tsemantic " << label << " `" << id << "' not defined (" << atd.text() << ")." << endl; } bisonc++-4.13.01/parser/handleatsign.cc0000644000175000017500000000152712633316117016523 0ustar frankfrank#include "parser.ih" // We're at a @ character, followed by a number, @1, @2, ... etc. @. // The number is the element number of a production rule // The @-return value is not specified in bison's documentation. Is is // d_loc__. @@ is replaced by d_loc__ void Parser::handleAtSign(Block &block, AtDollar const &atd, int nElements) { if (errIndexTooLarge(atd, nElements)) return; ostringstream os; if (atd.returnValue()) os << s_locationValue; else os << s_locationValueStack << "[" << indexToOffset(atd.nr(), nElements) << "]"; block.replace(atd.pos(), atd.length(), os.str()); if (!d_options.lspNeeded()) { wmsg << "@ used at line " << block.line() << ": %lsp-needed forced" << endl; d_options.setLspNeeded(); } } bisonc++-4.13.01/parser/expectrules.cc0000644000175000017500000000121112633316117016413 0ustar frankfrank#include "parser.ih" void Parser::expectRules() { d_scanner.clearBlock(); Terminal::resetPrecedence(); // Incremented terminal priority must be // reset to 0: any terminal char-tokens // seen below in the rules must again // receive the initial (0) priority // at the end, inspect all nonterminals. if there are any undetermined // nonterminals left, change them into true nonterminals. 
d_rules.setNonTerminalTypes(); // not shown in Bisonc++ 2.8.0: // lineMsg(imsg) << "Preamble (until %%) parsed" << endl; } bisonc++-4.13.01/parser/setstart.cc0000644000175000017500000000027612633316117015733 0ustar frankfrank#include "parser.ih" void Parser::setStart() { if (d_rules.startRule().size()) emsg << "%start multiply specified" << endl; else d_rules.setStartRule(d_matched); } bisonc++-4.13.01/parser/checkemptyblocktype.cc0000644000175000017500000000054412633316117020131 0ustar frankfrank#include "parser.ih" void Parser::checkEmptyBlocktype() { string const &stype = d_rules.sType(); if (stype.size()) // return type is required wmsg << "rule `" << &d_rules.lastProduction() << "': no " << d_rules.sType() << " value is returned from this " "empty production rule" << endl; } bisonc++-4.13.01/parser/README.dollar0000644000175000017500000000771712633316117015716 0ustar frankfrank Handling $-specifications: see semtag.cc ------------------------------------------------------------------------- Negative $-index e.g,, ($-1, S-1): ------------------------------------------------------------------------- specification: action: ------------------------------------------------------------------------- $-1 d_negativeDollarIndices or SINGLE or no auto assoc: no action else: warn STYPE__ is used $-1 err: no allowed for neg. indices ------------------------------------------------------------------------- ----------------------------------------------------------------------------- $$. or $i. ----------------------------------------------------------------------------- $$. and $i. are handled like $$ and $1, but the action `AUTO' (auto was specified, but $$. or $i. was used) the warnAutoIgnored warning is issued, and no field substitution takes place ----------------------------------------------------------------------------- auto: $: action: ----------------------------------------------------------------------------- id - NONE (AUTO fm semTag): warnAutoIgnored (all other combinations: as with $$ and $i, below) ----------------------------------------------------------------------------- ----------------------------------------------------------------------------- $$ or $i specifications (i >= 0): ----------------------------------------------------------------------------- auto: $: action: ----------------------------------------------------------------------------- - - NONE STYPE__ NONE SINGLE: err: not defined semType UNION: EXPLICIT POLYMORPHIC: existing tag: EXPLICIT otherwise: ERR not defined ----------------------------------------------------------------------------- STYPE__ - NONE STYPE__ NONE SINGLE: err: not defined semType UNION: EXPLICIT POLYMORPHIC: existing tag: EXPLICIT otherwise: ERR not defined ----------------------------------------------------------------------------- id OK for UNION, possibly OK for POLYMORHPHIC - AUTO STYPE__ NONE semType UNION: EXPLICIT POLYMORPHIC: existing tag: EXPLICIT + warn otherwise: ERR not defined ----------------------------------------------------------------------------- illegal id only possible for POLYMORHPHIC - ERR: not defined STYPE__ NONE semType UNION: EXPLICIT POLYMORPHIC: existing tag: EXPLICIT otherwise: ERR not defined ----------------------------------------------------------------------------- NONE: parse.cc does not use a field specification AUTO: replace the $-spec by d_val__.field or d_val__.get<>() from d_rules.sType() EXPLICIT: replace the $-spec by d_val__.field or d_val__.get<>() from atd.id() 
bisonc++-4.13.01/parser/extracttype.cc0000644000175000017500000000133512633316117016433 0ustar frankfrank#include "parser.ih" // expect or not, if not at a `<' character size_t Parser::extractType(string *type, size_t pos, Block &block) { if (pos >= block.length()) // block ends prematurely. throw 1; if (block[pos] != '<') // no explicit type return 0; size_t begin = pos + 1; // first char of the type // saw the opening bracket, find the `>' size_t end = block.find('>', begin); if (end == string::npos) // no `>' found throw 1; // caught as incomplete $ specification *type = block.substr(begin, end - begin); return end + 1 - pos; // length of specification } bisonc++-4.13.01/parser/preheaders.h0000644000175000017500000000035212633316117016041 0ustar frankfrank#ifndef _INCLUDED_PREHEADERS_H_ #define _INCLUDED_PREHEADERS_H_ #include #include #include #include #include #include "../rules/rules.h" #include "../symtab/symtab.h" #endif bisonc++-4.13.01/parser/parser1.cc0000644000175000017500000000077112633316117015437 0ustar frankfrank#include "parser.ih" Parser::Parser(Rules &rules) : d_arg(Arg::instance()), d_options(Options::instance()), d_scanner(d_arg[0]), d_matched(d_scanner.matched()), d_rules(rules), d_typeDirective(false), d_semType(SINGLE), d_negativeDollarIndices(false) { d_options.setMatched(d_matched); setDebug(d_arg.option(0, "own-debug")); d_scanner.setDebug(d_arg.option(0, "scanner-debug")); predefine(Rules::errorTerminal()); predefine(Rules::eofTerminal()); } bisonc++-4.13.01/parser/inc/0000755000175000017500000000000012633316117014317 5ustar frankfrankbisonc++-4.13.01/parser/inc/typename0000644000175000017500000000011612633316117016062 0ustar frankfranktypename: '<' identifier '>' { d_field = $2; } ; bisonc++-4.13.01/parser/inc/directives0000644000175000017500000001430112633316117016402 0ustar frankfrank_baseclass_header: BASECLASS_HEADER { d_expect = "baseclass header name"; } ; _baseclass_preinclude: BASECLASS_PREINCLUDE { d_expect = "baseclass pre-include name"; } ; _class_header: CLASS_HEADER { d_expect = "class header name"; } ; _class_name: CLASS_NAME { d_expect = "class name"; } ; _expect: EXPECT { d_expect = "number (of conflicts)"; } ; _filenames: FILENAMES { d_expect = "generic name of files"; } ; _implementation_header: IMPLEMENTATION_HEADER { d_expect = "implementation header name"; } ; _incrementPrecedence: { Terminal::incrementPrecedence(); } ; _left: LEFT _typesymbol { d_association = Terminal::LEFT; } ; _locationstruct: LOCATIONSTRUCT { d_expect = "Location struct definition"; } ; _ltype: LTYPE { d_expect = "Location type specification"; } ; _namespace: NAMESPACE { d_expect = "Namespace identifier"; } ; _nonassoc: NONASSOC _typesymbol { d_association = Terminal::NONASSOC; } ; _parsefun_source: PARSEFUN_SOURCE { d_expect = "File name for the parse() member"; } ; _pushPrecedence: { $$ = Terminal::sPrecedence(); Terminal::resetPrecedence(); } ; _required: REQUIRED { d_expect = "Required number of tokens between errors"; } ; _right: RIGHT _typesymbol { d_association = Terminal::RIGHT; } ; _scanner: SCANNER { d_expect = "Path to the scanner header filename"; } ; _scanner_class_name: SCANNER_CLASS_NAME { d_expect = "Name of the Scanner class"; } ; _scanner_token_function: SCANNER_TOKEN_FUNCTION { d_expect = "Scanner function returning the next token"; } ; _scanner_matched_text_function: SCANNER_MATCHED_TEXT_FUNCTION { d_expect = "Scanner function returning the matched text"; } ; _start: START { d_expect = "Start rule" ; } ; _symbol_exp: { d_expect = "identifier or 
character-constant"; } ; _symbol: QUOTE { defineTerminal(d_scanner.canonicalQuote(), Symbol::CHAR_TERMINAL); } | identifier optNumber { // try to define as defineTokenName($1, $2); // symbolic terminal } ; _symbolList: _symbolList optComma _symbol | _symbol ; _symbols: _symbol_exp _symbolList optSemiCol ; _target_directory: TARGET_DIRECTORY { d_expect = "target directory"; } ; _type: TYPE { d_expect = "type-name"; d_typeDirective = true; } ; _stype: STYPE { d_expect = "STYPE type name" ; } ; _typesymbol: { d_expect = "opt. identifier(s) or char constant(s)"; } ; _token: TOKEN _typesymbol { d_association = Terminal::UNDEFINED; } ; _union: UNION { d_expect = "Semantic value union definition"; } ; _polymorphic: POLYMORPHIC { setPolymorphicDecl(); } ; _typespec: ':' { d_scanner.beginTypeSpec(); } ; _polyspec: identifier _typespec identifier // identifier holds the typespec { addPolymorphic($1, $3); } ; _polyspecs: _polyspecs ';' _polyspec | _polyspec ; _directiveSpec: _baseclass_header STRING { d_options.setBaseClassHeader(); } | _baseclass_preinclude STRING { d_options.setPreInclude(); } | _class_header STRING { d_options.setClassHeader(); } | _class_name IDENTIFIER { d_options.setClassName(); } | DEBUGFLAG { d_options.setDebug(); } | ERROR_VERBOSE { d_options.setErrorVerbose(); } | _expect NUMBER { setExpectedConflicts(); } | _filenames STRING { d_options.setGenericFilename(); } | FLEX { d_options.setFlex(); } | _implementation_header STRING { d_options.setImplementationHeader(); } | _left _incrementPrecedence optTypename _symbols | _locationstruct BLOCK optSemiCol { d_options.setLocationDecl(d_scanner.block().str()); } | LSP_NEEDED { d_options.setLspNeeded(); } | _ltype STRING { d_options.setLtype(); } | _namespace IDENTIFIER { d_options.setNamespace(); } | NEG_DOLLAR { setNegativeDollarIndices(); } | NOLINES { d_options.unsetLines(); } | _nonassoc _incrementPrecedence optTypename _symbols | _parsefun_source STRING { d_options.setParsefunSource(); } | PRINT_TOKENS { d_options.setPrintTokens(); } | _required NUMBER { d_options.setRequiredTokens(d_scanner.number()); } | _right _incrementPrecedence optTypename _symbols | _scanner STRING { d_options.setScannerInclude(); } | _scanner_class_name STRING { d_options.setScannerClassName(); } | _scanner_token_function STRING { d_options.setScannerTokenFunction(); } | _scanner_matched_text_function STRING { d_options.setScannerMatchedTextFunction(); } | _start IDENTIFIER { setStart(); } | _stype STRING { d_options.setStype(); } | _target_directory STRING { d_options.setTargetDirectory(); } | _token optTypename _pushPrecedence // make sure %token precedences are zero _symbols { Terminal::set_sPrecedence($3); } | _type typename _symbols | _union BLOCK optSemiCol { setUnionDecl(); } | _polymorphic _polyspecs optSemiCol | WEAK_TAGS { d_options.unsetStrongTags(); } | error ; _directive: _directiveSpec { d_expect.erase(); d_typeDirective = false; } ; directives: directives _directive | // empty ; bisonc++-4.13.01/parser/inc/rules0000644000175000017500000000303612633316117015376 0ustar frankfrank_precSpec: IDENTIFIER { $$ = static_cast(IDENTIFIER); } | QUOTE { $$ = static_cast(QUOTE); } ; _productionElement: QUOTE { $$ = useTerminal(); } | IDENTIFIER { $$ = useSymbol(); } | BLOCK { $$ = d_scanner.block(); } | PREC _precSpec { setPrecedence($2); $$ = STYPE__(); // $$ = spSemBase(); // a 0-ptr spSemBase indicates // that a %prec has been handled } ; _productionElements: _productionElements _productionElement { $$ = handleProductionElements($1, $2); // process 
the first element, return the second // if the 1st element is a block, handle it as a nested block } | _productionElement { $$ = $1; } ; _production: _productionElements { handleProductionElement($1); // process the returned element: if it's a block, it becomes the // production's action block } | { // nothing to do for this empty production. But do check for a typed // nonterminal. checkEmptyBlocktype(); } ; _productionSeparator: '|' { d_rules.addProduction(d_scanner.lineNr()); } ; _productionList: _productionList _productionSeparator _production | _production ; _ruleName: identifier ':' { openRule($1); } ; _rule: _ruleName _productionList ';' ; rules: rules _rule | // empty ; bisonc++-4.13.01/parser/inc/opt0000644000175000017500000000075112633316117015047 0ustar frankfrankoptComma: ',' | // empty ; optNumber: NUMBER { $$ = true; } | { $$ = false; } ; optSemiCol: ';' | // empty ; _tokenname: { d_expect = "token name"; } ; optTypename: typename _tokenname | _tokenname { d_field.clear(); } ; optTwo_percents: TWO_PERCENTS { wmsg << "Ignoring all input beyond the second %% token" << endl; ACCEPT(); } | // empty ; bisonc++-4.13.01/parser/inc/identifier0000644000175000017500000000010112633316117016354 0ustar frankfrankidentifier: IDENTIFIER { $$ = d_matched; } ; bisonc++-4.13.01/parser/handledollar.cc0000644000175000017500000000201312633316117016502 0ustar frankfrank#include "parser.ih" // See 'README.dollar' in this directory for a description of the actions // involved at these alternatives. bool Parser::handleDollar(Block &block, AtDollar const &atd, int nElements) { if (errIndexTooLarge(atd, nElements)) return atd.returnValue(); ostringstream os; string replacement; if (atd.returnValue()) replacement = s_semanticValue; else { os << s_semanticValueStack << "[" << indexToOffset(atd.nr(), nElements) << "]"; replacement = os.str(); } switch (d_semType) { case SINGLE: returnSingle(atd); break; case UNION: replacement += returnUnion(atd); break; case POLYMORPHIC: replacement += returnPolymorphic(atd); break; } // replace $$ by the semantic value block.replace(atd.pos(), atd.length(), replacement); return atd.returnValue(); // $$ is used } bisonc++-4.13.01/parser/error.cc0000644000175000017500000000102712633316117015206 0ustar frankfrank#include "parser.ih" void Parser::error(char const *msg) { static bool repeated; static string lastMsg; if (d_expect.empty()) { if (not repeated) emsg << "unrecognized input (`" << d_matched << "') encountered." << endl; repeated = true; } else { if (lastMsg != d_expect) emsg << "at `" << d_matched << "': " << d_expect << " expected." 
<< endl; repeated = false; } lastMsg = d_expect; } bisonc++-4.13.01/parser/grammar0000644000175000017500000000257612633316117015131 0ustar frankfrank%no-lines %filenames parser %scanner ../scanner/scanner.h %baseclass-preinclude preheaders.h %debug %print-tokens %token BASECLASS_HEADER BASECLASS_PREINCLUDE BLOCK CLASS_HEADER CLASS_NAME DEBUGFLAG ERROR_VERBOSE EXPECT FILENAMES FLEX IDENTIFIER IMPLEMENTATION_HEADER LEFT LOCATIONSTRUCT LSP_NEEDED LTYPE NAMESPACE NEG_DOLLAR NOLINES NONASSOC NUMBER PARSEFUN_SOURCE POLYMORPHIC PREC PRINT_TOKENS QUOTE REQUIRED RIGHT SCANNER SCANNER_CLASS_NAME SCANNER_MATCHED_TEXT_FUNCTION SCANNER_TOKEN_FUNCTION START STRING STYPE TARGET_DIRECTORY TOKEN TWO_PERCENTS TYPE UNION WEAK_TAGS %polymorphic BOOL: bool; SIZE_T: size_t; TEXT: std::string; BLOCK: Block; TERMINAL: Terminal *; SYMBOL: Symbol *; %type identifier %type optNumber %type _pushPrecedence _precSpec // NEW: USE THIS TO ASSIGN A GENERIC POLYMORPHIC VALUE TO (NON-)TERMINALS // these (non-)terminals MUST explicitly return an STYPE__ %type _productionElements _productionElement %% input: directives _two_percents rules optTwo_percents ; _two_percents: TWO_PERCENTS { expectRules(); } ; %include inc/identifier %include inc/typename %include inc/opt %include inc/directives %include inc/rules bisonc++-4.13.01/parser/semtag.cc0000644000175000017500000000446112633316117015342 0ustar frankfrank#include "parser.ih" Parser::SemTag Parser::semTag(char const *label, AtDollar const &atd, bool (Parser::*testID)(std::string const &) const) const { string const &stype = atd.returnValue() ? d_rules.sType() : d_rules.sType(atd.nr()); // get the rule/element's stype string const *id = &atd.id(); if (atd.nr() < 0) { negativeIndex(atd); return NONE; } if // with polymorphic: warn if an untyped $-value is used ( d_semType == POLYMORPHIC && //id->empty() not atd.returnValue() && stype.empty() && not atd.stype() ) warnUntaggedValue(atd); if (stype.empty() || stype == s_stype__) // no rule stype { if (atd.id().empty()) // no explicit tag either return NONE; if (atd.stype()) // STYPE__ requested return NONE; if ((this->*testID)(atd.id())) return EXPLICIT; } else if ((this->*testID)(stype)) // auto tag/field { if (atd.id().empty()) // but no explicit tag/field { if (atd.callsMember()) { warnAutoIgnored(label, atd); return NONE; } return AUTO; } if (atd.stype()) // ignoring auto tag/field return NONE; if ((this->*testID)(atd.id())) // tag/field override { if (d_semType == POLYMORPHIC) warnAutoOverride(atd); return EXPLICIT; } } else // illegal stype { if (atd.id().empty()) // no explicit type, but id = &stype; // auto is illegal: set ptr. else { if (atd.stype()) // no explicit tag requested return NONE; if ((this->*testID)(atd.id())) { if (d_semType == POLYMORPHIC) warnAutoOverride(atd); return EXPLICIT; } } } errNoSemantic(label, atd, *id); return NONE; } bisonc++-4.13.01/parser/warnuntaggedvalue.cc0000644000175000017500000000037512633316117017605 0ustar frankfrank#include "parser.ih" void Parser::warnUntaggedValue(AtDollar const &atd) const { wmsg.setLineNr(atd.lineNr()); wmsg << "rule " << &d_rules.lastProduction() << ":\n" "\t\tusing untagged semantic value `" << atd.text() << "'." 
<< endl; } bisonc++-4.13.01/parser/cleanup.cc0000644000175000017500000000135712633316117015512 0ustar frankfrank#include "parser.ih" void Parser::cleanup() { d_rules.clearLocations(); // locations aren't required anymore if (!d_rules.hasRules() || !d_rules.nProductions()) fmsg << "No production rules" << endl; emsg.setTag("error"); if (emsg.count()) // Terminate if parsing produced errors. throw 1; d_options.setAccessorVariables(); d_rules.augmentGrammar(d_symtab.lookup(d_rules.startRule())); d_options.setVerbosity(); // prepare Msg for verbose output // (--verbose, --construction) d_options.showFilenames(); // shows the verbosity-filename, otherwise // independent of the verbosity setting } bisonc++-4.13.01/parser/definepathname.cc0000644000175000017500000000027412633316117017030 0ustar frankfrank#include "parser.ih" void Parser::definePathname(string *target) { if (target->empty()) // assign only if not yet assigned *target = Options::undelimit(*target); } bisonc++-4.13.01/parser/setprecedence.cc0000644000175000017500000000115612633316117016671 0ustar frankfrank#include "parser.ih" void Parser::setPrecedence(int type) { Symbol *sp = 0; // to prevent `sp uninitialized' warning by the // compiler switch (type) { case IDENTIFIER: sp = d_symtab.lookup(d_matched); break; case QUOTE: sp = d_symtab.lookup(d_scanner.canonicalQuote()); break; } if (sp && sp->isTerminal()) d_rules.setPrecedence(Terminal::downcast(sp)); else emsg << "`%prec " << d_matched << "': `" << d_matched << "' must be a declared terminal token" << endl; } bisonc++-4.13.01/parser/returnunion.cc0000644000175000017500000000077012633316117016451 0ustar frankfrank#include "parser.ih" string Parser::returnUnion(AtDollar const &atd) const { string ret; switch (semTag("field", atd, &Parser::idOK)) { case NONE: break; case AUTO: ret = "." + ( atd.returnValue() ? d_rules.sType() : d_rules.sType(atd.nr()) ); break; case EXPLICIT: ret = "." + atd.id(); break; } return ret; } bisonc++-4.13.01/parser/parserbase.h0000644000175000017500000002700612633316117016053 0ustar frankfrank// Generated by Bisonc++ V4.13.00 on Wed, 14 Oct 2015 13:59:55 +0200 #ifndef ParserBase_h_included #define ParserBase_h_included #include #include #include // $insert preincludes #include #include "preheaders.h" // $insert debugincludes #include #include #include #include #include namespace // anonymous { struct PI__; } // $insert polymorphic enum class Tag__ { SYMBOL, TERMINAL, BOOL, TEXT, SIZE_T, BLOCK, }; namespace Meta__ { template struct TypeOf; template struct TagOf; // $insert polymorphicSpecializations template <> struct TagOf { static Tag__ const tag = Tag__::SYMBOL; }; template <> struct TagOf { static Tag__ const tag = Tag__::TERMINAL; }; template <> struct TagOf { static Tag__ const tag = Tag__::BOOL; }; template <> struct TagOf { static Tag__ const tag = Tag__::TEXT; }; template <> struct TagOf { static Tag__ const tag = Tag__::SIZE_T; }; template <> struct TagOf { static Tag__ const tag = Tag__::BLOCK; }; template <> struct TypeOf { typedef Symbol * type; }; template <> struct TypeOf { typedef Terminal * type; }; template <> struct TypeOf { typedef bool type; }; template <> struct TypeOf { typedef std::string type; }; template <> struct TypeOf { typedef size_t type; }; template <> struct TypeOf { typedef Block type; }; // The Base class: // Individual semantic value classes are derived from this class. 
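// (Editorial note added here, not part of the generated header: the derived
//  classes referred to above are the Semantic<Tag__::...> instantiations
//  defined further down; the TypeOf/TagOf specializations shown earlier tie
//  each %polymorphic tag -- SYMBOL, TERMINAL, BOOL, TEXT, SIZE_T, BLOCK --
//  to its C++ type.)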
// This class offers a member returning the value's Tag__ // and two member templates get() offering const/non-const access to // the actual semantic value type. class Base { Tag__ d_tag; protected: Base(Tag__ tag); public: Base(Base const &other) = delete; Tag__ tag() const; template typename TypeOf::type &get(); template typename TypeOf::type const &get() const; }; // The class Semantic is derived from Base. It stores a particular // semantic value type. template class Semantic: public Base { typedef typename TypeOf::type DataType; DataType d_data; public: // The constructor forwards arguments to d_data, allowing // it to be initialized using whatever constructor is // available for DataType template Semantic(Params &&...params); DataType &data(); DataType const &data() const; }; // If Type is default constructible, Initializer::value is // initialized to new Type, otherwise it's initialized to 0, allowing // struct SType: public std::shared_ptr to initialize its // shared_ptr class whether or not Base is default // constructible. template struct Initializer { static Type *value; }; template Type *Initializer::value = new Type; template struct Initializer { static constexpr Type *value = 0; }; // The class Stype wraps the shared_ptr holding a pointer to Base. // It becomes the polymorphic STYPE__ // It also wraps Base's get members, allowing constructions like // $$.get to be used, rather than $$->get. // Its operator= can be used to assign a Semantic * // directly to the SType object. The free functions (in the parser's // namespace (if defined)) semantic__ can be used to obtain a // Semantic *. struct SType: public std::shared_ptr { SType(); template SType &operator=(Tp_ &&value); Tag__ tag() const; // this get()-member checks for 0-pointer and correct tag // in shared_ptr, and resets the shared_ptr's Base * // to point to Meta::__Semantic() if not template typename TypeOf::type &get(); template typename TypeOf::type const &get() const; // the data()-member does not check, and may result in a // segfault if used incorrectly template typename TypeOf::type &data(); template typename TypeOf::type const &data() const; template void emplace(Args &&...args); }; } // namespace Meta__ class ParserBase { public: // $insert tokens // Symbolic tokens: enum Tokens__ { BASECLASS_HEADER = 257, BASECLASS_PREINCLUDE, BLOCK, CLASS_HEADER, CLASS_NAME, DEBUGFLAG, ERROR_VERBOSE, EXPECT, FILENAMES, FLEX, IDENTIFIER, IMPLEMENTATION_HEADER, LEFT, LOCATIONSTRUCT, LSP_NEEDED, LTYPE, NAMESPACE, NEG_DOLLAR, NOLINES, NONASSOC, NUMBER, PARSEFUN_SOURCE, POLYMORPHIC, PREC, PRINT_TOKENS, QUOTE, REQUIRED, RIGHT, SCANNER, SCANNER_CLASS_NAME, SCANNER_MATCHED_TEXT_FUNCTION, SCANNER_TOKEN_FUNCTION, START, STRING, STYPE, TARGET_DIRECTORY, TOKEN, TWO_PERCENTS, TYPE, UNION, WEAK_TAGS, }; // $insert STYPE typedef Meta__::SType STYPE__; private: int d_stackIdx__; std::vector d_stateStack__; std::vector d_valueStack__; protected: enum Return__ { PARSE_ACCEPT__ = 0, // values used as parse()'s return values PARSE_ABORT__ = 1 }; enum ErrorRecovery__ { DEFAULT_RECOVERY_MODE__, UNEXPECTED_TOKEN__, }; bool d_debug__; size_t d_nErrors__; size_t d_requiredTokens__; size_t d_acceptedTokens__; int d_token__; int d_nextToken__; size_t d_state__; STYPE__ *d_vsp__; STYPE__ d_val__; STYPE__ d_nextVal__; ParserBase(); // $insert debugdecl static std::ostringstream s_out__; std::string symbol__(int value) const; std::string stype__(char const *pre, STYPE__ const &semVal, char const *post = "") const; static std::ostream &dflush__(std::ostream 
&out); void ABORT() const; void ACCEPT() const; void ERROR() const; void clearin(); bool debug() const; void pop__(size_t count = 1); void push__(size_t nextState); void popToken__(); void pushToken__(int token); void reduce__(PI__ const &productionInfo); void errorVerbose__(); size_t top__() const; public: void setDebug(bool mode); }; inline bool ParserBase::debug() const { return d_debug__; } inline void ParserBase::setDebug(bool mode) { d_debug__ = mode; } inline void ParserBase::ABORT() const { // $insert debug if (d_debug__) s_out__ << "ABORT(): Parsing unsuccessful" << "\n" << dflush__; throw PARSE_ABORT__; } inline void ParserBase::ACCEPT() const { // $insert debug if (d_debug__) s_out__ << "ACCEPT(): Parsing successful" << "\n" << dflush__; throw PARSE_ACCEPT__; } inline void ParserBase::ERROR() const { // $insert debug if (d_debug__) s_out__ << "ERROR(): Forced error condition" << "\n" << dflush__; throw UNEXPECTED_TOKEN__; } // $insert polymorphicInline namespace Meta__ { inline Base::Base(Tag__ tag) : d_tag(tag) {} inline Tag__ Base::tag() const { return d_tag; } template template inline Semantic::Semantic(Params &&...params) : Base(tg_), d_data(std::forward(params) ...) {} template inline typename TypeOf::type &Semantic::data() { return d_data; } template inline typename TypeOf::type const &Semantic::data() const { return d_data; } template inline typename TypeOf::type &Base::get() { return static_cast *>(this)->data(); } template inline typename TypeOf::type const &Base::get() const { return static_cast *>(this)->data(); } inline SType::SType() : std::shared_ptr{ Initializer< std::is_default_constructible::value, Base >::value } {} inline Tag__ SType::tag() const { return std::shared_ptr::get()->tag(); } template inline typename TypeOf::type &SType::get() { // if we're not yet holding a (tg_) value, initialize to // a Semantic holding a default value if (std::shared_ptr::get() == 0 || tag() != tg_) { typedef Semantic SemType; if (not std::is_default_constructible< typename TypeOf::type >::value ) throw std::runtime_error( "STYPE::get: no default constructor available"); reset(new SemType); } return std::shared_ptr::get()->get(); } template inline typename TypeOf::type &SType::data() { return std::shared_ptr::get()->get(); } template inline typename TypeOf::type const &SType::data() const { return std::shared_ptr::get()->get(); } template inline void SType::emplace(Params &&...params) { reset(new Semantic(std::forward(params) ...)); } template struct Assign; template struct Assign { static SType &assign(SType *lhs, Tp_ &&tp); }; template struct Assign { static SType &assign(SType *lhs, Tp_ const &tp); }; template <> struct Assign { static SType &assign(SType *lhs, SType const &tp); }; template inline SType &Assign::assign(SType *lhs, Tp_ &&tp) { lhs->reset(new Semantic::tag>(std::move(tp))); return *lhs; } template inline SType &Assign::assign(SType *lhs, Tp_ const &tp) { lhs->reset(new Semantic::tag>(tp)); return *lhs; } inline SType &Assign::assign(SType *lhs, SType const &tp) { return lhs->operator=(tp); } template inline SType &SType::operator=(Tp_ &&rhs) { return Assign< std::is_rvalue_reference::value, typename std::remove_reference::type >::assign(this, std::forward(rhs)); } } // namespace Meta__ // As a convenience, when including ParserBase.h its symbols are available as // symbols in the class Parser, too. 
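// (Illustrative sketch added here, not part of the generated header: with the
//  #define below in effect, a translation unit that only includes parserbase.h
//  can already write, e.g.,
//
//      int token = Parser::IDENTIFIER;   // resolves to ParserBase::IDENTIFIER
//      Parser::STYPE__ semVal;           // resolves to ParserBase::STYPE__
//
//  The generated parser.h presumably #undefs Parser again before declaring the
//  real class Parser: public ParserBase.)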
#define Parser ParserBase #endif bisonc++-4.13.01/parser/returnpolymorphic.cc0000644000175000017500000000125012633316117017660 0ustar frankfrank#include "parser.ih" string Parser::returnPolymorphic(AtDollar const &atd) const { string ret = atd.returnValue() ? ".get" : ".data"; switch (semTag("tag", atd, &Parser::findTag)) { case AUTO: // use the %type specified semantic value ret += "()"; break; case EXPLICIT: ret += "()"; break; case NONE: ret.clear(); break; } return ret; } bisonc++-4.13.01/parser/parser.ih0000644000175000017500000000212512633316117015364 0ustar frankfrank // Include this file in the sources of the class Parser. // $insert class.h #include "parser.h" #include #include #include "../block/block.h" #include "../options/options.h" using namespace std; using namespace FBB; inline bool Parser::noID(string const &) const { return false; } inline bool Parser::idOK(string const &) const { return true; } inline bool Parser::findTag(string const &tag) const { return d_polymorphic.find(tag) != d_polymorphic.end(); } inline int Parser::nComponents(int nElements) { return nElements >= 0 ? nElements : -nElements - 1; } inline void Parser::print() { if (d_arg.option('T')) print__(); } inline void Parser::setNegativeDollarIndices() { d_negativeDollarIndices = true; } inline void Parser::setExpectedConflicts() { Rules::setExpectedConflicts(d_scanner.number()); } // $insert lex inline int Parser::lex() { return d_scanner.lex(); } inline void Parser::exceptionHandler__(std::exception const &exc) { throw; // re-implement to handle exceptions thrown by actions } bisonc++-4.13.01/parser/driver/0000755000175000017500000000000012633316117015041 5ustar frankfrankbisonc++-4.13.01/parser/driver/build0000755000175000017500000003746612633316117016106 0ustar frankfrank#!/usr/bin/icmake -qt/tmp/driver // script generated by the C++ icmake script version 2.30 /* Configurable defines: CLASSES: string of directory-names under which sources of classes are found. E.g., CLASSES = "class1 class2" All class-names must be stored in one string. If classes are removed from the CLASSES definition or if the names in the CLASSES definition are reordered, the compilation should start again from scratch. */ string CLASSES; void setClasses() { // ADD ADDITIONAL DIRECTORIES CONTAINING SOURCES OF CLASSES HERE // Use the construction `CLASSES += "classname1 classname2";' etc. CLASSES += " "; } /* Default values for the following variables are found in $IM/default/defines.im BISON_FLAGS: This directive is only used when a grammar is generated using bison++. It defines the set of flags that are given to bison++ when bison++ generates the parser. By default the following flags are specified: -v -l The -d and -o flags are always provided (not configurable) BUILD_LIBRARY: define this if you want to create a library for the object modules. Undefined by default: so NO LIBRARY IS BUILT. This links ALL object files to a program, which is a faster process than linking to a library. But it can bloat the executable: all o-modules, rather than those that are really used, become part of the program's code. BUILD_PROGRAM: define if a program is to be built. If not defined, library maintenance is assumed (by default it is defined to the name of the program to be created). COMPILER: The compiler to use. COPT: C-options used by COMPILER LOPT: Define this (default: to "`wx-config --lib`") if a wxWindows program is constructed. 
In that case, you probably also want to define the COPT option `wx-config --cxxflags`, using, e.g., the following definition: #define COPT "-Wall `wx-config --cxxflags`" ECHO_REQUEST: ON (default) if command echoing is wanted, otherwise: set to OFF GDB: define if gdb-symbolic debug information is wanted (not defined by default) GRAMMAR_LINES: When this directive is defined, #line directives will be generated at the first C++ compound statement in each individual grammar specification file. Undefine if no such #line directives are required. LIBS: Extra libraries used for linking LIBPATH: Extra library-searchpaths used for linking QT: Define this (default: to "qt") if the unthreaded QT library is used. Define as "qt-mt" if the threaded QT library is used. If set, header files are grepped for the occurrence of the string '^[[:space:]]*Q_OBJECT[[:space:]]*$'. If found, moc -o moc.cc .h is called if the moc-file doesn't exist or is older than the .h file. Also, if defined the proper QT library is linked, assuming that the library is found in the ld-search path (E.g., see the environment variable $LIBRARY_PATH). Note that namespaces are NOT part of the build-script: they are only listed below for convenience. When they must be altered, the defaults must be changed in $IM/default/defines.im RELINK: Defined by default, causing a program to be relinked every time the script is called. Do not define it if relinking should only occur if a source is compiled. No effect for library maintenance. Current values: */ //#define BUILD_LIBRARY #define BUILD_PROGRAM "driver" #define COMPILER "g++" #define COPT "-Wall" //#define LOPT "`wx-config --libs`" #define ECHO_REQUEST 1 //#define GDB "-g" #define LIBS "" #define LIBPATH "" // local namespace is: FBB // using-declarations generated for: std:FBB // qt-mt can be used to select the threaded QT library //#define QT "qt" // NO CONFIGURABLE PARTS BELOW THIS LINE /* V A R S . I M */ string // contain options for cwd, // current WD libs, // extra libs, e.g., "-lrss -licce" libpath, // extra lib-paths, eg, "-L../rss" copt, // Compiler options lopt, // Linker options libxxx, // full library-path ofiles, // wildcards for o-files sources, // sources to be used current, // contains name of current dir. programname; // the name of the program to create int nClasses, // number of classes/subdirectories program; // 1: program is built list classes; // list of classes/directories /* parser.im */ void parser() { #ifdef GRAMBUILD chdir("parser/gramspec"); #ifdef GRAMMAR_LINES system("./grambuild lines"); #else system("./grambuild"); #endif chdir(".."); if ( exists("grammar") && "grammar" younger "parse.cc" ) // new parser needed #ifdef BISON_FLAGS exec("bisonc++", BISON_FLAGS, "grammar"); #else exec("bisonc++", "grammar"); #endif chdir(".."); #endif } /* S C A N N E R . I M */ void scanner() { string interactive; #ifdef INTERACTIVE interactive = "-I"; #endif #ifdef GRAMBUILD chdir("scanner"); if ( // new lexer needed exists("lexer") && ( "lexer" younger "yylex.cc" || ( exists("../parser/parser.h") && "../parser/parser.h" younger "yylex.cc" ) ) ) exec("flex", interactive, "lexer"); chdir(".."); #endif } /* I N I T I A L . 
I M */ void initialize() { echo(ECHO_REQUEST); sources = "*.cc"; ofiles = "o/*.o"; // std set of o-files copt = COPT; #ifdef GDB copt += " " + GDB; #endif #ifdef BUILD_PROGRAM program = 1; programname = BUILD_PROGRAM; #else program = 0; #endif; cwd = chdir("."); #ifdef GRAMBUILD if (exists("parser")) // subdir parser exists { CLASSES += "parser "; parser(); } if (exists("scanner")) // subdir scanner exists { CLASSES += "scanner "; scanner(); } #endif setClasses(); // remaining classes classes = strtok(CLASSES, " "); // list of classes nClasses = sizeof(classes); } /* M O C . I M */ void moc(string class) { string hfile; string mocfile; int ret; hfile = class + ".h"; mocfile = "moc" + class + ".cc"; if ( hfile younger mocfile // no mocfile or younger h file && // and Q_OBJECT found in .h file !system(P_NOCHECK, "grep '^[[:space:]]*Q_OBJECT[;[:space:]]*$' " + hfile) ) // then call moc. system("moc -o " + mocfile + " " + hfile); } /* O B J F I L E S . I M */ list objfiles(list files) { string file, objfile; int i; for (i = 0; i < sizeof(files); i++) { file = element(i, files); // determine element of the list objfile = "./o/" + change_ext(file, "o"); // make obj-filename if (objfile younger file) // objfile is younger { files -= (list)file; // remove the file from the list i--; // reduce i to test the next } } return (files); } /* A L T E R E D . I M */ list altered(list files, string target) { int i; string file; for (i = 0; i < sizeof(files); i++) // try all elements of the list { file = element(i, files); // use element i of the list if (file older target) // a file is older than the target { files -= (list)file; // remove the file from the list i--; // reduce i to inspect the next } // file of the list } return (files); // return the new list } /* F I L E L I S T . I M */ list file_list(string type, string library) { list files; files = makelist(type); // make all files of certain type #ifdef BUILD_LIBRARY files = altered(files, library); // keep all files newer than lib. #endif files = objfiles(files); // remove if younger .obj exist return (files); } /* L I N K . I M */ void link(string library) { printf("\n"); exec(COMPILER, "-o", programname, #ifdef BUILD_LIBRARY "-l" + library, #else ofiles, #endif libs, #ifdef QT "-l" + QT, #endif "-L.", libpath, lopt #ifdef LOPT , LOPT #endif #ifndef GDB , "-s" #endif ); printf("ok: ", programname, "\n"); } /* P R E F I X C L . I M */ void prefix_class(string class_id) { list o_files; string o_file; int i; chdir("o"); o_files = makelist("*.o"); for (i = 0; o_file = element(i, o_files); i++) exec("mv", o_file, class_id + o_file); chdir(".."); } /* R M C L A S S P . I M */ #ifdef BUILD_LIBRARY string rm_class_id(string class_id, string ofile) { string ret; int index, n; n = strlen(ofile); for (index = strlen(class_id); index < n; index++) ret += element(index, ofile); return ret; } #endif void rm_class_prefix(string class_id) { #ifdef BUILD_LIBRARY list o_files; string o_file; int i; chdir("o"); o_files = makelist("*.o"); for (i = 0; o_file = element(i, o_files); i++) exec("mv", o_file, rm_class_id(class_id, o_file)); chdir(".."); #endif } /* C C O M P I L E . I M */ void c_compile(list cfiles) { string nextfile; int i; if (!exists("o")) system("mkdir o"); if (sizeof(cfiles)) // files to compile ? 
{ printf("\ncompiling: ", current, "\n\n"); // compile all files separately for (i = 0; nextfile = element(i, cfiles); i++) exec(COMPILER, "-c -o o/" + change_ext(nextfile, "o"), copt, nextfile); printf("\n"); } printf("ok: ", current, "\n"); } /* U P D A T E L I . I M */ void updatelib(string library) { #ifdef BUILD_LIBRARY list arlist, objlist; string to, from; objlist = makelist("o/*.o"); if (!sizeof(objlist)) return; printf("\n"); exec("ar", "rvs", library, "o/*.o"); exec("rm", "o/*.o"); printf("\n"); #endif } /* S T D C P P . I M */ void std_cpp(string library) { list cfiles; cfiles = file_list(sources, library); // make list of all cpp-files c_compile(cfiles); // compile cpp-files } /* C P P M A K E . C CPP files are processed by stdmake. Arguments of CPPMAKE: cpp_make( string mainfile, : name of the main .cpp file, or "" for library maintenance string library, : name of the local library to use/create (without lib prefix or .a/.so suffix (E.g., use `main' for `libmain.a') ) Both mainfile and library MUST be in the current directory */ void cpp_make(string mainfile, string library) { int index; string class; if (nClasses) ofiles += " */o/*.o"; // set ofiles for no LIBRARY use // make library name #ifdef BUILD_LIBRARY libxxx = chdir(".") + "lib" + library + ".a"; #endif // first process all classes for (index = 0; index < nClasses; index++) { class = element(index, classes); // next class to process chdir(class); // change to directory current = "subdir " + class; #ifdef QT moc(class); // see if we should call moc #endif std_cpp(libxxx); // compile all files chdir(cwd); // go back to parent dir } current = "auxiliary " + sources + " files"; std_cpp(libxxx); // compile all files in current dir #ifdef BUILD_LIBRARY // prefix class-number for .o files for (index = 0; index < nClasses; index++) { current = element(index, classes); // determine class name chdir( current); // chdir to a class directory. prefix_class((string)index); updatelib(libxxx); chdir(cwd); // go back to parent dir } current = ""; // no class anymore updatelib(libxxx); // update lib in current dir #endif if (mainfile != "") // mainfile -> do link { link(library); printf ( "\nProgram construction completed.\n" "\n" ); } } /* S E T L I B S . 
I M */ void setlibs() { #ifdef LIBS int n, index; list cut; cut = strtok(LIBS, " "); // cut op libraries n = sizeof(cut); for (index = 0; index < n; index++) libs += " -l" + element(index, cut); cut = strtok(LIBPATH, " "); // cut up the paths n = sizeof(cut); for (index = 0; index < n; index++) libpath += " -L" + element(index, cut); #endif } void main() { initialize(); setlibs(); #ifdef BUILD_PROGRAM cpp_make ( "driver.cc", // program source "driver" // static program library ); #else cpp_make ( "", "driver" // static- or so-library ); #endif } bisonc++-4.13.01/parser/driver/driver.h0000644000175000017500000000024612633316117016507 0ustar frankfrank#ifndef _INCLUDED_DRIVER_H_ #define _INCLUDED_DRIVER_H_ #include #include namespace FBB { } using namespace std; using namespace FBB; #endif bisonc++-4.13.01/parser/driver/driver.cc0000644000175000017500000000045312633316117016645 0ustar frankfrank/* driver.cc */ #include "driver.h" #include "../inputgrammar.h" using namespace std; using namespace FBB; int main(int argc, char **argv, char **envp) { Arg &arg = Arg::initialize("", argc, argv); InputGrammar input; input.parse(); return 0; } bisonc++-4.13.01/parser/handleproductionelements.cc0000644000175000017500000000211012633316117021146 0ustar frankfrank#include "parser.ih" Parser::STYPE__ Parser::handleProductionElements(STYPE__ &first, STYPE__ const &second) { if (!first) // the first PTag was a %prec specification return second; if (!second) // the second PTag was a %prec specification: return first; // postpone handling of first // maybe also when currentRule == 0 ? See addProduction if (!d_rules.hasRules()) // may happen if the first rule could not be return first; // defined because of token/rulename clash switch (first->tag()) { case Tag__::TERMINAL: d_rules.addElement(first.get()); break; case Tag__::SYMBOL: d_rules.addElement(first.get()); break; case Tag__::BLOCK: nestedBlock(first.get()); break; default: // can't occur, but used to keep the break; // compiler from generating a warning } return second; } bisonc++-4.13.01/parser/data.cc0000644000175000017500000000172412633316117014772 0ustar frankfrank#include "parser.ih" size_t Parser::s_nHidden; ostringstream Parser::s_hiddenName; char Parser::s_semanticValue[] = "d_val__"; // name of the semantic value variable // used by the generated parser. char Parser::s_semanticValueStack[] = "d_vsp__"; // name of the semantic value stack // used by the generated parser char Parser::s_locationValueStack[] = "d_lsp__"; // name of the location value stack // used by the generated parser char Parser::s_locationValue[] = "d_loc__"; // name of the location value variable // used by the generated parser (@0) char const Parser::s_stype__[] = "STYPE__"; // generic semantic value for POLYMORPHIC bisonc++-4.13.01/parser/savedollar1.cc0000644000175000017500000000035312633316117016273 0ustar frankfrank#include "parser.ih" void Parser::saveDollar1(Block &block, int offset) { ostringstream out; out << s_semanticValue << " = " << s_semanticValueStack << '[' << offset << "];\n"; block.insert(0, out.str()); } bisonc++-4.13.01/parser/warnautooverride.cc0000644000175000017500000000103412633316117017453 0ustar frankfrank#include "parser.ih" void Parser::warnAutoOverride(AtDollar const &atd) const { if (atd.text() != d_rules.sType()) return; wmsg.setLineNr(atd.lineNr()); wmsg << &d_rules.lastProduction() << ":\n" "\t\t`" << atd.text() << "' overrides auto tag <" << d_rules.sType() << "> for `" << ( atd.nr() == numeric_limits::max() ? 
d_rules.name() : d_rules.symbol(atd.nr())->name() ) << "'." << endl; } bisonc++-4.13.01/parser/multiplydefined.cc0000644000175000017500000000044612633316117017257 0ustar frankfrank#include "parser.ih" void Parser::multiplyDefined(Symbol const *sp) { Terminal::inserter(&Terminal::plainName); NonTerminal::inserter(&NonTerminal::plainName); emsg << (sp->isTerminal() ? "Terminal " : "Nonterminal ") << sp << " multiply defined" << endl; } bisonc++-4.13.01/parser/predefine.cc0000644000175000017500000000043712633316117016022 0ustar frankfrank#include "parser.ih" void Parser::predefine(Terminal const *terminal) { d_symtab.insert ( Symtab::value_type ( terminal->name(), d_rules.insert(const_cast(terminal), terminal->name()) ) ).first->second->used(); } bisonc++-4.13.01/parser/addpolymorphic.cc0000644000175000017500000000054312633316117017075 0ustar frankfrank#include "parser.ih" void Parser::addPolymorphic(string const &tag, string const &typeSpec) { if (d_semType != POLYMORPHIC) return; if (d_polymorphic.find(tag) != d_polymorphic.end()) emsg << "Polymorphic semantic tag `" << tag << "' multiply defined" << endl; else d_polymorphic[tag] = typeSpec; } bisonc++-4.13.01/parser/parse.cc0000644000175000017500000020223312633316117015171 0ustar frankfrank// Generated by Bisonc++ V4.13.00 on Wed, 14 Oct 2015 13:59:55 +0200 // $insert class.ih #include "parser.ih" // The FIRST element of SR arrays shown below uses `d_type', defining the // state's type, and `d_lastIdx' containing the last element's index. If // d_lastIdx contains the REQ_TOKEN bitflag (see below) then the state needs // a token: if in this state d_token__ is _UNDETERMINED_, nextToken() will be // called // The LAST element of SR arrays uses `d_token' containing the last retrieved // token to speed up the (linear) seach. Except for the first element of SR // arrays, the field `d_action' is used to determine what to do next. If // positive, it represents the next state (used with SHIFT); if zero, it // indicates `ACCEPT', if negative, -d_action represents the number of the // rule to reduce to. // `lookup()' tries to find d_token__ in the current SR array. If it fails, and // there is no default reduction UNEXPECTED_TOKEN__ is thrown, which is then // caught by the error-recovery function. // The error-recovery function will pop elements off the stack until a state // having bit flag ERR_ITEM is found. This state has a transition on _error_ // which is applied. In this _error_ state, while the current token is not a // proper continuation, new tokens are obtained by nextToken(). If such a // token is found, error recovery is successful and the token is // handled according to the error state's SR table and parsing continues. // During error recovery semantic actions are ignored. // A state flagged with the DEF_RED flag will perform a default // reduction if no other continuations are available for the current token. // The ACCEPT STATE never shows a default reduction: when it is reached the // parser returns ACCEPT(). During the grammar // analysis phase a default reduction may have been defined, but it is // removed during the state-definition phase. // So: // s_x[] = // { // [_field_1_] [_field_2_] // // First element: {state-type, idx of last element}, // Other elements: {required token, action to perform}, // ( < 0: reduce, // 0: ACCEPT, // > 0: next state) // Last element: {set to d_token__, action to perform} // } // When the --thread-safe option is specified, all static data are defined as // const. 
If --thread-safe is not provided, the state-tables are not defined // as const, since the lookup() function below will modify them // $insert debugincludes #include #include #include #include #include namespace // anonymous { char const author[] = "Frank B. Brokken (f.b.brokken@rug.nl)"; enum { STACK_EXPANSION = 5 // size to expand the state-stack with when // full }; enum ReservedTokens { PARSE_ACCEPT = 0, // `ACCEPT' TRANSITION _UNDETERMINED_ = -2, _EOF_ = -1, _error_ = 256 }; enum StateType // modify statetype/data.cc when this enum changes { NORMAL, ERR_ITEM, REQ_TOKEN, ERR_REQ, // ERR_ITEM | REQ_TOKEN DEF_RED, // state having default reduction ERR_DEF, // ERR_ITEM | DEF_RED REQ_DEF, // REQ_TOKEN | DEF_RED ERR_REQ_DEF // ERR_ITEM | REQ_TOKEN | DEF_RED }; struct PI__ // Production Info { size_t d_nonTerm; // identification number of this production's // non-terminal size_t d_size; // number of elements in this production }; struct SR__ // Shift Reduce info, see its description above { union { int _field_1_; // initializer, allowing initializations // of the SR s_[] arrays int d_type; int d_token; }; union { int _field_2_; int d_lastIdx; // if negative, the state uses SHIFT int d_action; // may be negative (reduce), // postive (shift), or 0 (accept) size_t d_errorState; // used with Error states }; }; // $insert staticdata // Productions Info Records: PI__ const s_productionInfo[] = { {0, 0}, // not used: reduction values are negative {304, 4}, // 1: input -> directives _two_percents rules optTwo_percents {306, 1}, // 2: _two_percents (TWO_PERCENTS) -> TWO_PERCENTS {309, 1}, // 3: identifier (IDENTIFIER) -> IDENTIFIER {310, 3}, // 4: typename ('<') -> '<' identifier '>' {311, 1}, // 5: optComma (',') -> ',' {311, 0}, // 6: optComma -> {312, 1}, // 7: optNumber (NUMBER) -> NUMBER {312, 0}, // 8: optNumber -> {313, 1}, // 9: optSemiCol (';') -> ';' {313, 0}, // 10: optSemiCol -> {314, 0}, // 11: _tokenname -> {315, 2}, // 12: optTypename -> typename _tokenname {315, 1}, // 13: optTypename -> _tokenname {308, 1}, // 14: optTwo_percents (TWO_PERCENTS) -> TWO_PERCENTS {308, 0}, // 15: optTwo_percents -> {316, 1}, // 16: _baseclass_header (BASECLASS_HEADER) -> BASECLASS_HEADER {317, 1}, // 17: _baseclass_preinclude (BASECLASS_PREINCLUDE) -> BASECLASS_PREINCLUDE {318, 1}, // 18: _class_header (CLASS_HEADER) -> CLASS_HEADER {319, 1}, // 19: _class_name (CLASS_NAME) -> CLASS_NAME {320, 1}, // 20: _expect (EXPECT) -> EXPECT {321, 1}, // 21: _filenames (FILENAMES) -> FILENAMES {322, 1}, // 22: _implementation_header (IMPLEMENTATION_HEADER) -> IMPLEMENTATION_HEADER {323, 0}, // 23: _incrementPrecedence -> {324, 2}, // 24: _left (LEFT) -> LEFT _typesymbol {326, 1}, // 25: _locationstruct (LOCATIONSTRUCT) -> LOCATIONSTRUCT {327, 1}, // 26: _ltype (LTYPE) -> LTYPE {328, 1}, // 27: _namespace (NAMESPACE) -> NAMESPACE {329, 2}, // 28: _nonassoc (NONASSOC) -> NONASSOC _typesymbol {330, 1}, // 29: _parsefun_source (PARSEFUN_SOURCE) -> PARSEFUN_SOURCE {331, 0}, // 30: _pushPrecedence -> {332, 1}, // 31: _required (REQUIRED) -> REQUIRED {333, 2}, // 32: _right (RIGHT) -> RIGHT _typesymbol {334, 1}, // 33: _scanner (SCANNER) -> SCANNER {335, 1}, // 34: _scanner_class_name (SCANNER_CLASS_NAME) -> SCANNER_CLASS_NAME {336, 1}, // 35: _scanner_token_function (SCANNER_TOKEN_FUNCTION) -> SCANNER_TOKEN_FUNCTION {337, 1}, // 36: _scanner_matched_text_function (SCANNER_MATCHED_TEXT_FUNCTION) -> SCANNER_MATCHED_TEXT_FUNCTION {338, 1}, // 37: _start (START) -> START {339, 0}, // 38: _symbol_exp -> {340, 1}, // 39: 
_symbol (QUOTE) -> QUOTE {340, 2}, // 40: _symbol -> identifier optNumber {341, 3}, // 41: _symbolList -> _symbolList optComma _symbol {341, 1}, // 42: _symbolList -> _symbol {342, 3}, // 43: _symbols -> _symbol_exp _symbolList optSemiCol {343, 1}, // 44: _target_directory (TARGET_DIRECTORY) -> TARGET_DIRECTORY {344, 1}, // 45: _type (TYPE) -> TYPE {345, 1}, // 46: _stype (STYPE) -> STYPE {325, 0}, // 47: _typesymbol -> {346, 2}, // 48: _token (TOKEN) -> TOKEN _typesymbol {347, 1}, // 49: _union (UNION) -> UNION {348, 1}, // 50: _polymorphic (POLYMORPHIC) -> POLYMORPHIC {349, 1}, // 51: _typespec (':') -> ':' {350, 3}, // 52: _polyspec -> identifier _typespec identifier {351, 3}, // 53: _polyspecs (';') -> _polyspecs ';' _polyspec {351, 1}, // 54: _polyspecs -> _polyspec {352, 2}, // 55: _directiveSpec (STRING) -> _baseclass_header STRING {352, 2}, // 56: _directiveSpec (STRING) -> _baseclass_preinclude STRING {352, 2}, // 57: _directiveSpec (STRING) -> _class_header STRING {352, 2}, // 58: _directiveSpec (IDENTIFIER) -> _class_name IDENTIFIER {352, 1}, // 59: _directiveSpec (DEBUGFLAG) -> DEBUGFLAG {352, 1}, // 60: _directiveSpec (ERROR_VERBOSE) -> ERROR_VERBOSE {352, 2}, // 61: _directiveSpec (NUMBER) -> _expect NUMBER {352, 2}, // 62: _directiveSpec (STRING) -> _filenames STRING {352, 1}, // 63: _directiveSpec (FLEX) -> FLEX {352, 2}, // 64: _directiveSpec (STRING) -> _implementation_header STRING {352, 4}, // 65: _directiveSpec -> _left _incrementPrecedence optTypename _symbols {352, 3}, // 66: _directiveSpec (BLOCK) -> _locationstruct BLOCK optSemiCol {352, 1}, // 67: _directiveSpec (LSP_NEEDED) -> LSP_NEEDED {352, 2}, // 68: _directiveSpec (STRING) -> _ltype STRING {352, 2}, // 69: _directiveSpec (IDENTIFIER) -> _namespace IDENTIFIER {352, 1}, // 70: _directiveSpec (NEG_DOLLAR) -> NEG_DOLLAR {352, 1}, // 71: _directiveSpec (NOLINES) -> NOLINES {352, 4}, // 72: _directiveSpec -> _nonassoc _incrementPrecedence optTypename _symbols {352, 2}, // 73: _directiveSpec (STRING) -> _parsefun_source STRING {352, 1}, // 74: _directiveSpec (PRINT_TOKENS) -> PRINT_TOKENS {352, 2}, // 75: _directiveSpec (NUMBER) -> _required NUMBER {352, 4}, // 76: _directiveSpec -> _right _incrementPrecedence optTypename _symbols {352, 2}, // 77: _directiveSpec (STRING) -> _scanner STRING {352, 2}, // 78: _directiveSpec (STRING) -> _scanner_class_name STRING {352, 2}, // 79: _directiveSpec (STRING) -> _scanner_token_function STRING {352, 2}, // 80: _directiveSpec (STRING) -> _scanner_matched_text_function STRING {352, 2}, // 81: _directiveSpec (IDENTIFIER) -> _start IDENTIFIER {352, 2}, // 82: _directiveSpec (STRING) -> _stype STRING {352, 2}, // 83: _directiveSpec (STRING) -> _target_directory STRING {352, 4}, // 84: _directiveSpec -> _token optTypename _pushPrecedence _symbols {352, 3}, // 85: _directiveSpec -> _type typename _symbols {352, 3}, // 86: _directiveSpec (BLOCK) -> _union BLOCK optSemiCol {352, 3}, // 87: _directiveSpec -> _polymorphic _polyspecs optSemiCol {352, 1}, // 88: _directiveSpec (WEAK_TAGS) -> WEAK_TAGS {352, 1}, // 89: _directiveSpec (_error_) -> _error_ {353, 1}, // 90: _directive -> _directiveSpec {305, 2}, // 91: directives -> directives _directive {305, 0}, // 92: directives -> {354, 1}, // 93: _precSpec (IDENTIFIER) -> IDENTIFIER {354, 1}, // 94: _precSpec (QUOTE) -> QUOTE {355, 1}, // 95: _productionElement (QUOTE) -> QUOTE {355, 1}, // 96: _productionElement (IDENTIFIER) -> IDENTIFIER {355, 1}, // 97: _productionElement (BLOCK) -> BLOCK {355, 2}, // 98: _productionElement (PREC) -> 
PREC _precSpec {356, 2}, // 99: _productionElements -> _productionElements _productionElement {356, 1}, // 100: _productionElements -> _productionElement {357, 1}, // 101: _production -> _productionElements {357, 0}, // 102: _production -> {358, 1}, // 103: _productionSeparator ('|') -> '|' {359, 3}, // 104: _productionList -> _productionList _productionSeparator _production {359, 1}, // 105: _productionList -> _production {360, 2}, // 106: _ruleName (':') -> identifier ':' {361, 3}, // 107: _rule (';') -> _ruleName _productionList ';' {307, 2}, // 108: rules -> rules _rule {307, 0}, // 109: rules -> {362, 1}, // 110: input_$ -> input }; // State info and SR__ transitions for each state. SR__ s_0[] = { { { DEF_RED}, { 3} }, { { 304}, { 1} }, // input { { 305}, { 2} }, // directives { { 0}, { -92} }, }; SR__ s_1[] = { { { REQ_TOKEN}, { 2} }, { { _EOF_}, { PARSE_ACCEPT} }, { { 0}, { 0} }, }; SR__ s_2[] = { { { ERR_REQ}, { 66} }, { { 306}, { 3} }, // _two_percents { { 353}, { 4} }, // _directive { { 294}, { 5} }, // TWO_PERCENTS { { 352}, { 6} }, // _directiveSpec { { 316}, { 7} }, // _baseclass_header { { 317}, { 8} }, // _baseclass_preinclude { { 318}, { 9} }, // _class_header { { 319}, { 10} }, // _class_name { { 262}, { 11} }, // DEBUGFLAG { { 263}, { 12} }, // ERROR_VERBOSE { { 320}, { 13} }, // _expect { { 321}, { 14} }, // _filenames { { 266}, { 15} }, // FLEX { { 322}, { 16} }, // _implementation_header { { 324}, { 17} }, // _left { { 326}, { 18} }, // _locationstruct { { 271}, { 19} }, // LSP_NEEDED { { 327}, { 20} }, // _ltype { { 328}, { 21} }, // _namespace { { 274}, { 22} }, // NEG_DOLLAR { { 275}, { 23} }, // NOLINES { { 329}, { 24} }, // _nonassoc { { 330}, { 25} }, // _parsefun_source { { 281}, { 26} }, // PRINT_TOKENS { { 332}, { 27} }, // _required { { 333}, { 28} }, // _right { { 334}, { 29} }, // _scanner { { 335}, { 30} }, // _scanner_class_name { { 336}, { 31} }, // _scanner_token_function { { 337}, { 32} }, // _scanner_matched_text_function { { 338}, { 33} }, // _start { { 345}, { 34} }, // _stype { { 343}, { 35} }, // _target_directory { { 346}, { 36} }, // _token { { 344}, { 37} }, // _type { { 347}, { 38} }, // _union { { 348}, { 39} }, // _polymorphic { { 297}, { 40} }, // WEAK_TAGS { { _error_}, { 41} }, // _error_ { { 257}, { 42} }, // BASECLASS_HEADER { { 258}, { 43} }, // BASECLASS_PREINCLUDE { { 260}, { 44} }, // CLASS_HEADER { { 261}, { 45} }, // CLASS_NAME { { 264}, { 46} }, // EXPECT { { 265}, { 47} }, // FILENAMES { { 268}, { 48} }, // IMPLEMENTATION_HEADER { { 269}, { 49} }, // LEFT { { 270}, { 50} }, // LOCATIONSTRUCT { { 272}, { 51} }, // LTYPE { { 273}, { 52} }, // NAMESPACE { { 276}, { 53} }, // NONASSOC { { 278}, { 54} }, // PARSEFUN_SOURCE { { 283}, { 55} }, // REQUIRED { { 284}, { 56} }, // RIGHT { { 285}, { 57} }, // SCANNER { { 286}, { 58} }, // SCANNER_CLASS_NAME { { 288}, { 59} }, // SCANNER_TOKEN_FUNCTION { { 287}, { 60} }, // SCANNER_MATCHED_TEXT_FUNCTION { { 289}, { 61} }, // START { { 291}, { 62} }, // STYPE { { 292}, { 63} }, // TARGET_DIRECTORY { { 293}, { 64} }, // TOKEN { { 295}, { 65} }, // TYPE { { 296}, { 66} }, // UNION { { 279}, { 67} }, // POLYMORPHIC { { 0}, { 0} }, }; SR__ s_3[] = { { { DEF_RED}, { 2} }, { { 307}, { 68} }, // rules { { 0}, { -109} }, }; SR__ s_4[] = { { { DEF_RED}, { 1} }, { { 0}, { -91} }, }; SR__ s_5[] = { { { DEF_RED}, { 1} }, { { 0}, { -2} }, }; SR__ s_6[] = { { { DEF_RED}, { 1} }, { { 0}, { -90} }, }; SR__ s_7[] = { { { REQ_TOKEN}, { 2} }, { { 290}, { 69} }, // STRING { { 0}, { 0} }, }; SR__ s_8[] = { { { 
REQ_TOKEN}, { 2} }, { { 290}, { 70} }, // STRING { { 0}, { 0} }, }; SR__ s_9[] = { { { REQ_TOKEN}, { 2} }, { { 290}, { 71} }, // STRING { { 0}, { 0} }, }; SR__ s_10[] = { { { REQ_TOKEN}, { 2} }, { { 267}, { 72} }, // IDENTIFIER { { 0}, { 0} }, }; SR__ s_11[] = { { { DEF_RED}, { 1} }, { { 0}, { -59} }, }; SR__ s_12[] = { { { DEF_RED}, { 1} }, { { 0}, { -60} }, }; SR__ s_13[] = { { { REQ_TOKEN}, { 2} }, { { 277}, { 73} }, // NUMBER { { 0}, { 0} }, }; SR__ s_14[] = { { { REQ_TOKEN}, { 2} }, { { 290}, { 74} }, // STRING { { 0}, { 0} }, }; SR__ s_15[] = { { { DEF_RED}, { 1} }, { { 0}, { -63} }, }; SR__ s_16[] = { { { REQ_TOKEN}, { 2} }, { { 290}, { 75} }, // STRING { { 0}, { 0} }, }; SR__ s_17[] = { { { DEF_RED}, { 2} }, { { 323}, { 76} }, // _incrementPrecedence { { 0}, { -23} }, }; SR__ s_18[] = { { { REQ_TOKEN}, { 2} }, { { 259}, { 77} }, // BLOCK { { 0}, { 0} }, }; SR__ s_19[] = { { { DEF_RED}, { 1} }, { { 0}, { -67} }, }; SR__ s_20[] = { { { REQ_TOKEN}, { 2} }, { { 290}, { 78} }, // STRING { { 0}, { 0} }, }; SR__ s_21[] = { { { REQ_TOKEN}, { 2} }, { { 267}, { 79} }, // IDENTIFIER { { 0}, { 0} }, }; SR__ s_22[] = { { { DEF_RED}, { 1} }, { { 0}, { -70} }, }; SR__ s_23[] = { { { DEF_RED}, { 1} }, { { 0}, { -71} }, }; SR__ s_24[] = { { { DEF_RED}, { 2} }, { { 323}, { 80} }, // _incrementPrecedence { { 0}, { -23} }, }; SR__ s_25[] = { { { REQ_TOKEN}, { 2} }, { { 290}, { 81} }, // STRING { { 0}, { 0} }, }; SR__ s_26[] = { { { DEF_RED}, { 1} }, { { 0}, { -74} }, }; SR__ s_27[] = { { { REQ_TOKEN}, { 2} }, { { 277}, { 82} }, // NUMBER { { 0}, { 0} }, }; SR__ s_28[] = { { { DEF_RED}, { 2} }, { { 323}, { 83} }, // _incrementPrecedence { { 0}, { -23} }, }; SR__ s_29[] = { { { REQ_TOKEN}, { 2} }, { { 290}, { 84} }, // STRING { { 0}, { 0} }, }; SR__ s_30[] = { { { REQ_TOKEN}, { 2} }, { { 290}, { 85} }, // STRING { { 0}, { 0} }, }; SR__ s_31[] = { { { REQ_TOKEN}, { 2} }, { { 290}, { 86} }, // STRING { { 0}, { 0} }, }; SR__ s_32[] = { { { REQ_TOKEN}, { 2} }, { { 290}, { 87} }, // STRING { { 0}, { 0} }, }; SR__ s_33[] = { { { REQ_TOKEN}, { 2} }, { { 267}, { 88} }, // IDENTIFIER { { 0}, { 0} }, }; SR__ s_34[] = { { { REQ_TOKEN}, { 2} }, { { 290}, { 89} }, // STRING { { 0}, { 0} }, }; SR__ s_35[] = { { { REQ_TOKEN}, { 2} }, { { 290}, { 90} }, // STRING { { 0}, { 0} }, }; SR__ s_36[] = { { { REQ_DEF}, { 5} }, { { 315}, { 91} }, // optTypename { { 310}, { 92} }, // typename { { 314}, { 93} }, // _tokenname { { 60}, { 94} }, // '<' { { 0}, { -11} }, }; SR__ s_37[] = { { { REQ_TOKEN}, { 3} }, { { 310}, { 95} }, // typename { { 60}, { 94} }, // '<' { { 0}, { 0} }, }; SR__ s_38[] = { { { REQ_TOKEN}, { 2} }, { { 259}, { 96} }, // BLOCK { { 0}, { 0} }, }; SR__ s_39[] = { { { REQ_TOKEN}, { 5} }, { { 351}, { 97} }, // _polyspecs { { 350}, { 98} }, // _polyspec { { 309}, { 99} }, // identifier { { 267}, { 100} }, // IDENTIFIER { { 0}, { 0} }, }; SR__ s_40[] = { { { DEF_RED}, { 1} }, { { 0}, { -88} }, }; SR__ s_41[] = { { { DEF_RED}, { 1} }, { { 0}, { -89} }, }; SR__ s_42[] = { { { DEF_RED}, { 1} }, { { 0}, { -16} }, }; SR__ s_43[] = { { { DEF_RED}, { 1} }, { { 0}, { -17} }, }; SR__ s_44[] = { { { DEF_RED}, { 1} }, { { 0}, { -18} }, }; SR__ s_45[] = { { { DEF_RED}, { 1} }, { { 0}, { -19} }, }; SR__ s_46[] = { { { DEF_RED}, { 1} }, { { 0}, { -20} }, }; SR__ s_47[] = { { { DEF_RED}, { 1} }, { { 0}, { -21} }, }; SR__ s_48[] = { { { DEF_RED}, { 1} }, { { 0}, { -22} }, }; SR__ s_49[] = { { { DEF_RED}, { 2} }, { { 325}, { 101} }, // _typesymbol { { 0}, { -47} }, }; SR__ s_50[] = { { { DEF_RED}, { 1} }, { { 0}, { -25} }, }; 
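// (Worked reading example added here, not part of the generated tables: s_50
//  above has type DEF_RED and d_lastIdx 1; its last entry {0, -25} carries a
//  negative d_action, so on whatever token is current the parser performs the
//  default reduction by rule 25, i.e. s_productionInfo[25]:
//  _locationstruct (LOCATIONSTRUCT) -> LOCATIONSTRUCT. The s_51 array below is
//  read the same way and reduces by rule 26, _ltype (LTYPE) -> LTYPE.)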
SR__ s_51[] = { { { DEF_RED}, { 1} }, { { 0}, { -26} }, }; SR__ s_52[] = { { { DEF_RED}, { 1} }, { { 0}, { -27} }, }; SR__ s_53[] = { { { DEF_RED}, { 2} }, { { 325}, { 102} }, // _typesymbol { { 0}, { -47} }, }; SR__ s_54[] = { { { DEF_RED}, { 1} }, { { 0}, { -29} }, }; SR__ s_55[] = { { { DEF_RED}, { 1} }, { { 0}, { -31} }, }; SR__ s_56[] = { { { DEF_RED}, { 2} }, { { 325}, { 103} }, // _typesymbol { { 0}, { -47} }, }; SR__ s_57[] = { { { DEF_RED}, { 1} }, { { 0}, { -33} }, }; SR__ s_58[] = { { { DEF_RED}, { 1} }, { { 0}, { -34} }, }; SR__ s_59[] = { { { DEF_RED}, { 1} }, { { 0}, { -35} }, }; SR__ s_60[] = { { { DEF_RED}, { 1} }, { { 0}, { -36} }, }; SR__ s_61[] = { { { DEF_RED}, { 1} }, { { 0}, { -37} }, }; SR__ s_62[] = { { { DEF_RED}, { 1} }, { { 0}, { -46} }, }; SR__ s_63[] = { { { DEF_RED}, { 1} }, { { 0}, { -44} }, }; SR__ s_64[] = { { { DEF_RED}, { 2} }, { { 325}, { 104} }, // _typesymbol { { 0}, { -47} }, }; SR__ s_65[] = { { { DEF_RED}, { 1} }, { { 0}, { -45} }, }; SR__ s_66[] = { { { DEF_RED}, { 1} }, { { 0}, { -49} }, }; SR__ s_67[] = { { { DEF_RED}, { 1} }, { { 0}, { -50} }, }; SR__ s_68[] = { { { REQ_DEF}, { 7} }, { { 308}, { 105} }, // optTwo_percents { { 361}, { 106} }, // _rule { { 294}, { 107} }, // TWO_PERCENTS { { 360}, { 108} }, // _ruleName { { 309}, { 109} }, // identifier { { 267}, { 100} }, // IDENTIFIER { { 0}, { -15} }, }; SR__ s_69[] = { { { DEF_RED}, { 1} }, { { 0}, { -55} }, }; SR__ s_70[] = { { { DEF_RED}, { 1} }, { { 0}, { -56} }, }; SR__ s_71[] = { { { DEF_RED}, { 1} }, { { 0}, { -57} }, }; SR__ s_72[] = { { { DEF_RED}, { 1} }, { { 0}, { -58} }, }; SR__ s_73[] = { { { DEF_RED}, { 1} }, { { 0}, { -61} }, }; SR__ s_74[] = { { { DEF_RED}, { 1} }, { { 0}, { -62} }, }; SR__ s_75[] = { { { DEF_RED}, { 1} }, { { 0}, { -64} }, }; SR__ s_76[] = { { { REQ_DEF}, { 5} }, { { 315}, { 110} }, // optTypename { { 310}, { 92} }, // typename { { 314}, { 93} }, // _tokenname { { 60}, { 94} }, // '<' { { 0}, { -11} }, }; SR__ s_77[] = { { { REQ_DEF}, { 3} }, { { 313}, { 111} }, // optSemiCol { { 59}, { 112} }, // ';' { { 0}, { -10} }, }; SR__ s_78[] = { { { DEF_RED}, { 1} }, { { 0}, { -68} }, }; SR__ s_79[] = { { { DEF_RED}, { 1} }, { { 0}, { -69} }, }; SR__ s_80[] = { { { REQ_DEF}, { 5} }, { { 315}, { 113} }, // optTypename { { 310}, { 92} }, // typename { { 314}, { 93} }, // _tokenname { { 60}, { 94} }, // '<' { { 0}, { -11} }, }; SR__ s_81[] = { { { DEF_RED}, { 1} }, { { 0}, { -73} }, }; SR__ s_82[] = { { { DEF_RED}, { 1} }, { { 0}, { -75} }, }; SR__ s_83[] = { { { REQ_DEF}, { 5} }, { { 315}, { 114} }, // optTypename { { 310}, { 92} }, // typename { { 314}, { 93} }, // _tokenname { { 60}, { 94} }, // '<' { { 0}, { -11} }, }; SR__ s_84[] = { { { DEF_RED}, { 1} }, { { 0}, { -77} }, }; SR__ s_85[] = { { { DEF_RED}, { 1} }, { { 0}, { -78} }, }; SR__ s_86[] = { { { DEF_RED}, { 1} }, { { 0}, { -79} }, }; SR__ s_87[] = { { { DEF_RED}, { 1} }, { { 0}, { -80} }, }; SR__ s_88[] = { { { DEF_RED}, { 1} }, { { 0}, { -81} }, }; SR__ s_89[] = { { { DEF_RED}, { 1} }, { { 0}, { -82} }, }; SR__ s_90[] = { { { DEF_RED}, { 1} }, { { 0}, { -83} }, }; SR__ s_91[] = { { { DEF_RED}, { 2} }, { { 331}, { 115} }, // _pushPrecedence { { 0}, { -30} }, }; SR__ s_92[] = { { { DEF_RED}, { 2} }, { { 314}, { 116} }, // _tokenname { { 0}, { -11} }, }; SR__ s_93[] = { { { DEF_RED}, { 1} }, { { 0}, { -13} }, }; SR__ s_94[] = { { { REQ_TOKEN}, { 3} }, { { 309}, { 117} }, // identifier { { 267}, { 100} }, // IDENTIFIER { { 0}, { 0} }, }; SR__ s_95[] = { { { DEF_RED}, { 3} }, { { 342}, { 118} }, // _symbols { { 
339}, { 119} }, // _symbol_exp { { 0}, { -38} }, }; SR__ s_96[] = { { { REQ_DEF}, { 3} }, { { 313}, { 120} }, // optSemiCol { { 59}, { 112} }, // ';' { { 0}, { -10} }, }; SR__ s_97[] = { { { REQ_DEF}, { 3} }, { { 313}, { 121} }, // optSemiCol { { 59}, { 122} }, // ';' { { 0}, { -10} }, }; SR__ s_98[] = { { { DEF_RED}, { 1} }, { { 0}, { -54} }, }; SR__ s_99[] = { { { REQ_TOKEN}, { 3} }, { { 349}, { 123} }, // _typespec { { 58}, { 124} }, // ':' { { 0}, { 0} }, }; SR__ s_100[] = { { { DEF_RED}, { 1} }, { { 0}, { -3} }, }; SR__ s_101[] = { { { DEF_RED}, { 1} }, { { 0}, { -24} }, }; SR__ s_102[] = { { { DEF_RED}, { 1} }, { { 0}, { -28} }, }; SR__ s_103[] = { { { DEF_RED}, { 1} }, { { 0}, { -32} }, }; SR__ s_104[] = { { { DEF_RED}, { 1} }, { { 0}, { -48} }, }; SR__ s_105[] = { { { DEF_RED}, { 1} }, { { 0}, { -1} }, }; SR__ s_106[] = { { { DEF_RED}, { 1} }, { { 0}, { -108} }, }; SR__ s_107[] = { { { DEF_RED}, { 1} }, { { 0}, { -14} }, }; SR__ s_108[] = { { { REQ_DEF}, { 9} }, { { 359}, { 125} }, // _productionList { { 357}, { 126} }, // _production { { 356}, { 127} }, // _productionElements { { 355}, { 128} }, // _productionElement { { 282}, { 129} }, // QUOTE { { 267}, { 130} }, // IDENTIFIER { { 259}, { 131} }, // BLOCK { { 280}, { 132} }, // PREC { { 0}, { -102} }, }; SR__ s_109[] = { { { REQ_TOKEN}, { 2} }, { { 58}, { 133} }, // ':' { { 0}, { 0} }, }; SR__ s_110[] = { { { DEF_RED}, { 3} }, { { 342}, { 134} }, // _symbols { { 339}, { 119} }, // _symbol_exp { { 0}, { -38} }, }; SR__ s_111[] = { { { DEF_RED}, { 1} }, { { 0}, { -66} }, }; SR__ s_112[] = { { { DEF_RED}, { 1} }, { { 0}, { -9} }, }; SR__ s_113[] = { { { DEF_RED}, { 3} }, { { 342}, { 135} }, // _symbols { { 339}, { 119} }, // _symbol_exp { { 0}, { -38} }, }; SR__ s_114[] = { { { DEF_RED}, { 3} }, { { 342}, { 136} }, // _symbols { { 339}, { 119} }, // _symbol_exp { { 0}, { -38} }, }; SR__ s_115[] = { { { DEF_RED}, { 3} }, { { 342}, { 137} }, // _symbols { { 339}, { 119} }, // _symbol_exp { { 0}, { -38} }, }; SR__ s_116[] = { { { DEF_RED}, { 1} }, { { 0}, { -12} }, }; SR__ s_117[] = { { { REQ_TOKEN}, { 2} }, { { 62}, { 138} }, // '>' { { 0}, { 0} }, }; SR__ s_118[] = { { { DEF_RED}, { 1} }, { { 0}, { -85} }, }; SR__ s_119[] = { { { REQ_TOKEN}, { 6} }, { { 341}, { 139} }, // _symbolList { { 340}, { 140} }, // _symbol { { 282}, { 141} }, // QUOTE { { 309}, { 142} }, // identifier { { 267}, { 100} }, // IDENTIFIER { { 0}, { 0} }, }; SR__ s_120[] = { { { DEF_RED}, { 1} }, { { 0}, { -86} }, }; SR__ s_121[] = { { { DEF_RED}, { 1} }, { { 0}, { -87} }, }; SR__ s_122[] = { { { REQ_DEF}, { 4} }, { { 350}, { 143} }, // _polyspec { { 309}, { 99} }, // identifier { { 267}, { 100} }, // IDENTIFIER { { 0}, { -9} }, }; SR__ s_123[] = { { { REQ_TOKEN}, { 3} }, { { 309}, { 144} }, // identifier { { 267}, { 100} }, // IDENTIFIER { { 0}, { 0} }, }; SR__ s_124[] = { { { DEF_RED}, { 1} }, { { 0}, { -51} }, }; SR__ s_125[] = { { { REQ_TOKEN}, { 4} }, { { 59}, { 145} }, // ';' { { 358}, { 146} }, // _productionSeparator { { 124}, { 147} }, // '|' { { 0}, { 0} }, }; SR__ s_126[] = { { { DEF_RED}, { 1} }, { { 0}, { -105} }, }; SR__ s_127[] = { { { REQ_DEF}, { 6} }, { { 355}, { 148} }, // _productionElement { { 282}, { 129} }, // QUOTE { { 267}, { 130} }, // IDENTIFIER { { 259}, { 131} }, // BLOCK { { 280}, { 132} }, // PREC { { 0}, { -101} }, }; SR__ s_128[] = { { { DEF_RED}, { 1} }, { { 0}, { -100} }, }; SR__ s_129[] = { { { DEF_RED}, { 1} }, { { 0}, { -95} }, }; SR__ s_130[] = { { { DEF_RED}, { 1} }, { { 0}, { -96} }, }; SR__ s_131[] = { { { DEF_RED}, { 1} 
}, { { 0}, { -97} }, }; SR__ s_132[] = { { { REQ_TOKEN}, { 4} }, { { 354}, { 149} }, // _precSpec { { 267}, { 150} }, // IDENTIFIER { { 282}, { 151} }, // QUOTE { { 0}, { 0} }, }; SR__ s_133[] = { { { DEF_RED}, { 1} }, { { 0}, { -106} }, }; SR__ s_134[] = { { { DEF_RED}, { 1} }, { { 0}, { -65} }, }; SR__ s_135[] = { { { DEF_RED}, { 1} }, { { 0}, { -72} }, }; SR__ s_136[] = { { { DEF_RED}, { 1} }, { { 0}, { -76} }, }; SR__ s_137[] = { { { DEF_RED}, { 1} }, { { 0}, { -84} }, }; SR__ s_138[] = { { { DEF_RED}, { 1} }, { { 0}, { -4} }, }; SR__ s_139[] = { { { REQ_DEF}, { 41} }, { { 313}, { 152} }, // optSemiCol { { 311}, { 153} }, // optComma { { 59}, { 112} }, // ';' { { 44}, { 154} }, // ',' { { _error_}, { -10} }, // _error_ { { 257}, { -10} }, // BASECLASS_HEADER { { 258}, { -10} }, // BASECLASS_PREINCLUDE { { 260}, { -10} }, // CLASS_HEADER { { 261}, { -10} }, // CLASS_NAME { { 262}, { -10} }, // DEBUGFLAG { { 263}, { -10} }, // ERROR_VERBOSE { { 264}, { -10} }, // EXPECT { { 265}, { -10} }, // FILENAMES { { 266}, { -10} }, // FLEX { { 268}, { -10} }, // IMPLEMENTATION_HEADER { { 269}, { -10} }, // LEFT { { 270}, { -10} }, // LOCATIONSTRUCT { { 271}, { -10} }, // LSP_NEEDED { { 272}, { -10} }, // LTYPE { { 273}, { -10} }, // NAMESPACE { { 274}, { -10} }, // NEG_DOLLAR { { 275}, { -10} }, // NOLINES { { 276}, { -10} }, // NONASSOC { { 278}, { -10} }, // PARSEFUN_SOURCE { { 279}, { -10} }, // POLYMORPHIC { { 281}, { -10} }, // PRINT_TOKENS { { 283}, { -10} }, // REQUIRED { { 284}, { -10} }, // RIGHT { { 285}, { -10} }, // SCANNER { { 286}, { -10} }, // SCANNER_CLASS_NAME { { 287}, { -10} }, // SCANNER_MATCHED_TEXT_FUNCTION { { 288}, { -10} }, // SCANNER_TOKEN_FUNCTION { { 289}, { -10} }, // START { { 291}, { -10} }, // STYPE { { 292}, { -10} }, // TARGET_DIRECTORY { { 293}, { -10} }, // TOKEN { { 294}, { -10} }, // TWO_PERCENTS { { 295}, { -10} }, // TYPE { { 296}, { -10} }, // UNION { { 297}, { -10} }, // WEAK_TAGS { { 0}, { -6} }, }; SR__ s_140[] = { { { DEF_RED}, { 1} }, { { 0}, { -42} }, }; SR__ s_141[] = { { { DEF_RED}, { 1} }, { { 0}, { -39} }, }; SR__ s_142[] = { { { REQ_DEF}, { 3} }, { { 312}, { 155} }, // optNumber { { 277}, { 156} }, // NUMBER { { 0}, { -8} }, }; SR__ s_143[] = { { { DEF_RED}, { 1} }, { { 0}, { -53} }, }; SR__ s_144[] = { { { DEF_RED}, { 1} }, { { 0}, { -52} }, }; SR__ s_145[] = { { { DEF_RED}, { 1} }, { { 0}, { -107} }, }; SR__ s_146[] = { { { REQ_DEF}, { 8} }, { { 357}, { 157} }, // _production { { 356}, { 127} }, // _productionElements { { 355}, { 128} }, // _productionElement { { 282}, { 129} }, // QUOTE { { 267}, { 130} }, // IDENTIFIER { { 259}, { 131} }, // BLOCK { { 280}, { 132} }, // PREC { { 0}, { -102} }, }; SR__ s_147[] = { { { DEF_RED}, { 1} }, { { 0}, { -103} }, }; SR__ s_148[] = { { { DEF_RED}, { 1} }, { { 0}, { -99} }, }; SR__ s_149[] = { { { DEF_RED}, { 1} }, { { 0}, { -98} }, }; SR__ s_150[] = { { { DEF_RED}, { 1} }, { { 0}, { -93} }, }; SR__ s_151[] = { { { DEF_RED}, { 1} }, { { 0}, { -94} }, }; SR__ s_152[] = { { { DEF_RED}, { 1} }, { { 0}, { -43} }, }; SR__ s_153[] = { { { REQ_TOKEN}, { 5} }, { { 340}, { 158} }, // _symbol { { 282}, { 141} }, // QUOTE { { 309}, { 142} }, // identifier { { 267}, { 100} }, // IDENTIFIER { { 0}, { 0} }, }; SR__ s_154[] = { { { DEF_RED}, { 1} }, { { 0}, { -5} }, }; SR__ s_155[] = { { { DEF_RED}, { 1} }, { { 0}, { -40} }, }; SR__ s_156[] = { { { DEF_RED}, { 1} }, { { 0}, { -7} }, }; SR__ s_157[] = { { { DEF_RED}, { 1} }, { { 0}, { -104} }, }; SR__ s_158[] = { { { DEF_RED}, { 1} }, { { 0}, { -41} }, }; // State 
array: SR__ *s_state[] = { s_0, s_1, s_2, s_3, s_4, s_5, s_6, s_7, s_8, s_9, s_10, s_11, s_12, s_13, s_14, s_15, s_16, s_17, s_18, s_19, s_20, s_21, s_22, s_23, s_24, s_25, s_26, s_27, s_28, s_29, s_30, s_31, s_32, s_33, s_34, s_35, s_36, s_37, s_38, s_39, s_40, s_41, s_42, s_43, s_44, s_45, s_46, s_47, s_48, s_49, s_50, s_51, s_52, s_53, s_54, s_55, s_56, s_57, s_58, s_59, s_60, s_61, s_62, s_63, s_64, s_65, s_66, s_67, s_68, s_69, s_70, s_71, s_72, s_73, s_74, s_75, s_76, s_77, s_78, s_79, s_80, s_81, s_82, s_83, s_84, s_85, s_86, s_87, s_88, s_89, s_90, s_91, s_92, s_93, s_94, s_95, s_96, s_97, s_98, s_99, s_100, s_101, s_102, s_103, s_104, s_105, s_106, s_107, s_108, s_109, s_110, s_111, s_112, s_113, s_114, s_115, s_116, s_117, s_118, s_119, s_120, s_121, s_122, s_123, s_124, s_125, s_126, s_127, s_128, s_129, s_130, s_131, s_132, s_133, s_134, s_135, s_136, s_137, s_138, s_139, s_140, s_141, s_142, s_143, s_144, s_145, s_146, s_147, s_148, s_149, s_150, s_151, s_152, s_153, s_154, s_155, s_156, s_157, s_158, }; typedef std::unordered_map SMap; typedef SMap::value_type SMapVal; SMapVal s_symArr[] = { SMapVal(-2, "_UNDETERMINED_"), // predefined symbols SMapVal(-1, "_EOF_"), SMapVal(256, "_error_"), SMapVal(257, "BASECLASS_HEADER"), SMapVal(258, "BASECLASS_PREINCLUDE"), SMapVal(259, "BLOCK"), SMapVal(260, "CLASS_HEADER"), SMapVal(261, "CLASS_NAME"), SMapVal(262, "DEBUGFLAG"), SMapVal(263, "ERROR_VERBOSE"), SMapVal(264, "EXPECT"), SMapVal(265, "FILENAMES"), SMapVal(266, "FLEX"), SMapVal(267, "IDENTIFIER"), SMapVal(268, "IMPLEMENTATION_HEADER"), SMapVal(269, "LEFT"), SMapVal(270, "LOCATIONSTRUCT"), SMapVal(271, "LSP_NEEDED"), SMapVal(272, "LTYPE"), SMapVal(273, "NAMESPACE"), SMapVal(274, "NEG_DOLLAR"), SMapVal(275, "NOLINES"), SMapVal(276, "NONASSOC"), SMapVal(277, "NUMBER"), SMapVal(278, "PARSEFUN_SOURCE"), SMapVal(279, "POLYMORPHIC"), SMapVal(280, "PREC"), SMapVal(281, "PRINT_TOKENS"), SMapVal(282, "QUOTE"), SMapVal(283, "REQUIRED"), SMapVal(284, "RIGHT"), SMapVal(285, "SCANNER"), SMapVal(286, "SCANNER_CLASS_NAME"), SMapVal(287, "SCANNER_MATCHED_TEXT_FUNCTION"), SMapVal(288, "SCANNER_TOKEN_FUNCTION"), SMapVal(289, "START"), SMapVal(290, "STRING"), SMapVal(291, "STYPE"), SMapVal(292, "TARGET_DIRECTORY"), SMapVal(293, "TOKEN"), SMapVal(294, "TWO_PERCENTS"), SMapVal(295, "TYPE"), SMapVal(296, "UNION"), SMapVal(297, "WEAK_TAGS"), SMapVal(304, "input"), SMapVal(305, "directives"), SMapVal(306, "_two_percents"), SMapVal(307, "rules"), SMapVal(308, "optTwo_percents"), SMapVal(309, "identifier"), SMapVal(310, "typename"), SMapVal(311, "optComma"), SMapVal(312, "optNumber"), SMapVal(313, "optSemiCol"), SMapVal(314, "_tokenname"), SMapVal(315, "optTypename"), SMapVal(316, "_baseclass_header"), SMapVal(317, "_baseclass_preinclude"), SMapVal(318, "_class_header"), SMapVal(319, "_class_name"), SMapVal(320, "_expect"), SMapVal(321, "_filenames"), SMapVal(322, "_implementation_header"), SMapVal(323, "_incrementPrecedence"), SMapVal(324, "_left"), SMapVal(325, "_typesymbol"), SMapVal(326, "_locationstruct"), SMapVal(327, "_ltype"), SMapVal(328, "_namespace"), SMapVal(329, "_nonassoc"), SMapVal(330, "_parsefun_source"), SMapVal(331, "_pushPrecedence"), SMapVal(332, "_required"), SMapVal(333, "_right"), SMapVal(334, "_scanner"), SMapVal(335, "_scanner_class_name"), SMapVal(336, "_scanner_token_function"), SMapVal(337, "_scanner_matched_text_function"), SMapVal(338, "_start"), SMapVal(339, "_symbol_exp"), SMapVal(340, "_symbol"), SMapVal(341, "_symbolList"), SMapVal(342, "_symbols"), SMapVal(343, 
"_target_directory"), SMapVal(344, "_type"), SMapVal(345, "_stype"), SMapVal(346, "_token"), SMapVal(347, "_union"), SMapVal(348, "_polymorphic"), SMapVal(349, "_typespec"), SMapVal(350, "_polyspec"), SMapVal(351, "_polyspecs"), SMapVal(352, "_directiveSpec"), SMapVal(353, "_directive"), SMapVal(354, "_precSpec"), SMapVal(355, "_productionElement"), SMapVal(356, "_productionElements"), SMapVal(357, "_production"), SMapVal(358, "_productionSeparator"), SMapVal(359, "_productionList"), SMapVal(360, "_ruleName"), SMapVal(361, "_rule"), SMapVal(362, "input_$"), }; SMap s_symbol ( s_symArr, s_symArr + sizeof(s_symArr) / sizeof(SMapVal) ); } // anonymous namespace ends // If the parsing function call uses arguments, then provide an overloaded // function. The code below doesn't rely on parameters, so no arguments are // required. Furthermore, parse uses a function try block to allow us to do // ACCEPT and ABORT from anywhere, even from within members called by actions, // simply throwing the appropriate exceptions. ParserBase::ParserBase() : d_stackIdx__(-1), // $insert debuginit d_debug__(true), d_nErrors__(0), // $insert requiredtokens d_requiredTokens__(0), d_acceptedTokens__(d_requiredTokens__), d_token__(_UNDETERMINED_), d_nextToken__(_UNDETERMINED_) {} // $insert debugfunctions std::ostringstream ParserBase::s_out__; std::ostream &ParserBase::dflush__(std::ostream &out) { std::ostringstream &s_out__ = dynamic_cast(out); std::cout << " " << s_out__.str() << std::flush; s_out__.clear(); s_out__.str(""); return out; } std::string ParserBase::stype__(char const *pre, STYPE__ const &semVal, char const *post) const { return ""; } std::string ParserBase::symbol__(int value) const { using namespace std; ostringstream ostr; SMap::const_iterator it = s_symbol.find(value); if (it != s_symbol.end()) ostr << '\'' << it->second << '\''; else if (isprint(value)) ostr << '`' << static_cast(value) << "' (" << value << ')'; else ostr << "'\\x" << setfill('0') << hex << setw(2) << value << '\''; return ostr.str(); } void Parser::print__() { // $insert print enum { _UNDETERMINED_ = -2 }; std::cout << "Token: " << symbol__(d_token__) << ", text: `"; if (d_token__ == _UNDETERMINED_) std::cout << "'\n"; else std::cout << d_scanner.matched() << "'\n"; } void ParserBase::clearin() { d_token__ = d_nextToken__ = _UNDETERMINED_; } void ParserBase::push__(size_t state) { if (static_cast(d_stackIdx__ + 1) == d_stateStack__.size()) { size_t newSize = d_stackIdx__ + STACK_EXPANSION; d_stateStack__.resize(newSize); d_valueStack__.resize(newSize); } ++d_stackIdx__; d_stateStack__[d_stackIdx__] = d_state__ = state; *(d_vsp__ = &d_valueStack__[d_stackIdx__]) = d_val__; // $insert debug if (d_debug__) s_out__ << "push(state " << state << stype__(", semantic TOS = ", d_val__, ")") << ')' << "\n" << dflush__; } void ParserBase::popToken__() { d_token__ = d_nextToken__; d_val__ = d_nextVal__; d_nextVal__ = STYPE__(); d_nextToken__ = _UNDETERMINED_; } void ParserBase::pushToken__(int token) { d_nextToken__ = d_token__; d_nextVal__ = d_val__; d_token__ = token; } void ParserBase::pop__(size_t count) { // $insert debug if (d_debug__) s_out__ << "pop(" << count << ") from stack having size " << (d_stackIdx__ + 1) << "\n" << dflush__; if (d_stackIdx__ < static_cast(count)) { // $insert debug if (d_debug__) s_out__ << "Terminating parse(): unrecoverable input error at token " << symbol__(d_token__) << "\n" << dflush__; ABORT(); } d_stackIdx__ -= count; d_state__ = d_stateStack__[d_stackIdx__]; d_vsp__ = &d_valueStack__[d_stackIdx__]; 
// $insert debug if (d_debug__) s_out__ << "pop(): next state: " << d_state__ << ", token: " << symbol__(d_token__) ; // $insert debug if (d_debug__) s_out__ << stype__("semantic: ", d_val__) << "\n" << dflush__; } inline size_t ParserBase::top__() const { return d_stateStack__[d_stackIdx__]; } void Parser::executeAction(int production) try { if (d_token__ != _UNDETERMINED_) pushToken__(d_token__); // save an already available token // $insert debug if (d_debug__) s_out__ << "executeAction(): of rule " << production ; // $insert debug if (d_debug__) s_out__ << stype__(", semantic [TOS]: ", d_val__) << " ..." << "\n" << dflush__; switch (production) { // $insert actioncases case 2: { expectRules(); } break; case 3: { d_val__.get() = d_matched; } break; case 4: { d_field = d_vsp__[-1].data(); } break; case 7: { d_val__.get() = true; } break; case 8: { d_val__.get() = false; } break; case 11: { d_expect = "token name"; } break; case 13: { d_field.clear(); } break; case 14: { wmsg << "Ignoring all input beyond the second %% token" << endl; ACCEPT(); } break; case 16: { d_expect = "baseclass header name"; } break; case 17: { d_expect = "baseclass pre-include name"; } break; case 18: { d_expect = "class header name"; } break; case 19: { d_expect = "class name"; } break; case 20: { d_expect = "number (of conflicts)"; } break; case 21: { d_expect = "generic name of files"; } break; case 22: { d_expect = "implementation header name"; } break; case 23: { Terminal::incrementPrecedence(); } break; case 24: { d_association = Terminal::LEFT; } break; case 25: { d_expect = "Location struct definition"; } break; case 26: { d_expect = "Location type specification"; } break; case 27: { d_expect = "Namespace identifier"; } break; case 28: { d_association = Terminal::NONASSOC; } break; case 29: { d_expect = "File name for the parse() member"; } break; case 30: { d_val__.get() = Terminal::sPrecedence(); Terminal::resetPrecedence(); } break; case 31: { d_expect = "Required number of tokens between errors"; } break; case 32: { d_association = Terminal::RIGHT; } break; case 33: { d_expect = "Path to the scanner header filename"; } break; case 34: { d_expect = "Name of the Scanner class"; } break; case 35: { d_expect = "Scanner function returning the next token"; } break; case 36: { d_expect = "Scanner function returning the matched text"; } break; case 37: { d_expect = "Start rule" ; } break; case 38: { d_expect = "identifier or character-constant"; } break; case 39: { defineTerminal(d_scanner.canonicalQuote(), Symbol::CHAR_TERMINAL); } break; case 40: { defineTokenName(d_vsp__[-1].data(), d_vsp__[0].data()); } break; case 44: { d_expect = "target directory"; } break; case 45: { d_expect = "type-name"; d_typeDirective = true; } break; case 46: { d_expect = "STYPE type name" ; } break; case 47: { d_expect = "opt. 
identifier(s) or char constant(s)"; } break; case 48: { d_association = Terminal::UNDEFINED; } break; case 49: { d_expect = "Semantic value union definition"; } break; case 50: { setPolymorphicDecl(); } break; case 51: { d_scanner.beginTypeSpec(); } break; case 52: { addPolymorphic(d_vsp__[-2].data(), d_vsp__[0].data()); } break; case 55: { d_options.setBaseClassHeader(); } break; case 56: { d_options.setPreInclude(); } break; case 57: { d_options.setClassHeader(); } break; case 58: { d_options.setClassName(); } break; case 59: { d_options.setDebug(); } break; case 60: { d_options.setErrorVerbose(); } break; case 61: { setExpectedConflicts(); } break; case 62: { d_options.setGenericFilename(); } break; case 63: { d_options.setFlex(); } break; case 64: { d_options.setImplementationHeader(); } break; case 66: { d_options.setLocationDecl(d_scanner.block().str()); } break; case 67: { d_options.setLspNeeded(); } break; case 68: { d_options.setLtype(); } break; case 69: { d_options.setNamespace(); } break; case 70: { setNegativeDollarIndices(); } break; case 71: { d_options.unsetLines(); } break; case 73: { d_options.setParsefunSource(); } break; case 74: { d_options.setPrintTokens(); } break; case 75: { d_options.setRequiredTokens(d_scanner.number()); } break; case 77: { d_options.setScannerInclude(); } break; case 78: { d_options.setScannerClassName(); } break; case 79: { d_options.setScannerTokenFunction(); } break; case 80: { d_options.setScannerMatchedTextFunction(); } break; case 81: { setStart(); } break; case 82: { d_options.setStype(); } break; case 83: { d_options.setTargetDirectory(); } break; case 84: { Terminal::set_sPrecedence(d_vsp__[-1].data()); } break; case 86: { setUnionDecl(); } break; case 88: { d_options.unsetStrongTags(); } break; case 90: { d_expect.erase(); d_typeDirective = false; } break; case 93: { d_val__.get() = static_cast(IDENTIFIER); } break; case 94: { d_val__.get() = static_cast(QUOTE); } break; case 95: { d_val__ = useTerminal(); } break; case 96: { d_val__ = useSymbol(); } break; case 97: { d_val__ = d_scanner.block(); } break; case 98: { setPrecedence(d_vsp__[0].data()); d_val__ = STYPE__(); } break; case 99: { d_val__ = handleProductionElements(d_vsp__[-1], d_vsp__[0]); } break; case 100: { d_val__ = d_vsp__[0]; } break; case 101: { handleProductionElement(d_vsp__[0]); } break; case 102: { checkEmptyBlocktype(); } break; case 103: { d_rules.addProduction(d_scanner.lineNr()); } break; case 106: { openRule(d_vsp__[-1].data()); } break; } // $insert debug if (d_debug__) s_out__ << "... action of rule " << production << " completed" ; // $insert debug if (d_debug__) s_out__ << stype__(", semantic: ", d_val__) << "\n" << dflush__; } catch (std::exception const &exc) { exceptionHandler__(exc); } inline void ParserBase::reduce__(PI__ const &pi) { d_token__ = pi.d_nonTerm; pop__(pi.d_size); // $insert debug if (d_debug__) s_out__ << "reduce(): by rule " << (&pi - s_productionInfo) ; // $insert debug if (d_debug__) s_out__ << " to N-terminal " << symbol__(d_token__) << stype__(", semantic = ", d_val__) << "\n" << dflush__; } // If d_token__ is _UNDETERMINED_ then if d_nextToken__ is _UNDETERMINED_ another // token is obtained from lex(). Then d_nextToken__ is assigned to d_token__. 
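// Illustration (a sketch, not generated output): d_token__ and d_nextToken__
// together form a one-token pushback buffer around lex().  During a
// reduction executeAction() first saves the pending lookahead with
// pushToken__(), reduce__() then overwrites d_token__ with the rule's LHS
// non-terminal, and once that non-terminal has been shifted popToken__()
// restores the saved terminal, so lex() is not called again.  E.g., for a
// rule reducing IDENTIFIER to `identifier':
//      d_token__ = IDENTIFIER               (obtained from lex())
//      executeAction(): pushToken__()    -> d_nextToken__ = IDENTIFIER
//      reduce__():      d_token__ = identifier   (the rule's LHS)
//      shift of identifier: popToken__() -> d_token__ = IDENTIFIER again,
//                                           so nextToken() returns at once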
void Parser::nextToken() { if (d_token__ != _UNDETERMINED_) // no need for a token: got one return; // already if (d_nextToken__ != _UNDETERMINED_) { popToken__(); // consume pending token // $insert debug if (d_debug__) s_out__ << "nextToken(): popped " << symbol__(d_token__) << stype__(", semantic = ", d_val__) << "\n" << dflush__; } else { ++d_acceptedTokens__; // accept another token (see // errorRecover()) d_token__ = lex(); if (d_token__ <= 0) d_token__ = _EOF_; } print(); // $insert debug if (d_debug__) s_out__ << "nextToken(): using " << symbol__(d_token__) << stype__(", semantic = ", d_val__) << "\n" << dflush__; } // if the final transition is negative, then we should reduce by the rule // given by its positive value. Note that the `recovery' parameter is only // used with the --debug option int Parser::lookup(bool recovery) { // $insert threading SR__ *sr = s_state[d_state__]; // get the appropriate state-table int lastIdx = sr->d_lastIdx; // sentinel-index in the SR__ array SR__ *lastElementPtr = sr + lastIdx; lastElementPtr->d_token = d_token__; // set search-token SR__ *elementPtr = sr + 1; // start the search at s_xx[1] while (elementPtr->d_token != d_token__) ++elementPtr; if (elementPtr == lastElementPtr) // reached the last element { if (elementPtr->d_action < 0) // default reduction { // $insert debug if (d_debug__) s_out__ << "lookup(" << d_state__ << ", " << symbol__(d_token__) ; // $insert debug if (d_debug__) s_out__ << "): default reduction by rule " << -elementPtr->d_action << "\n" << dflush__; return elementPtr->d_action; } // $insert debug if (d_debug__) s_out__ << "lookup(" << d_state__ << ", " << symbol__(d_token__) << "): Not " ; // $insert debug if (d_debug__) s_out__ << "found. " << (recovery ? "Continue" : "Start") << " error recovery." << "\n" << dflush__; // No default reduction, so token not found, so error. throw UNEXPECTED_TOKEN__; } // not at the last element: inspect the nature of the action // (< 0: reduce, 0: ACCEPT, > 0: shift) int action = elementPtr->d_action; // $insert debuglookup if (d_debug__) { s_out__ << "lookup(" << d_state__ << ", " << symbol__(d_token__); if (action < 0) // a reduction was found s_out__ << "): reduce by rule " << -action; else if (action == 0) s_out__ << "): ACCEPT"; else s_out__ << "): shift " << action << " (" << symbol__(d_token__) << " processed)"; s_out__ << "\n" << dflush__; } return action; } // When an error has occurred, pop elements off the stack until the top // state has an error-item. If none is found, the default recovery // mode (which is to abort) is activated. // // If EOF is encountered without being appropriate for the current state, // then the error recovery will fall back to the default recovery mode. // (i.e., parsing terminates) void Parser::errorRecovery() try { if (d_acceptedTokens__ >= d_requiredTokens__)// only generate an error- { // message if enough tokens ++d_nErrors__; // were accepted. Otherwise error("Syntax error"); // simply skip input } // $insert debug if (d_debug__) s_out__ << "errorRecovery(): " << d_nErrors__ << " error(s) so far. State = " << top__() << "\n" << dflush__; // get the error state while (not (s_state[top__()][0].d_type & ERR_ITEM)) { // $insert debug if (d_debug__) s_out__ << "errorRecovery(): pop state " << top__() << "\n" << dflush__; pop__(); } // $insert debug if (d_debug__) s_out__ << "errorRecovery(): state " << top__() << " is an ERROR state" << "\n" << dflush__; // In the error state, lookup a token allowing us to proceed. 
// Continuation may be possible following multiple reductions, // but eventuall a shift will be used, requiring the retrieval of // a terminal token. If a retrieved token doesn't match, the catch below // will ensure the next token is requested in the while(true) block // implemented below: int lastToken = d_token__; // give the unexpected token a // chance to be processed // again. pushToken__(_error_); // specify _error_ as next token push__(lookup(true)); // push the error state d_token__ = lastToken; // reactivate the unexpected // token (we're now in an // ERROR state). bool gotToken = true; // the next token is a terminal while (true) { try { if (s_state[d_state__]->d_type & REQ_TOKEN) { gotToken = d_token__ == _UNDETERMINED_; nextToken(); // obtain next token } int action = lookup(true); if (action > 0) // push a new state { push__(action); popToken__(); // $insert debug if (d_debug__) s_out__ << "errorRecovery() SHIFT state " << action ; // $insert debug if (d_debug__) s_out__ << ", continue with " << symbol__(d_token__) << "\n" << dflush__; if (gotToken) { // $insert debug if (d_debug__) s_out__ << "errorRecovery() COMPLETED: next state " ; // $insert debug if (d_debug__) s_out__ << action << ", no token yet" << "\n" << dflush__; d_acceptedTokens__ = 0; return; } } else if (action < 0) { // no actions executed on recovery but save an already // available token: if (d_token__ != _UNDETERMINED_) pushToken__(d_token__); // next token is the rule's LHS reduce__(s_productionInfo[-action]); // $insert debug if (d_debug__) s_out__ << "errorRecovery() REDUCE by rule " << -action ; // $insert debug if (d_debug__) s_out__ << ", token = " << symbol__(d_token__) << "\n" << dflush__; } else ABORT(); // abort when accepting during // error recovery } catch (...) { if (d_token__ == _EOF_) ABORT(); // saw inappropriate _EOF_ popToken__(); // failing token now skipped } } } catch (ErrorRecovery__) // This is: DEFAULT_RECOVERY_MODE { ABORT(); } // The parsing algorithm: // Initially, state 0 is pushed on the stack, and d_token__ as well as // d_nextToken__ are initialized to _UNDETERMINED_. // // Then, in an eternal loop: // // 1. If a state does not have REQ_TOKEN no token is assigned to // d_token__. If the state has REQ_TOKEN, nextToken() is called to // determine d_nextToken__ and d_token__ is set to // d_nextToken__. nextToken() will not call lex() unless d_nextToken__ is // _UNDETERMINED_. // // 2. lookup() is called: // d_token__ is stored in the final element's d_token field of the // state's SR_ array. // // 3. The current token is looked up in the state's SR_ array // // 4. Depending on the result of the lookup() function the next state is // shifted on the parser's stack, a reduction by some rule is applied, // or the parsing function returns ACCEPT(). When a reduction is // called for, any action that may have been defined for that // reduction is executed. // // 5. An error occurs if d_token__ is not found, and the state has no // default reduction. Error handling was described at the top of this // file. int Parser::parse() try { // $insert debug if (d_debug__) s_out__ << "parse(): Parsing starts" << "\n" << dflush__; push__(0); // initial state clearin(); // clear the tokens. 
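// Note: ACCEPT() and ABORT() end the parse by throwing; because parse() is
// written as a function try-block, such a throw may occur anywhere, even
// inside members called from action blocks.  E.g., the action handling a
// second `%%' token (case 14 in executeAction() above) calls ACCEPT(): the
// thrown value propagates out of executeAction() and out of the loop below,
// is eventually caught by the `catch (Return__ retValue)' clause at the end
// of parse(), and is returned as parse()'s result.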
while (true) { // $insert debug if (d_debug__) s_out__ << "==" << "\n" << dflush__; try { if (s_state[d_state__]->d_type & REQ_TOKEN) nextToken(); // obtain next token int action = lookup(false); // lookup d_token__ in d_state__ if (action > 0) // SHIFT: push a new state { push__(action); popToken__(); // token processed } else if (action < 0) // REDUCE: execute and pop. { executeAction(-action); // next token is the rule's LHS reduce__(s_productionInfo[-action]); } else ACCEPT(); } catch (ErrorRecovery__) { errorRecovery(); } } } catch (Return__ retValue) { // $insert debug if (d_debug__) s_out__ << "parse(): returns " << retValue << "\n" << dflush__; return retValue; } bisonc++-4.13.01/parser/handleproductionelement.cc0000644000175000017500000000240612633316117020773 0ustar frankfrank#include "parser.ih" void Parser::handleProductionElement(STYPE__ &last) { // maybe also when currentRule == 0 ? See addProduction if (!d_rules.hasRules()) // may happen if the first rule could not be return; // defined because of token/rulename clash if (!last) // the last PTag was a %prec specification { checkFirstType(); return; } switch (last->tag()) { case Tag__::TERMINAL: d_rules.addElement(last->get()); checkFirstType(); break; case Tag__::SYMBOL: d_rules.addElement(last->get()); checkFirstType(); break; case Tag__::BLOCK: installAction(last->get()); break; default: // can't occur, but used to keep the break; // compiler from generating a warning } if ( d_rules.lastProduction().action().empty() and d_arg.option('N') and d_rules.sType() != "" ) wmsg << "rule `" << &d_rules.lastProduction() << "' lacks an action block assigning a(n) " << d_rules.sType() << " value to $$" << endl; } bisonc++-4.13.01/parser/setuniondecl.cc0000644000175000017500000000067412633316117016560 0ustar frankfrank#include "parser.ih" void Parser::setUnionDecl() { d_options.setUnionDecl(d_scanner.block().str()); d_semType = UNION; // if a %union is used, then the rules // MUST have an associated return type if } // a plain $$ is used. Also, a union must // be available if a $ construction // is used. bisonc++-4.13.01/parser/setpolymorphicdecl.cc0000644000175000017500000000026612633316117017772 0ustar frankfrank#include "parser.ih" void Parser::setPolymorphicDecl() { d_expect = "Polymorphic base class specifications"; d_options.setPolymorphicDecl(); d_semType = POLYMORPHIC; } bisonc++-4.13.01/parser/definetokenname.cc0000644000175000017500000000055012633316117017211 0ustar frankfrank#include "parser.ih" void Parser::defineTokenName(string const &name, bool hasValue) { defineTerminal(name, Symbol::SYMBOLIC_TERMINAL); if (hasValue) { wmsg << "deprecated use of explicit value: `" << name << ' ' << d_scanner.number() << '\'' << endl; d_rules.setLastTerminalValue(d_scanner.number()); } } bisonc++-4.13.01/parser/frame0000644000175000017500000000005312633316117014561 0ustar frankfrank#include "parser.ih" Parser::() const { } bisonc++-4.13.01/parser/nestedblock.cc0000644000175000017500000000163312633316117016355 0ustar frankfrank#include "parser.ih" // Add a hidden rule consisting of one action block. void Parser::nestedBlock(Block &block) { string name = nextHiddenName(); // Since the inner block is a block, simply assume that its return value // matches the type of the rule in which it is nested. NonTerminal *np = NonTerminal::downcast( defineNonTerminal(name, d_rules.sType()) ); d_rules.addElement(np); // add the block as a hidden rule // process the block as a nested block // preceded by nElements() -1 // production elements. 
substituteBlock(-d_rules.nElements(), block); d_rules.setHiddenAction(block); // define the action in the hidden // terminal's production rule } bisonc++-4.13.01/parser/openrule.cc0000644000175000017500000000152612633316117015712 0ustar frankfrank#include "parser.ih" // I've seen the begin of a rule. If not yet defined, do so // now, and prepare for productions. void Parser::openRule(string const &ruleName) { NonTerminal *nt = requireNonTerminal(ruleName); // rule must start with N if (nt) { // quit if not if ( not d_rules.newRule(nt, d_scanner.filename(), d_scanner.lineNr()) ) { Rules::FileInfo const &fileInfo = d_rules.fileInfo(nt); wmsg << "Extending rule `" << ruleName << "', first defined in `" << fileInfo.first << "' (" << fileInfo.second << ")" << endl; } d_rules.addProduction(d_scanner.lineNr()); } } bisonc++-4.13.01/parser/negativeindex.cc0000644000175000017500000000125412633316117016711 0ustar frankfrank#include "parser.ih" void Parser::negativeIndex(AtDollar const &atd) const { if (not atd.id().empty()) { emsg << "rule " << &d_rules.lastProduction() << ":\n" "\t\t<" << atd.id() << "> cannot be used for negative $-indices (" << atd.text() << ')' << endl; return; } if ( d_negativeDollarIndices || d_semType == SINGLE || d_rules.sType().empty() ) return; wmsg.setLineNr(atd.lineNr()); wmsg << "rule " << &d_rules.lastProduction() << ":\n" "\t\traw STYPE__ is used for negative $-indices (" << atd.text() << ')' << endl; } bisonc++-4.13.01/parser/installaction.cc0000644000175000017500000000073312633316117016724 0ustar frankfrank#include "parser.ih" void Parser::installAction(Block &block) { // process the block for the last // production rule, having a given number // of elements. Returns false if no // explicit return was defined if (!substituteBlock(d_rules.nElements(), block)) checkFirstType(); d_rules.setAction(block); } bisonc++-4.13.01/parser/indextooffset.cc0000644000175000017500000000477212633316117016750 0ustar frankfrank#include "parser.ih" // When a block is read, process its $ and @ symbols. If it's a nested // block, it inherits the outer rule's stype, but it can only sensibly // have negative stack indices up to the number of elements of the current // rule. The stack starts with 0 elements, and its topmost element is // having the highest index value. Then, a stack element related to a rule // element `i' is reached using index tos - n + i, where `tos' is the // number of elements on the stack, `n' the number of elements in the // production rule, and `i' the $-index ($1, $2, $3, etc.). Assuming tos // is the pointer to the current stacktop, and assuming that a rule has // `n' elements so far, indicated as $1, $2, ... $n, followed by the // nested block, the stack, when the nested block's action is called, has // the following contents (behind the $i's the proper index relative to // the nested block number of elements are indicated): // [higher indices in the stack] // // tos -> $n [tos - n + n] = [tos + 1 - 1] // ... // $2 // $1 [tos - n + 1] = [tos + 1 - n] // // [lower indices in the stack] // So, if the action called at this moment is a nested block's action, // then its `n' is 0, so element $n (its grammar rule's predecessor) is // reached as element 0: [tos - 0 - 0], element $n-1 is reached as [tos - // 0 - 1] etc, etc.. However, in order to make the counting more // intuitively, $0 is not used, but instead $-element are, like their // positive counterparts, counted as pure negative values. 
So, with hidden // numbers, the block-processor will add 1 to negative indices. // With mid-rule actions nElements is negative, counting the action block // as an element. A construction like // rule: // TOK1 TOK2 // { // cout << $1 << ' ' << $2; // } // TOK3 // // is translated using a hidden block into: // // #0001 // { // cout << $-1 << ' ' << $0; // } // TOK3 // // rule: // TOK1 TOK2 // #0001 // TOK3 // // Therefore, $1 becomes $-1, $2 becomes $0. The index can be computed // as idx + nElements + 1 int Parser::indexToOffset(int idx, int nElements) const { if (idx < 0 || nElements < 0) ++idx; return nElements >= 0 ? idx - nElements : idx + nElements; } bisonc++-4.13.01/parser/checkfirsttype.cc0000644000175000017500000000157512633316117017114 0ustar frankfrank#include "parser.ih" // the production has elements, but no action. // Now check if the production's FIRST element's stype is equal // to the rule's stype or whether the rule's stype is empty (default) void Parser::checkFirstType() { string const &stype = d_rules.sType(); string const *firstStype = &d_rules.sType(1); if (d_semType == POLYMORPHIC) { if (stype == s_stype__ && firstStype->empty()) wmsg << "rule `" << &d_rules.lastProduction() << ":\n" "\t\texplicitly tagged or `STYPE__' semantic value " "expected" << endl; } else if (stype.length() && stype != *firstStype) wmsg << "rule `" << d_rules.name() << "' type clash (`" << stype << "' `" << *firstStype << "') on default action" << endl; } bisonc++-4.13.01/parser/useterminal.cc0000644000175000017500000000107112633316117016404 0ustar frankfrank#include "parser.ih" Terminal *Parser::useTerminal() { string const &name = d_scanner.canonicalQuote(); if (Symbol *sp = d_symtab.lookup(name)) { if (sp->isTerminal()) return Terminal::downcast(sp); multiplyDefined(sp); return 0; } Terminal *tp = new Terminal(name, Symbol::CHAR_TERMINAL, d_scanner.number()); d_symtab.insert ( Symtab::value_type ( name, d_rules.insert(tp, d_matched) ) ); return tp; } bisonc++-4.13.01/parser/requirenonterminal.cc0000644000175000017500000000231612633316117020002 0ustar frankfrank#include "parser.ih" NonTerminal *Parser::requireNonTerminal(string const &name) { string stype; Symbol *sp = d_symtab.lookup(name); if (sp) { if (sp->isNonTerminal()) // Only ok if already defined return NonTerminal::downcast(sp); // as non-terminal if (sp->isUndetermined()) // assumed terminal earlier, { // turns out to be nonterm. stype = sp->sType(); d_symtab.erase(d_symtab.find(name)); // now all occurrences of the terminal in existing production rules // must be changed into nonterminals. Also, sp must be removed from rule's // terminal vector d_terminal } else { multiplyDefined(sp); return 0; } } NonTerminal *np = new NonTerminal(name, stype); // If not yet defined, define // it now as a non-terminal d_symtab.insert ( Symtab::value_type ( name, d_rules.insert(np) ) ); if (sp) d_rules.termToNonterm(sp, np); return np; } bisonc++-4.13.01/parser/substituteblock.cc0000644000175000017500000000311012633316117017276 0ustar frankfrank#include "parser.ih" // Stack_offset is the number of values in the current alternative so far, so // it is d_elements.size(). It indicates where to find $0 with respect to the // top of the (?) stack. // If nElements is negative then this is a mid-action block, automatically // resulting in negative dollar indices bool Parser::substituteBlock(int nElements, Block &block) { // Look for special characters. 
Do this from the end of the // block-text to the beginning, to keep the positions of earlier // special characters unaltered. bool explicitReturn = false; // block return type as yet // unknown for (auto &atd: ranger(block.rbeginAtDollar(), block.rendAtDollar())) { switch (atd.type()) { case AtDollar::DOLLAR: explicitReturn |= handleDollar(block, atd, nElements); break; case AtDollar::AT: handleAtSign(block, atd, nElements); break; } } // save the default $1 value // at the beginning of a mid-rule if (nElements < 0) // action block saveDollar1(block, indexToOffset(1, nElements)); if (not explicitReturn and d_arg.option('N') and d_rules.sType() != "") wmsg << "rule " << &d_rules.lastProduction() << ": action block does not assign a(n) " << d_rules.sType() << " value to $$." << endl; return explicitReturn; } bisonc++-4.13.01/parser/errindextoolarge.cc0000644000175000017500000000072312633316117017434 0ustar frankfrank#include "parser.ih" bool Parser::errIndexTooLarge(AtDollar const &atd, int elements) const { if (atd.returnValue()) return false; int nElements = nComponents(elements); if (atd.nr() <= nElements) return false; emsg << "rule " << &d_rules.lastProduction() << ":\n" "\t\t" << atd.text() << ": index exceeds # components before " "the action block (" << nElements << ")." << endl; return true; } bisonc++-4.13.01/plainwarnings.cc0000644000175000017500000000023012633316117015430 0ustar frankfrank#include namespace Global { void plainWarnings() { FBB::wmsg.setTag("Warning"); FBB::wmsg.noLineNr(); } } bisonc++-4.13.01/production/0000755000175000017500000000000012633316117014440 5ustar frankfrankbisonc++-4.13.01/production/production.h0000644000175000017500000001315312633316117017002 0ustar frankfrank#ifndef _INCLUDED_PRODUCTION_ #define _INCLUDED_PRODUCTION_ #include #include #include #include #include "../block/block.h" #include "../symbol/symbol.h" #include "../terminal/terminal.h" // NOTE: To obtain all productions of a certain Non-Terminal, use // NonTerminal's `productions()' member // A Production is a vector of symbols. Its elements specify the RHS elements // of the producion LHS -> RHS. Several of the vector's public members are // available, see the public: section. class Production: private std::vector { typedef std::vector Inherit; friend std::ostream &operator<<(std::ostream &out, Production const *production); Terminal const *d_precedence; // 0 or a pointer to some terminal // defining this production's // precedence (through %prec) Block d_action; // action associated with this // production. Symbol const *d_nonTerminal; // pointer to the lhs nonterminal of // this production size_t d_nr; // production order number // over all productions, // starts at 1 mutable bool d_used; // true once this production has been // used. size_t d_lineNr; // line in the grammar file where the // production is defined. size_t d_nameIdx; // index in s_filename of the name of // the file in which this production // is defined. static size_t s_nr; // incremented at each new production static bool s_unused; // prevents multiple 'unused // production rules' warnings static Production const *s_startProduction; // a pointer to the start // production rule. 
static std::vector s_fileName; public: using Inherit::size; using Inherit::begin; using Inherit::end; using Inherit::rbegin; using Inherit::rend; using Inherit::push_back; typedef std::vector Vector; typedef std::vector ConstVector; typedef ConstVector::const_iterator ConstIter; Production(Symbol const *nonTerminal, size_t lineNr); Block const &action() const; Symbol const &operator[](size_t idx) const; Symbol const *rhs(size_t idx) const; // idx-ed symbol in the rhs Symbol const *lhs() const; Terminal const *precedence() const; bool hasAction() const; bool isEmpty() const; size_t nr() const; void used() const; // d_used is mutable void setAction(Block const &block); void setPrecedence(Terminal const *terminal); std::string const &fileName() const; size_t lineNr() const; static Production const *start(); static void insertAction(Production const *prod, std::ostream &out, bool lineDirectives, size_t indent); static void setStart(Production const *production); static void termToNonterm(Production *pPtr, Symbol *terminal, Symbol *nonTerminal); static void unused(Production const *production); static bool notUsed(); static bool isTerminal(); static void storeFilename(std::string const &filename); void setLineNr(size_t lineNr); private: Symbol *vectorIdx(size_t idx) const; std::ostream &standard(std::ostream &out) const; }; inline void Production::setLineNr(size_t lineNr) { d_lineNr = lineNr; } inline std::string const &Production::fileName() const { return s_fileName[d_nameIdx]; } inline size_t Production::lineNr() const { return d_lineNr; } inline bool Production::notUsed() { return s_unused; } inline Symbol const *Production::rhs(size_t idx) const { return vectorIdx(idx); } inline Symbol const *Production::lhs() const { return d_nonTerminal; } inline size_t Production::nr() const { return d_nr; } inline void Production::used() const { d_used = true; } inline Block const &Production::action() const { return d_action; } inline bool Production::hasAction() const { return !d_action.empty(); } inline bool Production::isEmpty() const { return empty() && d_action.empty(); } inline Terminal const *Production::precedence() const { return d_precedence; } inline Symbol const &Production::operator[](size_t idx) const { return *vectorIdx(idx); } inline void Production::setAction(Block const &block) { d_action = block; } inline void Production::setStart(Production const *production) { s_startProduction = production; } inline Production const *Production::start() { return s_startProduction; } inline std::ostream &operator<<(std::ostream &out, Production const *prod) { return prod->standard(out); } inline bool isTerminal(Symbol const *symbol) { return symbol->isTerminal(); } inline void Production::termToNonterm(Production *pPtr, Symbol *terminal, Symbol *nonTerminal) { std::replace(pPtr->begin(), pPtr->end(), terminal, nonTerminal); } #endif bisonc++-4.13.01/production/unused.cc0000644000175000017500000000054112633316117016252 0ustar frankfrank#include "production.ih" void Production::unused(Production const *production) { if (!production->d_used) { if (!s_unused) { Global::plainWarnings(); wmsg << "Unused production rule(s):" << endl; s_unused = true; } wmsg << " " << production << endl; } } bisonc++-4.13.01/production/setprecedence.cc0000644000175000017500000000016512633316117017562 0ustar frankfrank#include "production.ih" void Production::setPrecedence(Terminal const *terminal) { d_precedence = terminal; } bisonc++-4.13.01/production/storeFilename.cc0000644000175000017500000000035312633316117017545 
0ustar frankfrank#include "production.ih" void Production::storeFilename(string const &filename) { if ( find(s_fileName.begin(), s_fileName.end(), filename) == s_fileName.end() ) s_fileName.push_back(filename); } bisonc++-4.13.01/production/data.cc0000644000175000017500000000025012633316117015655 0ustar frankfrank#include "production.ih" size_t Production::s_nr; bool Production::s_unused; Production const *Production::s_startProduction; vector Production::s_fileName; bisonc++-4.13.01/production/insertaction.cc0000644000175000017500000000143712633316117017456 0ustar frankfrank#include "production.ih" void Production::insertAction(Production const *prod, std::ostream &out, bool lineDirectives, size_t indent) { if (! prod->hasAction()) return; out << setw(indent) << "" << "case " << prod->nr() << ":\n"; size_t begin = 0; Block const &block = prod->action(); if (lineDirectives) out << "#line " << block.line() << " \"" << block.source() << "\"\n"; while (true) { size_t end = block.find_first_of('\n', begin); out << setw(indent) << "" << block.substr(begin, end - begin) << "\n"; if (end == string::npos) break; begin = end + 1; } out << setw(indent) << "" << "break;\n" "\n"; } bisonc++-4.13.01/production/standard.cc0000644000175000017500000000062012633316117016545 0ustar frankfrank#include "production.ih" std::ostream &Production::standard(std::ostream &out) const { out << d_nr << ": " << lhs(); if (d_precedence != 0) out << " (" << d_precedence << ')'; out << " -> "; if (size() == 0) out << " "; else { for (const_iterator sym = begin(); sym != end(); ++sym) out << ' ' << *sym; } return out; } bisonc++-4.13.01/production/frame0000644000175000017500000000006312633316117015454 0ustar frankfrank#include "production.ih" Production::() const { } bisonc++-4.13.01/production/production1.cc0000644000175000017500000000036512633316117017222 0ustar frankfrank#include "production.ih" Production::Production(Symbol const *nonTerminal, size_t lineNr) : d_precedence(0), d_nonTerminal(nonTerminal), d_nr(++s_nr), d_used(false), d_lineNr(lineNr), d_nameIdx(s_fileName.size() - 1) {} bisonc++-4.13.01/production/vectoridx.cc0000644000175000017500000000022512633316117016755 0ustar frankfrank#include "production.ih" Symbol *Production::vectorIdx(size_t idx) const { return idx >= size() ? 0 : std::vector::operator[](idx); } bisonc++-4.13.01/production/production.ih0000644000175000017500000000024212633316117017146 0ustar frankfrank#include "production.h" #include #include namespace Global { void plainWarnings(); } using namespace std; using namespace FBB; bisonc++-4.13.01/rebis0000755000175000017500000000235712633316117013313 0ustar frankfrank#!/bin/bash # See also: http://www.alchemylab.com/dictionary.htm#sectH: # The Hermaphrodite represents Sulfur and Mercury after their # Conjunction. Rebis (something double in characteristics) is another # designation for this point in the alchemy of transformation. prompt() { echo "$*" read A [ "$A" == "y" ] && return 1 return 0 } echo Press enter to execute commands ending in ... echo prompt " 0. Cleaning up ./self ..." mkdir -p self | exit 1 rm -rf self/* prompt " 1. Copying the grammar to self..." cp -r parser/grammar parser/inc self prompt " 2. Run (in ./self) ../tmp/bin/binary on the grammar" cd self && ../tmp/bin/binary -N -S ../skeletons grammar cd .. prompt " 3. Copy the generated parserbase.h and parse.cc to ./parser" cp ./self/parse.cc ./self/parserbase.h ./parser prompt " 4. 
Rebuild: rebuilding the program ('build program')" touch ./scanner/a build program || exit 1 echo prompt " 5. Again: run '../tmp/bin/binary' on grammar" cd self && ../tmp/bin/binary -N -S ../skeletons grammar cd .. echo " 6. Diffs should show differences in timestamps only:" for file in parserbase.h parse.cc do prompt " RUN: 'diff self/$file parser' ..." diff self/$file parser echo done bisonc++-4.13.01/required0000644000175000017500000000072712633316117014023 0ustar frankfrankThis file lists non-standard software only. Thus, standard utilities like cp, mv, sed, etc, etc, are not explicitly mentioned. Neither is the g++ compiler explicitly mentioned, but a fairly recent one is assumed. Required software for building Bisonc++ 4.13.00 ----------------------------------------------- libbobcat-dev (>= 4.01.03), To use the provided build-script: icmake (>= 8.00.04) To construct the manual and map-page: yodl (>= 3.06.0) bisonc++-4.13.01/rmreduction/0000755000175000017500000000000012633316117014605 5ustar frankfrankbisonc++-4.13.01/rmreduction/rmreduction.ih0000644000175000017500000000006012633316117017456 0ustar frankfrank#include "rmreduction.h" using namespace std; bisonc++-4.13.01/rmreduction/rmreduction.h0000644000175000017500000000250612633316117017314 0ustar frankfrank#ifndef _INCLUDED_RMREDUCTION_ #define _INCLUDED_RMREDUCTION_ #include #include class Symbol; class RmReduction { public: typedef std::vector Vector; typedef Vector::const_iterator ConstIter; private: size_t d_idx; // idx in a StateItem::Vector of // reduce-production size_t d_next; // next state when shifting Symbol const *d_symbol; // symbol causing the S/R conflict bool d_forced; // forced if not based on precedence or // associativity public: RmReduction() = default; // Only needed for vectors RmReduction(size_t idx, size_t next, Symbol const *symbol, bool forced); size_t idx() const; size_t next() const; Symbol const *symbol() const; static bool isForced(RmReduction const &rmReduction); }; inline size_t RmReduction::idx() const { return d_idx; } inline Symbol const *RmReduction::symbol() const { return d_symbol; } inline size_t RmReduction::next() const { return d_next; } inline bool RmReduction::isForced(RmReduction const &rmReduction) { return rmReduction.d_forced; } #endif bisonc++-4.13.01/rmreduction/frame0000644000175000017500000000005512633316117015622 0ustar frankfrank#include "rmreduction.ih" RmReduction:: { } bisonc++-4.13.01/rmreduction/rmreduction1.cc0000644000175000017500000000034212633316117017527 0ustar frankfrank#include "rmreduction.ih" RmReduction::RmReduction(size_t idx, size_t next, Symbol const *symbol, bool forced) : d_idx(idx), d_next(next), d_symbol(symbol), d_forced(forced) {} bisonc++-4.13.01/rmshift/0000755000175000017500000000000012633316117013726 5ustar frankfrankbisonc++-4.13.01/rmshift/rmshift.ih0000644000175000017500000000005412633316117015723 0ustar frankfrank#include "rmshift.h" using namespace std; bisonc++-4.13.01/rmshift/rmshift.h0000644000175000017500000000136412633316117015557 0ustar frankfrank#ifndef INCLUDED_RMSHIFT_ #define INCLUDED_RMSHIFT_ #include #include class RmShift { public: typedef std::vector Vector; typedef Vector::const_iterator ConstIter; private: size_t d_idx; // idx of the Next vector to remove. 
bool d_forced; // forced if not based on precedence or // associativity public: RmShift() = default; // only needed for vectors RmShift(size_t idx, bool forced); size_t idx() const; bool forced() const; }; inline size_t RmShift::idx() const { return d_idx; } inline bool RmShift::forced() const { return d_forced; } #endif bisonc++-4.13.01/rmshift/rmshift1.cc0000644000175000017500000000015312633316117015771 0ustar frankfrank#include "rmshift.ih" RmShift::RmShift(size_t idx, bool forced) : d_idx(idx), d_forced(forced) {} bisonc++-4.13.01/rmshift/frame0000644000175000017500000000004512633316117014742 0ustar frankfrank#include "rmshift.ih" RmShift:: { } bisonc++-4.13.01/rrconflict/0000755000175000017500000000000012633316117014417 5ustar frankfrankbisonc++-4.13.01/rrconflict/rrconflict1.cc0000644000175000017500000000031412633316117017152 0ustar frankfrank#include "rrconflict.ih" RRConflict::RRConflict(StateItem::Vector const &stateItem, vector const &reducible) : d_itemVector(stateItem), d_reducible(reducible) { } bisonc++-4.13.01/rrconflict/insert.cc0000644000175000017500000000134112633316117016231 0ustar frankfrank#include "rrconflict.ih" ostream &RRConflict::insert(ostream &out) const { RRData::ConstIter iter = d_rmReduction.begin(); RRData::ConstIter end = d_rmReduction.end(); while ((iter = find_if(iter, end, RRData::isForced)) != end) { StateItem const &reduced = d_itemVector[iter->reduceIdx()]; out << "Solved RR CONFLICT for rules " << d_itemVector[iter->keepIdx()].nr() << " and " << reduced.nr() << ":\n" "\tremoved " << iter->lookaheadSet() << " from the LA set of " << (reduced.lookaheadSetSize() == 0 ? "(removed) " : "") << "rule " << reduced.nr() << '\n'; ++iter; } return out; } bisonc++-4.13.01/rrconflict/rrconflict.ih0000644000175000017500000000022612633316117017106 0ustar frankfrank#include "rrconflict.h" #include #include #include "../rules/rules.h" using namespace std; using namespace FBB; bisonc++-4.13.01/rrconflict/visitreduction.cc0000644000175000017500000000054512633316117020005 0ustar frankfrank#include "rrconflict.ih" void RRConflict::visitReduction(size_t first) { d_firstIdx = d_reducible[first]; d_firstLA = &d_itemVector[d_firstIdx].lookaheadSet(); // this MUST be second < d_reducible.size() !! for (size_t second = first + 1; second < d_reducible.size(); ++second) compareReductions(second); } bisonc++-4.13.01/rrconflict/comparereductions.cc0000644000175000017500000000223712633316117020460 0ustar frankfrank#include "rrconflict.ih" void RRConflict::compareReductions(size_t second) { second = d_reducible[second]; RRData rrData( d_firstLA->intersection( d_itemVector[second].lookaheadSet() ) ); if (rrData.empty()) // no overlap return; StateItem const &firstItem = d_itemVector[d_firstIdx]; StateItem const &secondItem = d_itemVector[second]; switch ( Terminal::comparePrecedence( firstItem.precedence(), secondItem.precedence() ) ) { case Terminal::EQUAL: rrData.setIdx( firstItem.nr() < secondItem.nr() ? RRData::KEEP_FIRST : RRData::KEEP_SECOND, d_firstIdx, second ); s_nConflicts += rrData.size(); break; case Terminal::SMALLER: // first precedence < second prec. rrData.setIdx(d_firstIdx); break; case Terminal::LARGER: // shift precedence > prod. prec. 
rrData.setIdx(second); break; } d_rmReduction.push_back(rrData); } bisonc++-4.13.01/rrconflict/showconflicts.cc0000644000175000017500000000211312633316117017610 0ustar frankfrank#include "rrconflict.ih" void RRConflict::showConflicts(Rules const &rules) const { RRData::ConstIter iter = d_rmReduction.begin(); RRData::ConstIter end = d_rmReduction.end(); unordered_map> conflict; while ((iter = find_if(iter, end, RRData::isForced)) != end) { conflict[d_itemVector[iter->keepIdx()].nr()].push_back( d_itemVector[iter->reduceIdx()].nr() ); ++iter; } if (conflict.empty()) return; for (auto &rule: conflict) { auto prodPtr = rules.productions()[rule.first - 1]; wmsg << " keeping rule " << rule.first << " (" << prodPtr->fileName() << ", line " << prodPtr->lineNr() << "), dropping\n"; for (size_t idx: rule.second) { prodPtr = rules.productions()[idx - 1]; wmsg << " rule " << idx << " (" << prodPtr->fileName() << ", line " << prodPtr->lineNr() << ")\n"; } } } bisonc++-4.13.01/rrconflict/data.cc0000644000175000017500000000007312633316117015637 0ustar frankfrank#include "rrconflict.ih" size_t RRConflict::s_nConflicts; bisonc++-4.13.01/rrconflict/inspect.cc0000644000175000017500000000023212633316117016370 0ustar frankfrank#include "rrconflict.ih" void RRConflict::inspect() { for (unsigned first = 0; first < d_reducible.size(); ++first) visitReduction(first); } bisonc++-4.13.01/rrconflict/rrconflict.h0000644000175000017500000000332012633316117016733 0ustar frankfrank#ifndef _INCLUDED_RRCONFLICT_ #define _INCLUDED_RRCONFLICT_ #include #include "../stateitem/stateitem.h" #include "../lookaheadset/lookaheadset.h" #include "../rrdata/rrdata.h" class Rules; class RRConflict { friend std::ostream &operator<<(std::ostream &out, RRConflict const &conflict); StateItem::Vector const &d_itemVector; // items involved in the RR // conflict std::vector const &d_reducible; // the numbers of rules that // can be reduced size_t d_firstIdx; // index of the first // reducible rule LookaheadSet const *d_firstLA; // pointer to the LA set of // the first reducible rule RRData::Vector d_rmReduction; // RRData of rules to remove static size_t s_nConflicts; // number of RR conflicts public: RRConflict(StateItem::Vector const &stateItem, std::vector const &reducible); void inspect(); void removeConflicts(StateItem::Vector &itemVector); static size_t nConflicts(); void showConflicts(Rules const &rules) const; private: std::ostream &insert(std::ostream &out) const; void visitReduction(size_t first); void compareReductions(size_t second); }; inline size_t RRConflict::nConflicts() { return s_nConflicts; } inline std::ostream &operator<<(std::ostream &out, RRConflict const &conflict) { return conflict.insert(out); } #endif bisonc++-4.13.01/rrconflict/frame0000644000175000017500000000005312633316117015432 0ustar frankfrank#include "rrconflict.ih" RRConflict:: { } bisonc++-4.13.01/rrconflict/removeconflicts.cc0000644000175000017500000000026412633316117020132 0ustar frankfrank#include "rrconflict.ih" void RRConflict::removeConflicts(StateItem::Vector &itemVector) { for (auto &rm: d_rmReduction) StateItem::removeRRConflict(rm, itemVector); } bisonc++-4.13.01/rrdata/0000755000175000017500000000000012633316117013527 5ustar frankfrankbisonc++-4.13.01/rrdata/setidx.cc0000644000175000017500000000041412633316117015335 0ustar frankfrank#include "rrdata.ih" void RRData::setIdx(Keep keep, size_t first, size_t second) { if (keep == KEEP_FIRST) { d_idx = second; d_kept = first; } else { d_idx = first; d_kept = second; } d_forced = true; } 
bisonc++-4.13.01/rrdata/rrdata1.cc0000644000175000017500000000016712633316117015400 0ustar frankfrank#include "rrdata.ih" RRData::RRData(LookaheadSet laSet) : d_laSet(laSet), d_forced(false), d_kept(~0) {} bisonc++-4.13.01/rrdata/rrdata.ih0000644000175000017500000000005212633316117015323 0ustar frankfrank#include "rrdata.h" using namespace std; bisonc++-4.13.01/rrdata/rrdata.h0000644000175000017500000000312512633316117015156 0ustar frankfrank#ifndef _INCLUDED_RRDATA_ #define _INCLUDED_RRDATA_ #include #include "../lookaheadset/lookaheadset.h" // Data used when processing R/R conflicts. Used by, e.g., StateItem class RRData { public: typedef std::vector Vector; typedef Vector::const_iterator ConstIter; enum Keep { KEEP_FIRST, KEEP_SECOND, }; private: LookaheadSet d_laSet; // set of LA symbols bool d_forced; // true if one of the two rules is explicitly // kept. size_t d_idx; // index of item with reduced LA set size_t d_kept; // index of item with kept LA set public: RRData(LookaheadSet first); bool empty() const; size_t keepIdx() const; size_t reduceIdx() const; size_t size() const; LookaheadSet const &lookaheadSet() const; void setIdx(Keep keep, size_t first, size_t second); void setIdx(size_t reduce); // non-forced static bool isForced(RRData const &rrData); }; inline LookaheadSet const &RRData::lookaheadSet() const { return d_laSet; } inline bool RRData::empty() const { return d_laSet.empty(); } inline size_t RRData::size() const { return d_laSet.fullSize(); } inline void RRData::setIdx(size_t reduce) { d_idx = reduce; } inline size_t RRData::reduceIdx() const { return d_idx; } inline size_t RRData::keepIdx() const { return d_kept; } inline bool RRData::isForced(RRData const &rrData) { return rrData.d_forced; } #endif bisonc++-4.13.01/rules/0000755000175000017500000000000012633316117013404 5ustar frankfrankbisonc++-4.13.01/rules/showterminals.cc0000644000175000017500000000053412633316117016614 0ustar frankfrank#include "rules.ih" void Rules::showTerminals() const { if (!imsg.isActive()) return; imsg << "\n" "Symbolic Terminal tokens:\n"; Terminal::inserter(&Terminal::valueQuotedName); copy(d_terminal.begin(), d_terminal.end(), ostream_iterator(imsg, "\n")); imsg << endl; } bisonc++-4.13.01/rules/rules.h0000644000175000017500000002416012633316117014712 0ustar frankfrank#ifndef _INCLUDED_RULES_ #define _INCLUDED_RULES_ #include #include #include #include #include #include #include "../block/block.h" #include "../terminal/terminal.h" #include "../nonterminal/nonterminal.h" #include "../production/production.h" #include "../symbol/symbol.h" // A Rule contains the information about a rule. // Its number, and a vector of Alternatives. an Alternative is a vector of // iterators into the symbol table class Rules { public: // For each rule, maintain a record of file/line combinations // indicating where the rule was first seen. This allows me to // generate warnings when a rule is defined as full rule // definitions rather than as alternatives. There's nothing // inherently wrong with that, but it may also be a typo causing // unexpected conflicts typedef std::pair FileInfo; private: typedef std::unordered_map NFileInfoMap; Terminal::Vector d_terminal; // the vector holding information // about defined terminal symbols NonTerminal::Vector d_nonTerminal; // the vector holding information // about defined nonterminals NFileInfoMap d_location; // the map holding information // about initial rule locations NonTerminal *d_currentRule; // Pointer to the N currently // processed. 
Initially 0 std::string d_startRule; // name of the startrule Production::Vector d_production; // the vector holding information // about all productions // productions hold Symbol // elements, they contain // information about type and // index of their elements in the // (non)terminal vectors Production *d_currentProduction; // currently processed production // rule (pointer to a Production // also in d_production) static size_t s_acceptProductionNr; // index in d_production of the // accept rule static size_t s_nExpectedConflicts; // how many conflicts to expect? static Terminal s_errorTerminal; // predefined 'error' terminal static Terminal s_eofTerminal; // predefined eof terminal static Symbol *s_startSymbol; // derived from the initial N or // from the N defined as the // star tsymbol by // augmentGrammar(). static size_t s_lastLineNr; // last received line nr, used by // setHiddenAction() public: Rules(); static void setExpectedConflicts(size_t value); static Terminal const *eofTerminal(); static Terminal const *errorTerminal(); static size_t acceptProductionNr(); static size_t expectedConflicts(); void clearLocations(); // clear d_location Terminal *insert(Terminal *terminal, std::string const &literal); NonTerminal *insert(NonTerminal *nonTerminal); void addElement(Symbol *symbol); // add the symbol as the next element of the // rule-production that's currently being defined. void addProduction(size_t lineNr); // add a new production to the set of productions of the // rule currently being defined bool newRule(NonTerminal *nonTerminal, std::string const &source, size_t lineNr); // add a new rule. If startrule has not // yet been set, define this rule as the startrule. // return true if a really new rule was added, rather than // expanding a rule defined earlier. void assignNonTerminalNumbers(); void augmentGrammar(Symbol *start); Production const &lastProduction() const; FileInfo const &fileInfo(NonTerminal const *nt) const; // return the FileInfo of the // first definition of rule `nt' std::string const &name() const; // return the name of the // currently defined rule std::string const &sType() const; // return the value type // associated with the // currently defined rule. std::string const &sType(size_t idx) const; // return the value type // associated with element idx of // the currently defined production Symbol const *symbol(size_t idx) const; // return the symbol // associated with element idx of // the currently defined production // Note: symbol idx MUST exist size_t nProductions() const; size_t nElements() const; void determineFirst(); bool hasRules() const; // associate an action with the currently defined rule // production void setAction(Block const &block); void setHiddenAction(Block const &block); void setLastTerminalValue(size_t value); void setLastPrecedence(size_t value); void updatePrecedences(); // try to assign a precedence to // productions that don't yet have a // precedence associated to them void setPrecedence(Terminal const *terminal); // Set the explicit precedence of the currently defined // production to the precedence of the given terminal. 
void showFirst() const; // show the First-sets void showTerminals() const; // show symbolic terminals and their // values void showRules() const; // show the rule/productions void showUnusedTerminals() const; void showUnusedNonTerminals() const; void showUnusedRules() const; void setStartRule(std::string const &start); std::string const &startRule() const; Production const *startProduction(); static Symbol const *startSymbol() ; std::vector const &nonTerminals() const; std::vector const &terminals() const; std::vector const &productions() const; void setNonTerminalTypes(); void termToNonterm(Symbol *term, Symbol *nonTerm); private: static void updatePrecedence(Production *production, Terminal::Vector const &tv); }; inline Rules::Rules() : d_currentRule(0), d_currentProduction(0) {} inline void Rules::clearLocations() { d_location.clear(); } inline void Rules::setExpectedConflicts(size_t value) { s_nExpectedConflicts = value; } inline Terminal const *Rules::eofTerminal() { return &s_eofTerminal; } inline Terminal const *Rules::errorTerminal() { return &s_errorTerminal; } inline size_t Rules::acceptProductionNr() { return s_acceptProductionNr; } inline size_t Rules::expectedConflicts() { return s_nExpectedConflicts; } inline Production const &Rules::lastProduction() const { return *d_currentProduction; } inline std::string const &Rules::name() const { return d_currentRule->name(); } inline std::string const &Rules::sType() const { return d_currentRule->sType(); } inline size_t Rules::nProductions() const { return d_currentRule->nProductions(); } inline bool Rules::hasRules() const { return d_currentRule; } inline size_t Rules::nElements() const { return d_currentProduction->size(); } inline void Rules::setAction(Block const &block) { d_currentProduction->setAction(block); } inline void Rules::setLastTerminalValue(size_t value) { d_terminal.back()->setValue(value); } inline void Rules::setLastPrecedence(size_t value) { d_terminal.back()->setPrecedence(value); } inline void Rules::setStartRule(std::string const &start) { d_startRule = start; } inline std::string const &Rules::startRule() const { return d_startRule; } inline Production const *Rules::startProduction() { return d_currentProduction; } inline Symbol const *Rules::startSymbol() { return s_startSymbol; } inline std::vector const &Rules::nonTerminals() const { void const *vp = &d_nonTerminal; return *reinterpret_cast const *> (vp); } inline std::vector const &Rules::terminals() const { void const *vp = &d_terminal; return *reinterpret_cast const *>(vp); } inline std::vector const &Rules::productions() const { void const *vp = &d_production; return *reinterpret_cast const *> (vp); } inline Rules::FileInfo const &Rules::fileInfo(NonTerminal const *nt) const { return d_location.find(nt)->second; } inline Symbol const *Rules::symbol(size_t idx) const { return d_currentProduction->rhs(idx - 1); } #endif bisonc++-4.13.01/rules/rules.ih0000644000175000017500000000036112633316117015060 0ustar frankfrank#include "rules.h" #include #include #include #include #include #include namespace Global { void plainWarnings(); } using namespace std; using namespace FBB; bisonc++-4.13.01/rules/setnonterminaltypes.cc0000644000175000017500000000024212633316117020040 0ustar frankfrank#include "rules.ih" void Rules::setNonTerminalTypes() { for_each(d_nonTerminal.begin(), d_nonTerminal.end(), &NonTerminal::setNonTerminal); } bisonc++-4.13.01/rules/insert1.cc0000644000175000017500000000037212633316117015302 0ustar frankfrank#include "rules.ih" Terminal 
*Rules::insert(Terminal *terminal, std::string const &literal) { d_terminal.push_back(terminal); if (terminal->name() != literal) d_terminal.back()->setLiteral(literal); return d_terminal.back(); } bisonc++-4.13.01/rules/determinefirst.cc0000644000175000017500000000142512633316117016741 0ustar frankfrank#include "rules.ih" void Rules::determineFirst() { size_t lastCount = 0; // counts the number of first-elements. while (true) { // Process all non-terminals (N). Handle all the production rules of // these non-terminals. For an empty production, the N gets . // For non-empty productions: add those element's first symbols, and // stop if an element has no empty production. If, at the end, the // final element has an empty production, add as well NonTerminal::resetCounter(); for_each(d_nonTerminal.begin(), d_nonTerminal.end(), &NonTerminal::setFirst); if (lastCount == NonTerminal::counter()) break; lastCount = NonTerminal::counter(); } } bisonc++-4.13.01/rules/setprecedence.cc0000644000175000017500000000066612633316117016534 0ustar frankfrank#include "rules.ih" void Rules::setPrecedence(Terminal const *terminal) { if (!d_currentProduction->precedence()) { d_currentProduction->setPrecedence(terminal); terminal->used(); } else emsg << "%prec " << terminal << ": precedence already set to " << &Terminal::quotedName << d_currentProduction->precedence() << &Terminal::plainName << endl; } bisonc++-4.13.01/rules/sethiddenaction.cc0000644000175000017500000000154012633316117017060 0ustar frankfrank#include "rules.ih" void Rules::setHiddenAction(Block const &block) { // create (hidden) production // (when shown, 90000 is added to // the line nr to flag a hidden // action production rule Production *pp = new Production(d_nonTerminal.back(), 90000 + s_lastLineNr); d_production.push_back(pp); // put production in production // vector // add production to the hidden // rule d_nonTerminal.back()->addProduction(d_production.back()); d_production.back()->setAction(block); } bisonc++-4.13.01/rules/assignnonterminalnumbers.cc0000644000175000017500000000033512633316117021043 0ustar frankfrank#include "rules.ih" void Rules::assignNonTerminalNumbers() { NonTerminal::setFirstNr(Terminal::maxValue() + 1); for_each(d_nonTerminal.begin(), d_nonTerminal.end(), &NonTerminal::setNr); } bisonc++-4.13.01/rules/showunusedterminals.cc0000644000175000017500000000027512633316117020042 0ustar frankfrank#include "rules.ih" void Rules::showUnusedTerminals() const { Terminal::inserter(&Terminal::valueQuotedName); for_each(d_terminal.begin(), d_terminal.end(), &Terminal::unused); } bisonc++-4.13.01/rules/showunusedrules.cc0000644000175000017500000000035712633316117017177 0ustar frankfrank#include "rules.ih" void Rules::showUnusedRules() const { Terminal::inserter(&Terminal::plainName); for_each(d_production.begin(), d_production.end(), &Production::unused); if (Production::notUsed()) imsg << endl; } bisonc++-4.13.01/rules/data.cc0000644000175000017500000000055412633316117014630 0ustar frankfrank#include "rules.ih" size_t Rules::s_lastLineNr; size_t Rules::s_nExpectedConflicts; size_t Rules::s_acceptProductionNr; Symbol *Rules::s_startSymbol; Terminal Rules::s_errorTerminal("error", "_error_", Symbol::SYMBOLIC_TERMINAL); Terminal Rules::s_eofTerminal("EOF", "_EOF_", Symbol::SYMBOLIC_TERMINAL); bisonc++-4.13.01/rules/insert2.cc0000644000175000017500000000023212633316117015276 0ustar frankfrank#include "rules.ih" NonTerminal *Rules::insert(NonTerminal *nonTerminal) { d_nonTerminal.push_back(nonTerminal); return d_nonTerminal.back(); } 
bisonc++-4.13.01/rules/showrules.cc0000644000175000017500000000066212633316117015752 0ustar frankfrank#include "rules.ih" void Rules::showRules() const { if (!imsg.isActive()) return; imsg << "\n" "Production Rules\n" "(rule precedences determined from %prec or 1st terminal between " "parentheses):\n"; copy(d_production.begin(), d_production.end(), ostream_iterator(imsg, "\n")); imsg << endl; } bisonc++-4.13.01/rules/stype.cc0000644000175000017500000000043212633316117015056 0ustar frankfrank#include "rules.ih" namespace { string const ret; } string const &Rules::sType(size_t idx) const { size_t size = d_currentProduction->size(); return (idx && idx <= size) ? (*d_currentProduction)[idx - 1].sType() : ret; } bisonc++-4.13.01/rules/frame0000644000175000017500000000005112633316117014415 0ustar frankfrank#include "rules.ih" Rules::() const { } bisonc++-4.13.01/rules/updateprecedence.cc0000644000175000017500000000066312633316117017220 0ustar frankfrank#include "rules.ih" void Rules::updatePrecedence(Production *production, Terminal::Vector const &tv) { if (production->precedence()) // a precedence has already been set return; auto symbolIter = find_if(production->begin(), production->end(), isTerminal); if (symbolIter != production->end()) production->setPrecedence(Terminal::downcast(*symbolIter)); } bisonc++-4.13.01/rules/addproduction.cc0000644000175000017500000000231612633316117016554 0ustar frankfrank#include "rules.ih" void Rules::addProduction(size_t lineNr) { if (d_currentRule == 0) // there may be no return; // rule (cf. Justin // Madru's grammar in // the changelog. In // that case there's // also no production. s_lastLineNr = lineNr; // create production d_currentProduction = new Production(d_currentRule, lineNr); d_production.push_back(d_currentProduction); // put production in // production vector d_currentRule->addProduction(d_currentProduction); // add production to // current rule. // imsg.setLineNr(lineNr); // imsg << " Adding production rule " << // d_currentRule->nProductions() << // " (" << d_production.size() << " productions in total)" << // endl; } bisonc++-4.13.01/rules/termtononterm.cc0000644000175000017500000000040712633316117016631 0ustar frankfrank#include "rules.ih" void Rules::termToNonterm(Symbol *term, Symbol *nonTerm) { d_terminal.erase(find(d_terminal.begin(), d_terminal.end(), term)); for (auto pPtr: d_production) Production::termToNonterm(pPtr, term, nonTerm); delete term; } bisonc++-4.13.01/rules/augmentgrammar.cc0000644000175000017500000000074112633316117016724 0ustar frankfrank#include "rules.ih" // following this call, d_currentRule points to the augmented grammar's // startrule, which derives the startrule and has EOF in its FOLLOW set. 
void Rules::augmentGrammar(Symbol *start) { string augment = start->name() + "_$"; newRule(insert(new NonTerminal(augment)), "-N.A.-", 0); addProduction(0); s_acceptProductionNr = d_currentProduction->nr(); s_startSymbol = start; addElement(start); d_currentRule->used(); } bisonc++-4.13.01/rules/showfirst.cc0000644000175000017500000000065012633316117015744 0ustar frankfrank#include "rules.ih" void Rules::showFirst() const { if (!Arg::instance().option(0, "construction")) return; imsg << "\n" "FIRST sets:\n"; NonTerminal::inserter(&NonTerminal::nameAndFirstset); Terminal::inserter(&Terminal::plainName); copy(d_nonTerminal.begin(), d_nonTerminal.end(), ostream_iterator(imsg, "\n")); imsg << endl; } bisonc++-4.13.01/rules/addelement.cc0000644000175000017500000000077712633316117016030 0ustar frankfrank#include "rules.ih" void Rules::addElement(Symbol *symbol) { d_currentProduction->push_back(symbol); // If the display of the new element is really requested (it isn't shown in // bisonc++ 2.8.0) then pass yylineno from parser/handleproductionelement.cc // and parser/nestedblock.cc to this function so the line can be // imsg.setLineNr(lineNr); // imsg << " $" << d_currentProduction->size() << ": " << symbol << // endl; } bisonc++-4.13.01/rules/showunusednonterminals.cc0000644000175000017500000000070112633316117020547 0ustar frankfrank#include "rules.ih" void Rules::showUnusedNonTerminals() const { Global::plainWarnings(); for_each(d_nonTerminal.begin(), d_nonTerminal.end(), &NonTerminal::unused); if (NonTerminal::notUsed()) imsg << endl; for_each(d_nonTerminal.begin(), d_nonTerminal.end(), &NonTerminal::undefined); if (NonTerminal::notDefined()) imsg << endl; } bisonc++-4.13.01/rules/updateprecedences.cc0000644000175000017500000000057512633316117017405 0ustar frankfrank#include "rules.ih" // The precedence of a production rule is defined either explicitly (using // %prec) or it is defined as the priority setting of the rule's first // terminal token. If none is found the production rule has a default // priority. void Rules::updatePrecedences() { for (auto production: d_production) updatePrecedence(production, d_terminal); } bisonc++-4.13.01/rules/newrule.cc0000644000175000017500000000147112633316117015377 0ustar frankfrank#include "rules.ih" bool Rules::newRule(NonTerminal *np, string const &source, size_t lineNr) { // If the terminal definition is really requested (it isn't shown in bisonc++ // 2.8.0) then pass yylineno from parser/openrule.cc and // rules/augmentgrammar.cc to this function so the line can be shown. // // imsg << endl; // // imsg.setLineNr(lineNr); // imsg << "Adding production rule for `" << np->name() << "'" << endl; s_lastLineNr = lineNr; Production::storeFilename(source); if (!d_startRule.length()) d_startRule = np->name(); d_currentRule = np; NFileInfoMap::iterator nfIter = d_location.find(np); if (nfIter != d_location.end()) return false; // extending an existing rulename d_location[np] = FileInfo(source, lineNr); return true; } bisonc++-4.13.01/runtmp0000755000175000017500000000032612633316117013526 0ustar frankfrank#!/bin/sh PRE=`dirname $0` if [ $# == 0 ] then echo " Provide arguments to $PRE/tmp/bin/bisonc++. The skeletons in ${PRE}/skeletons are used. 
" exit 1 fi ${PRE}/tmp/bin/bisonc++ -S ${PRE}/skeletons $* bisonc++-4.13.01/scanner/0000755000175000017500000000000012633316117013703 5ustar frankfrankbisonc++-4.13.01/scanner/returntypespec.cc0000644000175000017500000000052512633316117017310 0ustar frankfrank#include "scanner.ih" void Scanner::returnTypeSpec() { string trimmed = String::trim(matched()); push(trimmed.back()); // return '%' or ';' for renewed scanning trimmed.resize(trimmed.length() - 1); setMatched(String::trim(trimmed)); begin(StartCondition__::INITIAL); leave(Parser::IDENTIFIER); } bisonc++-4.13.01/scanner/scanner1.cc0000644000175000017500000000061312633316117015724 0ustar frankfrank#include "scanner.ih" Scanner::Scanner(std::string const &infile) : ScannerBase(infile, ""), d_include(false), d_matched(matched()), d_inclusionDepth(1) { memset(d_commentChar, 0, 2); setTags(); Arg &arg = Arg::instance(); string value; if (arg.option(&value, "max-inclusion-depth")) d_maxDepth = stoul(value); else d_maxDepth = 10; } bisonc++-4.13.01/scanner/checkzeronumber.cc0000644000175000017500000000023612633316117017401 0ustar frankfrank#include "scanner.ih" void Scanner::checkZeroNumber() { if (d_number == 0) emsg << "Quoted constant " << d_matched << " equals zero" << endl; } bisonc++-4.13.01/scanner/returnquoted.cc0000644000175000017500000000106112633316117016751 0ustar frankfrank#include "scanner.ih" void Scanner::returnQuoted(void (Scanner::*handler)()) { if (d_block) { d_block += d_matched; begin(StartCondition__::block); } else { begin(StartCondition__::INITIAL); (this->*handler)(); // handles quoted // octal constant (octal) or // hex constant (hexadecimal) // why not escape sequence? (escape) leave(Parser::QUOTE); } } bisonc++-4.13.01/scanner/scannerbase.h0000644000175000017500000003021712633316117016343 0ustar frankfrank// Generated by Flexc++ V2.03.00 on Wed, 14 Oct 2015 13:59:57 +0200 #ifndef ScannerBASE_H_INCLUDED #define ScannerBASE_H_INCLUDED #include #include #include #include #include #include // $insert baseIncludes #include #include #include class ScannerBase { // idx: rule, value: tail length (NO_INCREMENTS if no tail) typedef std::vector VectorInt; static size_t const s_unavailable = std::numeric_limits::max(); enum { AT_EOF = -1 }; protected: enum Leave__ {}; enum class ActionType__ { CONTINUE, // transition succeeded, go on ECHO_CH, // echo ch itself (d_matched empty) ECHO_FIRST, // echo d_matched[0], push back the rest MATCH, // matched a rule RETURN, // no further continuation, lex returns 0. }; enum class PostEnum__ { END, // postCode called when lex__() ends POP, // postCode called after switching files RETURN, // postCode called when lex__() returns WIP // postCode called when a non-returning rule // was matched }; public: enum class StartCondition__ { // $insert startCondNames INITIAL, xstring, pstring, pxstring, string, rawstring, comment, quote, block, typespec, }; private: struct FinalData { size_t rule; size_t length; }; struct Final { FinalData std; FinalData bol; }; // class Input encapsulates all input operations. 
// Its member get() returns the next input character // $insert inputInterface class Input { std::deque d_deque; // pending input chars std::istream *d_in; // ptr for easy streamswitching size_t d_lineNr; // line count public: Input(); // iStream: dynamically allocated Input(std::istream *iStream, size_t lineNr = 1); size_t get(); // the next range void reRead(size_t ch); // push back 'ch' (if < 0x100) // push back str from idx 'fmIdx' void reRead(std::string const &str, size_t fmIdx); size_t lineNr() const { return d_lineNr; } size_t nPending() const { return d_deque.size(); } void setPending(size_t size) { d_deque.erase(d_deque.begin(), d_deque.end() - size); } void close() // force closing the stream { delete d_in; d_in = 0; // switchStreams also closes } private: size_t next(); // obtain the next character }; protected: struct StreamStruct { std::string pushedName; Input pushedInput; }; private: std::vector d_streamStack; std::string d_filename; // name of the currently processed static size_t s_istreamNr; // file. With istreams it receives // the name "", where // # is the sequence number of the // istream (starting at 1) int d_startCondition = 0; int d_lopSC = 0; size_t d_state = 0; int d_nextState; std::shared_ptr d_out; bool d_atBOL = true; // the matched text starts at BOL Final d_final; // only used interactively: std::istream *d_in; // points to the input stream std::shared_ptr d_line; // holds line fm d_in Input d_input; std::string d_matched; // matched characters std::string d_lopMatched; // matched lop-rule characters std::string::iterator d_lopIter; std::string::iterator d_lopTail; std::string::iterator d_lopEnd; size_t d_lopPending; // # pending input chars at lop1__ bool d_return; // return after a rule's action bool d_more = false; // set to true by more() size_t (ScannerBase::*d_get)() = &ScannerBase::getInput; protected: std::istream *d_in__; int d_token__; // returned by lex__ // $insert debugDecl static bool s_debug__; static std::ostringstream s_out__; static std::ostream &dflush__(std::ostream &out); private: int const (*d_dfaBase__)[71]; static int const s_dfa__[][71]; static int const (*s_dfaBase__[])[71]; enum: bool { s_interactive__ = false }; enum: size_t { s_rangeOfEOF__ = 68, s_finIdx__ = 69, s_nRules__ = 92, s_maxSizeofStreamStack__ = 10 }; static size_t const s_ranges__[]; static size_t const s_rf__[][2]; public: ScannerBase(ScannerBase const &other) = delete; ScannerBase &operator=(ScannerBase const &rhs) = delete; bool debug() const; std::string const &filename() const; std::string const &matched() const; size_t length() const; size_t lineNr() const; void setDebug(bool onOff); void switchOstream(std::ostream &out); void switchOstream(std::string const &outfilename); void switchStreams(std::istream &in, std::ostream &out = std::cout); void switchIstream(std::string const &infilename); void switchStreams(std::string const &infilename, std::string const &outfilename); // $insert interactiveDecl protected: ScannerBase(std::istream &in, std::ostream &out); ScannerBase(std::string const &infilename, std::string const &outfilename); StartCondition__ startCondition() const; // current start condition bool popStream(); std::ostream &out(); void begin(StartCondition__ startCondition); void echo() const; void leave(int retValue) const; // `accept(n)' returns all but the first `n' characters of the current // token back to the input stream, where they will be rescanned when the // scanner looks for the next match. 
// So, it matches n of the characters in the input buffer, and so it accepts // n characters, rescanning the rest. void accept(size_t nChars = 0); // former: less void redo(size_t nChars = 0); // rescan the last nChar // characters, reducing // length() by nChars void more(); void push(size_t ch); // push char to Input void push(std::string const &txt); // same: chars std::vector const &streamStack() const; void pushStream(std::istream &curStream); void pushStream(std::string const &curName); void setFilename(std::string const &name); void setMatched(std::string const &text); static std::string istreamName__(); // members used by lex__(): they end in __ and should not be used // otherwise. ActionType__ actionType__(size_t range); // next action bool return__(); // 'return' from codeblock size_t matched__(size_t ch); // handles a matched rule size_t getRange__(int ch); // convert char to range size_t get__(); // next character size_t state__() const; // current state void continue__(int ch); // handles a transition void echoCh__(size_t ch); // echoes ch, sets d_atBOL void echoFirst__(size_t ch); // handles unknown input void updateFinals__(); // update a state's Final info void noReturn__(); // d_return to false void print__() const; // optionally print token void pushFront__(size_t ch); // return char to Input void reset__(); // prepare for new cycle // next input stream: void switchStream__(std::istream &in, size_t lineNr); void lopf__(size_t tail); // matched fixed size tail void lop1__(int lopSC); // matched ab for a/b void lop2__(); // matches the LOP's b tail void lop3__(); // catch-all while matching b void lop4__(); // matches the LOP's a head private: size_t getInput(); size_t getLOP(); void p_pushStream(std::string const &name, std::istream *streamPtr); void setMatchedSize(size_t length); bool knownFinalState(); template static ReturnType constexpr as(ArgType value); static bool constexpr available(size_t value); static StartCondition__ constexpr SC(int sc); static int constexpr SC(StartCondition__ sc); }; template inline ReturnType constexpr ScannerBase::as(ArgType value) { return static_cast(value); } inline bool ScannerBase::knownFinalState() { return (d_atBOL && available(d_final.bol.rule)) || available(d_final.std.rule); } inline bool constexpr ScannerBase::available(size_t value) { return value != std::numeric_limits::max(); } inline ScannerBase::StartCondition__ constexpr ScannerBase::SC(int sc) { return as(sc); } inline int constexpr ScannerBase::SC(StartCondition__ sc) { return as(sc); } inline std::ostream &ScannerBase::out() { return *d_out; } inline void ScannerBase::push(size_t ch) { d_input.reRead(ch); } inline void ScannerBase::push(std::string const &str) { d_input.reRead(str, 0); } inline void ScannerBase::setFilename(std::string const &name) { d_filename = name; } inline void ScannerBase::setMatched(std::string const &text) { d_matched = text; } inline std::string const &ScannerBase::matched() const { return d_matched; } inline ScannerBase::StartCondition__ ScannerBase::startCondition() const { return SC(d_startCondition); } inline std::string const &ScannerBase::filename() const { return d_filename; } inline void ScannerBase::echo() const { *d_out << d_matched; } inline size_t ScannerBase::length() const { return d_matched.size(); } inline void ScannerBase::leave(int retValue) const { throw as(retValue); } inline size_t ScannerBase::lineNr() const { return d_input.lineNr(); } inline void ScannerBase::more() { d_more = true; } inline void 
ScannerBase::begin(StartCondition__ startCondition) { // $insert debug if (s_debug__) s_out__ << "Switching to StartCondition__ # " << as(startCondition) << "\n" << dflush__; // d_state is reset to 0 by reset__() d_dfaBase__ = s_dfaBase__[d_startCondition = SC(startCondition)]; } inline size_t ScannerBase::state__() const { return d_state; } inline size_t ScannerBase::get__() { return (this->*d_get)(); } inline size_t ScannerBase::getInput() { return d_input.get(); } inline bool ScannerBase::return__() { return d_return; } inline void ScannerBase::noReturn__() { d_return = false; } #endif // ScannerBASE_H_INCLUDED bisonc++-4.13.01/scanner/handlerawstring.cc0000644000175000017500000000043212633316117017405 0ustar frankfrank#include "scanner.ih" void Scanner::rawString() { d_rawString = matched(); d_rawString.erase(0, 1); // rm the R d_rawString.front() = ')'; // end sentinel is )IDENTIFIER" d_rawString.back() = '"'; more(); begin(StartCondition__::rawstring); } bisonc++-4.13.01/scanner/lexer0000644000175000017500000002612712633316117014755 0ustar frankfrank%filenames scanner %class-name = "Scanner" %debug // %print-tokens %x xstring pstring pxstring string rawstring comment quote block typespec OCTAL [0-7] OCT3 {OCTAL}{3} HEX [[:xdigit:]] HEX2 {HEX}{2} ID1 [[:alpha:]_] ID2 [[:alnum:]_] IDENT {ID1}{ID2}* NR [[:digit:]]+ %% { "{" { // open or count a nested a block d_block.open(lineNr(), filename()); begin(StartCondition__::block); } // The whitespace-eating RegExes (REs) will normally simply consume the // WS. However, if d_retWS is unequal 0 then WS is returned. This is // sometimes needed (e.g., inside code blocks to be able to return the ws // as part of the code-block). Comment is part of the WS returning REs [ \t]+ { if (d_block) d_block += " "; } [\n]+ { setLineNrs(); if (d_block) d_block += "\n"; } "//".* // ignore eoln comment in source blocks // If comment is entered from `block' either a blank or a newline will be // added to the block as soon as the matching end-comment is seen, and // the scanner will return to its block-miniscanner state "/*" { d_commentChar[0] = ' '; begin(StartCondition__::comment); } } // Blocks start at { and end at their matching } char. They may contain // comment and whitespace, but whitespace is reduced to single blanks or // newlines. All STRING and QUOTE constants are kept as-is, and are // registered as skip-ranges for $-checks { R\"{IDENT}?\( rawString(); "}" { if (d_block.close()) // close a block { begin(StartCondition__::INITIAL); return Parser::BLOCK; } } "\"" { begin(StartCondition__::string); // d_block.beginSkip(); more(); } "'" { begin(StartCondition__::quote); // d_block.beginSkip(); more(); } "$$" | "@@" d_block.dollar(lineNr(), d_matched, false); "$$." d_block.dollar(lineNr(), d_matched, true); @{NR} d_block.atIndex(lineNr(), d_matched); \$-?{NR} d_block.dollarIndex(lineNr(), d_matched, false); \$-?{NR}\. d_block.dollarIndex(lineNr(), d_matched, true); "$<>$" | "$<"{IDENT}">$" d_block.IDdollar(lineNr(), d_matched); "$<>"-?{NR} | "$<"{IDENT}">"-?{NR} d_block.IDindex(lineNr(), d_matched); . 
d_block(d_matched); } %baseclass-header[ \t]+ { begin(StartCondition__::pxstring); return Parser::BASECLASS_HEADER; } %baseclass-preinclude[ \t]+ { begin(StartCondition__::pxstring); return Parser::BASECLASS_PREINCLUDE; } %class-header[ \t]+ { begin(StartCondition__::pxstring); return Parser::CLASS_HEADER; } %class-name return Parser::CLASS_NAME; %debug return Parser::DEBUGFLAG; %error-verbose return Parser::ERROR_VERBOSE; %expect return Parser::EXPECT; %filenames[ \t]+ { begin(StartCondition__::pxstring); return Parser::FILENAMES; } "%flex" return Parser::FLEX; %implementation-header[ \t]+ { begin(StartCondition__::pxstring); return Parser::IMPLEMENTATION_HEADER; } %include[ \t]+ { begin(StartCondition__::pxstring); d_include = true; } %left return Parser::LEFT; %locationstruct return Parser::LOCATIONSTRUCT; %lsp-needed return Parser::LSP_NEEDED; %ltype[ \t]+ { begin(StartCondition__::xstring); return Parser::LTYPE; } %namespace return Parser::NAMESPACE; %negative-dollar-indices return Parser::NEG_DOLLAR; %no-lines return Parser::NOLINES; %nonassoc return Parser::NONASSOC; %parsefun-source[ \t]+ { begin(StartCondition__::pxstring); return Parser::PARSEFUN_SOURCE; } %polymorphic return Parser::POLYMORPHIC; %prec return Parser::PREC; %print-tokens return Parser::PRINT_TOKENS; %required-tokens return Parser::REQUIRED; %right return Parser::RIGHT; %scanner[ \t]+ { begin(StartCondition__::pxstring); return Parser::SCANNER; } %scanner-class-name[ \t]+ { begin(StartCondition__::pxstring); return Parser::SCANNER_CLASS_NAME; } %scanner-token-function[ \t]+ { begin(StartCondition__::pxstring); return Parser::SCANNER_TOKEN_FUNCTION; } %scanner-matched-text-function[ \t]+ { begin(StartCondition__::pxstring); return Parser::SCANNER_MATCHED_TEXT_FUNCTION; } %start return Parser::START; %stype[ \t]+ { begin(StartCondition__::xstring); return Parser::STYPE; } %target-directory[ \t]+ { begin(StartCondition__::pxstring); return Parser::TARGET_DIRECTORY; } %token return Parser::TOKEN; %type return Parser::TYPE; %union return Parser::UNION; %weak-tags return Parser::WEAK_TAGS; "%%" return Parser::TWO_PERCENTS; "'" { begin(StartCondition__::quote); more(); } "\"" { begin(StartCondition__::string); more(); } {IDENT} return Parser::IDENTIFIER; [[:digit:]]+ { d_number = stoul(d_matched); return Parser::NUMBER; } . return d_matched[0]; // pxstring is activated after a directive has been sensed. // it extracts a string, pstring or any sequence of non-blank characters, { "\"" { more(); begin(StartCondition__::string); } "<" { more(); begin(StartCondition__::pstring); } . { accept(0); begin(StartCondition__::xstring); } \n return eoln(); } // string may be entered from block and pxstring // strings are all series (including escaped chars, like \") surrounded by // double quotes: { "\"" { if (handleXstring(0)) return Parser::STRING; } "\\". | . more(); \n return eoln(); } // a pstring is a string surrounded by < and > { ">" { if (handleXstring(0)) return Parser::STRING; } "\\". | . more(); \n return eoln(); } // xstring returns the next string delimited by either blanks, tabs, // newlines or C/C++ comment. { [[:space:]] { if (handleXstring(1)) return Parser::STRING; } "//" | "/*" { if (handleXstring(2)) return Parser::STRING; } . more(); } { \){IDENT}?\" checkEndOfRawString(); .|\n more(); } { . \n { setLineNrs(); d_commentChar[0] = '\n'; } "*/" { if (!d_block) begin(StartCondition__::INITIAL); else { d_block += d_commentChar; begin(StartCondition__::block); } } } // quote may be entered from INITIAL and block. 
// quoted constants start with a quote. They may be octal or hex numbers, // escaped chars, or quoted constants { "\\"{OCT3}"'" returnQuoted(&Scanner::octal); "\\x"{HEX2}"'" returnQuoted(&Scanner::hexadecimal); "\\"[abfnrtv]"'" { if (d_block(d_matched)) begin(StartCondition__::block); else { begin(StartCondition__::INITIAL); escape(); // quoted escape char return Parser::QUOTE; } } "\\"."'" returnQuoted(&Scanner::matched2); ."'" returnQuoted(&Scanner::matched1); [^']+"'" returnQuoted(&Scanner::multiCharQuote); } // a typespec holds all chars after a ':' until a ';' or '%' (which are // pushed back). It is used as a type specification in // parser/inc/directives. Escape characters are interpreted { \\. | [^;%] more(); [;%] returnTypeSpec(); // back to INITIAL, returns IDENTIFIER } bisonc++-4.13.01/scanner/scanner.ih0000644000175000017500000000112612633316117015656 0ustar frankfrank/* Declare here what's only used in the Scanner class and let Scanner's sources include "scanner.ih" */ #include "scanner.h" #include #include #include #include #include #include #include #include "../parser/parserbase.h" #include "../options/options.h" using namespace std; using namespace FBB; inline void Scanner::matched1() { d_number = d_matched[1]; } inline void Scanner::matched2() { d_number = d_matched[2]; } bisonc++-4.13.01/scanner/eoln.cc0000644000175000017500000000017012633316117015145 0ustar frankfrank#include "scanner.ih" int Scanner::eoln() { begin(StartCondition__::INITIAL); setLineNrs(); return '\n'; } bisonc++-4.13.01/scanner/checkendofrawstring.cc0000644000175000017500000000027512633316117020250 0ustar frankfrank#include "scanner.ih" void Scanner::checkEndOfRawString() { if (matched().rfind(d_rawString) == length() - d_rawString.length()) begin(StartCondition__::block); more(); } bisonc++-4.13.01/scanner/settags.cc0000644000175000017500000000036112633316117015664 0ustar frankfrank#include "scanner.ih" void Scanner::setTags() const { emsg.setTag(filename() + ": error"); fmsg.setTag(filename() + ": fatal"); imsg.setTag(filename() + " (info)"); wmsg.setTag(filename() + ": warning"); setLineNrs(); } bisonc++-4.13.01/scanner/driver/0000755000175000017500000000000012633316117015176 5ustar frankfrankbisonc++-4.13.01/scanner/driver/build0000755000175000017500000003750312633316117016233 0ustar frankfrank#!/usr/bin/icmake -qt/tmp/driver // script generated by the C++ icmake script version 2.30 /* Configurable defines: CLASSES: string of directory-names under which sources of classes are found. E.g., CLASSES = "class1 class2" All class-names must be stored in one string. If classes are removed from the CLASSES definition or if the names in the CLASSES definition are reordered, the compilation should start again from scratch. */ string CLASSES; void setClasses() { // ADD ADDITIONAL DIRECTORIES CONTAINING SOURCES OF CLASSES HERE // Use the construction `CLASSES += "classname1 classname2";' etc. CLASSES += " "; } /* Default values for the following variables are found in $IM/default/defines.im BISON_FLAGS: This directive is only used when a grammar is generated using bison++. It defines the set of flags that are given to bison++ when bison++ generates the parser. By default the following flags are specified: -v -l The -d and -o flags are always provided (not configurable) BUILD_LIBRARY: define this if you want to create a library for the object modules. Undefined by default: so NO LIBRARY IS BUILT. This links ALL object files to a program, which is a faster process than linking to a library. 
But it can bloat the executable: all o-modules, rather than those that are really used, become part of the program's code. BUILD_PROGRAM: define if a program is to be built. If not defined, library maintenance is assumed (by default it is defined to the name of the program to be created). COMPILER: The compiler to use. COPT: C-options used by COMPILER LOPT: Define this (default: to "`wx-config --lib`") if a wxWindows program is constructed. In that case, you probably also want to define the COPT option `wx-config --cxxflags`, using, e.g., the following definition: #define COPT "-Wall `wx-config --cxxflags`" ECHO_REQUEST: ON (default) if command echoing is wanted, otherwise: set to OFF GDB: define if gdb-symbolic debug information is wanted (not defined by default) GRAMMAR_LINES: When this directive is defined, #line directives will be generated at the first C++ compound statement in each individual grammar specification file. Undefine if no such #line directives are required. LIBS: Extra libraries used for linking LIBPATH: Extra library-searchpaths used for linking QT: Define this (default: to "qt") if the unthreaded QT library is used. Define as "qt-mt" if the threaded QT library is used. If set, header files are grepped for the occurrence of the string '^[[:space:]]*Q_OBJECT[[:space:]]*$'. If found, moc -o moc.cc .h is called if the moc-file doesn't exist or is older than the .h file. Also, if defined the proper QT library is linked, assuming that the library is found in the ld-search path (E.g., see the environment variable $LIBRARY_PATH). Note that namespaces are NOT part of the build-script: they are only listed below for convenience. When they must be altered, the defaults must be changed in $IM/default/defines.im RELINK: Defined by default, causing a program to be relinked every time the script is called. Do not define it if relinking should only occur if a source is compiled. No effect for library maintenance. Current values: */ //#define BUILD_LIBRARY #define BUILD_PROGRAM "driver" #define COMPILER "g++" #define COPT "-Wall" //#define LOPT "`wx-config --libs`" #define ECHO_REQUEST 1 //#define GDB "-g" #define LIBS "bisonc++" #define LIBPATH "../.." // local namespace is: FBB // using-declarations generated for: std:FBB // qt-mt can be used to select the threaded QT library //#define QT "qt" // NO CONFIGURABLE PARTS BELOW THIS LINE /* V A R S . I M */ string // contain options for cwd, // current WD libs, // extra libs, e.g., "-lrss -licce" libpath, // extra lib-paths, eg, "-L../rss" copt, // Compiler options lopt, // Linker options libxxx, // full library-path ofiles, // wildcards for o-files sources, // sources to be used current, // contains name of current dir. programname; // the name of the program to create int nClasses, // number of classes/subdirectories program; // 1: program is built list classes; // list of classes/directories /* parser.im */ void parser() { #ifdef GRAMBUILD chdir("parser/gramspec"); #ifdef GRAMMAR_LINES system("./grambuild lines"); #else system("./grambuild"); #endif chdir(".."); if ( exists("grammar") && "grammar" younger "parse.cc" ) // new parser needed #ifdef BISON_FLAGS exec("bisonc++", BISON_FLAGS, "grammar"); #else exec("bisonc++", "grammar"); #endif chdir(".."); #endif } /* S C A N N E R . 
I M */ void scanner() { string interactive; #ifdef INTERACTIVE interactive = "-I"; #endif #ifdef GRAMBUILD chdir("scanner"); if ( // new lexer needed exists("lexer") && ( "lexer" younger "yylex.cc" || ( exists("../parser/parser.h") && "../parser/parser.h" younger "yylex.cc" ) ) ) exec("flex", interactive, "lexer"); chdir(".."); #endif } /* I N I T I A L . I M */ void initialize() { echo(ECHO_REQUEST); sources = "*.cc"; ofiles = "o/*.o"; // std set of o-files copt = COPT; #ifdef GDB copt += " " + GDB; #endif #ifdef BUILD_PROGRAM program = 1; programname = BUILD_PROGRAM; #else program = 0; #endif; cwd = chdir("."); #ifdef GRAMBUILD if (exists("parser")) // subdir parser exists { CLASSES += "parser "; parser(); } if (exists("scanner")) // subdir scanner exists { CLASSES += "scanner "; scanner(); } #endif setClasses(); // remaining classes classes = strtok(CLASSES, " "); // list of classes nClasses = sizeof(classes); } /* M O C . I M */ void moc(string class) { string hfile; string mocfile; int ret; hfile = class + ".h"; mocfile = "moc" + class + ".cc"; if ( hfile younger mocfile // no mocfile or younger h file && // and Q_OBJECT found in .h file !system(P_NOCHECK, "grep '^[[:space:]]*Q_OBJECT[;[:space:]]*$' " + hfile) ) // then call moc. system("moc -o " + mocfile + " " + hfile); } /* O B J F I L E S . I M */ list objfiles(list files) { string file, objfile; int i; for (i = 0; i < sizeof(files); i++) { file = element(i, files); // determine element of the list objfile = "./o/" + change_ext(file, "o"); // make obj-filename if (objfile younger file) // objfile is younger { files -= (list)file; // remove the file from the list i--; // reduce i to test the next } } return (files); } /* A L T E R E D . I M */ list altered(list files, string target) { int i; string file; for (i = 0; i < sizeof(files); i++) // try all elements of the list { file = element(i, files); // use element i of the list if (file older target) // a file is older than the target { files -= (list)file; // remove the file from the list i--; // reduce i to inspect the next } // file of the list } return (files); // return the new list } /* F I L E L I S T . I M */ list file_list(string type, string library) { list files; files = makelist(type); // make all files of certain type #ifdef BUILD_LIBRARY files = altered(files, library); // keep all files newer than lib. #endif files = objfiles(files); // remove if younger .obj exist return (files); } /* L I N K . I M */ void link(string library) { printf("\n"); exec(COMPILER, "-o", programname, #ifdef BUILD_LIBRARY "-l" + library, #else ofiles, #endif libs, #ifdef QT "-l" + QT, #endif "-L.", libpath, lopt #ifdef LOPT , LOPT #endif #ifndef GDB , "-s" #endif ); printf("ok: ", programname, "\n"); } /* P R E F I X C L . I M */ void prefix_class(string class_id) { list o_files; string o_file; int i; chdir("o"); o_files = makelist("*.o"); for (i = 0; o_file = element(i, o_files); i++) exec("mv", o_file, class_id + o_file); chdir(".."); } /* R M C L A S S P . I M */ #ifdef BUILD_LIBRARY string rm_class_id(string class_id, string ofile) { string ret; int index, n; n = strlen(ofile); for (index = strlen(class_id); index < n; index++) ret += element(index, ofile); return ret; } #endif void rm_class_prefix(string class_id) { #ifdef BUILD_LIBRARY list o_files; string o_file; int i; chdir("o"); o_files = makelist("*.o"); for (i = 0; o_file = element(i, o_files); i++) exec("mv", o_file, rm_class_id(class_id, o_file)); chdir(".."); #endif } /* C C O M P I L E . 
I M */ void c_compile(list cfiles) { string nextfile; int i; if (!exists("o")) system("mkdir o"); if (sizeof(cfiles)) // files to compile ? { printf("\ncompiling: ", current, "\n\n"); // compile all files separately for (i = 0; nextfile = element(i, cfiles); i++) exec(COMPILER, "-c -o o/" + change_ext(nextfile, "o"), copt, nextfile); printf("\n"); } printf("ok: ", current, "\n"); } /* U P D A T E L I . I M */ void updatelib(string library) { #ifdef BUILD_LIBRARY list arlist, objlist; string to, from; objlist = makelist("o/*.o"); if (!sizeof(objlist)) return; printf("\n"); exec("ar", "rvs", library, "o/*.o"); exec("rm", "o/*.o"); printf("\n"); #endif } /* S T D C P P . I M */ void std_cpp(string library) { list cfiles; cfiles = file_list(sources, library); // make list of all cpp-files c_compile(cfiles); // compile cpp-files } /* C P P M A K E . C CPP files are processed by stdmake. Arguments of CPPMAKE: cpp_make( string mainfile, : name of the main .cpp file, or "" for library maintenance string library, : name of the local library to use/create (without lib prefix or .a/.so suffix (E.g., use `main' for `libmain.a') ) Both mainfile and library MUST be in the current directory */ void cpp_make(string mainfile, string library) { int index; string class; if (nClasses) ofiles += " */o/*.o"; // set ofiles for no LIBRARY use // make library name #ifdef BUILD_LIBRARY libxxx = chdir(".") + "lib" + library + ".a"; #endif // first process all classes for (index = 0; index < nClasses; index++) { class = element(index, classes); // next class to process chdir(class); // change to directory current = "subdir " + class; #ifdef QT moc(class); // see if we should call moc #endif std_cpp(libxxx); // compile all files chdir(cwd); // go back to parent dir } current = "auxiliary " + sources + " files"; std_cpp(libxxx); // compile all files in current dir #ifdef BUILD_LIBRARY // prefix class-number for .o files for (index = 0; index < nClasses; index++) { current = element(index, classes); // determine class name chdir( current); // chdir to a class directory. prefix_class((string)index); updatelib(libxxx); chdir(cwd); // go back to parent dir } current = ""; // no class anymore updatelib(libxxx); // update lib in current dir #endif if (mainfile != "") // mainfile -> do link { link(library); printf ( "\nProgram construction completed.\n" "\n" ); } } /* S E T L I B S . 
I M */ void setlibs() { #ifdef LIBS int n, index; list cut; cut = strtok(LIBS, " "); // cut op libraries n = sizeof(cut); for (index = 0; index < n; index++) libs += " -l" + element(index, cut); cut = strtok(LIBPATH, " "); // cut up the paths n = sizeof(cut); for (index = 0; index < n; index++) libpath += " -L" + element(index, cut); #endif } void main() { initialize(); setlibs(); #ifdef BUILD_PROGRAM cpp_make ( "driver.cc", // program source "driver" // static program library ); #else cpp_make ( "", "driver" // static- or so-library ); #endif } bisonc++-4.13.01/scanner/driver/driver.h0000644000175000017500000000032312633316117016640 0ustar frankfrank#ifndef _INCLUDED_DRIVER_H_ #define _INCLUDED_DRIVER_H_ #include #include #include #include "../scanner.h" namespace FBB {} using namespace std; using namespace FBB; #endif bisonc++-4.13.01/scanner/driver/grammar0000644000175000017500000000116712633316117016554 0ustar frankfrank%union { int i; size_t/*unsigned*/ u; std::string *s; }; %type NUMBER number %class-name Parser %filenames parser %parsefun-source parse.cc %debug %token NUMBER %% startrule: expressions 'q' { cout << "Done\n"; ACCEPT(); } ; expressions: expressions expression | expression ; expression: number '+' number '=' { cout << $1 << " + " << $3 << " = " << $1 + $3 << endl; } ; number: NUMBER { $$ = atoi(d_scanner.YYText()); cout << "Saw " << d_scanner.YYText() << endl; } ; bisonc++-4.13.01/scanner/driver/driver.cc0000644000175000017500000000066212633316117017004 0ustar frankfrank/* driver.cc */ #include "driver.h" int main(int argc, char **argv) try { Scanner scanner(argv[1]); while (true) { Scanner::Token t = scanner.lex(); if (t == Scanner::ENDFILE) break; cout << "Token " << t << ": " << scanner.text() << endl; } } catch (exception const &err) { cout << err.what() << " (" << errno << ")" << endl; return 1; } bisonc++-4.13.01/scanner/multicharquote.cc0000644000175000017500000000023012633316117017253 0ustar frankfrank#include "scanner.ih" void Scanner::multiCharQuote() { emsg << "multiple characters in quoted constant " << d_matched << endl; d_number = 0; } bisonc++-4.13.01/scanner/setlinenrs.cc0000644000175000017500000000026412633316117016402 0ustar frankfrank#include "scanner.ih" void Scanner::setLineNrs() const { emsg.setLineNr(lineNr()); fmsg.setLineNr(lineNr()); imsg.setLineNr(lineNr()); wmsg.setLineNr(lineNr()); } bisonc++-4.13.01/scanner/handlexstring.cc0000644000175000017500000000137712633316117017074 0ustar frankfrank#include "scanner.ih" bool Scanner::handleXstring(size_t nRedo) { redo(nRedo); setLineNrs(); if (d_block) { begin(StartCondition__::block); d_block += d_matched; return false; } begin(StartCondition__::INITIAL); if (not d_include) return true; d_include = false; string filename = string("\"<").find(d_matched[0]) == 0 ? 
Options::undelimit(d_matched) : d_matched; if (++d_inclusionDepth > d_maxDepth) fmsg << "maximum inclusion depth (" << d_inclusionDepth << ", " << d_maxDepth << ") exceeded" << endl; pushStream(filename); setTags(); return false; } bisonc++-4.13.01/scanner/escape.cc0000644000175000017500000000156312633316117015457 0ustar frankfrank#include "scanner.ih" namespace { pair escapeChars[] = { pair('a', '\a'), pair('b', '\b'), pair('f', '\f'), pair('n', '\n'), pair('r', '\r'), pair('t', '\t'), pair('v', '\v'), }; static size_t const n = sizeof(escapeChars) / sizeof(pair); class Find { size_t d_key; public: Find(size_t key) : d_key(key) {} bool operator()(pair const &element) const { return element.first == d_key; } }; } void Scanner::escape() { d_number = find_if(escapeChars, escapeChars + n, Find(d_matched[2]))->second; } bisonc++-4.13.01/scanner/scanner.h0000644000175000017500000000613212633316117015507 0ustar frankfrank// Generated by Flexc++ V0.09.50 on Wed, 08 Feb 2012 11:05:26 +0100 #ifndef Scanner_H_INCLUDED_ #define Scanner_H_INCLUDED_ // $insert baseclass_h #include "scannerbase.h" #include #include #include #include "../block/block.h" namespace FBB { class Mstream; } // $insert classHead class Scanner: public ScannerBase { size_t d_number; // only valid after tokens NUMBER and // after escape(), octal() and // hexadecimal(). Illegal (long) // character constants (> 1 char) have bit // 8 set. bool d_include; // set to true/false by lexer-actions char d_commentChar[2]; // set to ' ' in `lexer' when C // comment without \n is matched, // otherwise set to \n. See // `lexer' for details Block d_block; // action block retrieved fm the input std::string d_rawString; // Raw-string sentinel std::string d_canonicalQuote; // canonical quoted ident. std::string const &d_matched; size_t d_maxDepth; // max. 
file inclusion depth size_t d_inclusionDepth; // actual inclusion depth public: Scanner(std::string const &infile); // $insert lexFunctionDecl int lex(); Block &block(); std::string const &canonicalQuote(); void clearBlock(); size_t number() const; bool hasBlock() const; void beginTypeSpec(); private: void print(); int lex__(); int executeAction__(size_t ruleNr); void preCode(); // re-implement this function for code that must // be exec'ed before the patternmatching starts void postCode(PostEnum__ type); bool handleXstring(size_t nRedo); // performs pushStream int eoln(); void returnTypeSpec(); void returnQuoted(void (Scanner::*handler)()); // handle quoted // constants void escape(); void checkZeroNumber(); void octal(); void hexadecimal(); void matched2(); void matched1(); void multiCharQuote(); void rawString(); void checkEndOfRawString(); void setTags() const; void setLineNrs() const; bool popStream(); }; // $insert inlineLexFunction inline int Scanner::lex() { return lex__(); } inline void Scanner::preCode() { } inline void Scanner::postCode(PostEnum__ type) { } inline void Scanner::print() { print__(); } inline Block &Scanner::block() { return d_block; } inline void Scanner::clearBlock() { d_block.clear(); } inline size_t Scanner::number() const { return d_number; } inline bool Scanner::hasBlock() const { return not d_block.empty(); } inline void Scanner::beginTypeSpec() { begin(StartCondition__::typespec); } #endif // Scanner_H_INCLUDED_ bisonc++-4.13.01/scanner/frame0000644000175000017500000000005512633316117014720 0ustar frankfrank#include "scanner.ih" Scanner::() const { } bisonc++-4.13.01/scanner/hexadecimal.cc0000644000175000017500000000023112633316117016452 0ustar frankfrank#include "scanner.ih" void Scanner::hexadecimal() { istringstream istr(d_matched.substr(3)); istr >> hex >> d_number; checkZeroNumber(); } bisonc++-4.13.01/scanner/popstream.cc0000644000175000017500000000023312633316117016222 0ustar frankfrank#include "scanner.ih" bool Scanner::popStream() { bool ret = ScannerBase::popStream(); d_inclusionDepth -= ret; setTags(); return ret; } bisonc++-4.13.01/scanner/lex.cc0000644000175000017500000043511412633316117015012 0ustar frankfrank// Generated by Flexc++ V2.03.00 on Wed, 14 Oct 2015 13:59:57 +0200 #include #include #include #include // $insert class_ih #include "scanner.ih" // s_ranges__: use (unsigned) characters as index to obtain // that character's range-number. // The range for EOF is defined in a constant in the // class header file size_t const ScannerBase::s_ranges__[] = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 5, 6, 7, 8, 9,10,11,12,13,14,15,16,16,17,18,19,20,20, 20,20,20,20,20,20,21,21,22,23,24,25,26,27,28,29,29,29,29,29,29,30,30,30,30, 30,30,30,30,30,30,30,31,32,32,32,32,32,32,32,32,33,34,35,35,36,37,38,39,40, 41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65, 66,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67, 67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67, 67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67, 67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67, 67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67, 67,67,67,67,67,67, }; // s_dfa__ contains the rows of *all* DFAs ordered by start state. The // enum class StartCondition__ is defined in the baseclass header // StartCondition__::INITIAL is always 0. 
Each entry defines the row to // transit to if the column's character range was sensed. Row numbers are // relative to the used DFA, and d_dfaBase__ is set to the first row of // the subset to use. The row's final two values are respectively the // rule that may be matched at this state, and the rule's FINAL flag. If // the final value equals FINAL (= 1) then, if there's no continuation, // the rule is matched. If the BOL flag (8) is also set (so FINAL + BOL (= // 9) is set) then the rule only matches when d_atBOL is also true. int const ScannerBase::s_dfa__[][71] = { // INITIAL {-1, 1, 2, 3, 3, 1, 3, 4, 3, 3, 5, 3, 6, 3, 3, 3, 3, 3, 3, 7, 8, 8, 3, 3, 3, 3, 3, 3, 3, 9, 9, 9, 9, 3, 3, 3, 9, 3, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9,10, 3, 3, 3,-1, -1, -1}, // 0 {-1, 1,-1,-1,-1, 1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 1, -1}, // 1 {-1,-1, 2,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 2, -1}, // 2 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 61, -1}, // 3 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 58, -1}, // 4 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,11,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,12, 13,14,15,16,-1,-1,17,-1,-1,18,-1,19,-1,20,-1,21,22,23,24,-1, 25,-1,-1,-1,-1,-1,-1,-1,-1, 61, -1}, // 5 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 57, -1}, // 6 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,26,-1,-1,-1,27, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 61, -1}, // 7 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 8, 8,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 60, -1}, // 8 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 9, 9,-1,-1,-1,-1,-1,-1,-1, 9, 9, 9, 9,-1,-1,-1, 9,-1, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9,-1,-1,-1,-1,-1, 59, -1}, // 9 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 0, -1}, // 10 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 56, -1}, // 11 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,28,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 12 
{-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,29,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 13 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,30,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 14 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,31,-1,-1,-1,-1, -1,32,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 15 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,33,-1,-1,34,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 16 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,35,36,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 17 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,37,-1,-1,-1,-1,-1,-1,-1,-1,-1,38,-1,-1,-1,39,40,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 18 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,41,-1, -1,-1,42,-1,-1,-1,-1,-1,-1,-1,-1,-1,43,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 19 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,44,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,45,-1,-1,46,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 20 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,47,-1,-1,-1,48,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 21 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 49,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,50,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 22 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,51,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,52,-1,-1,-1,-1,-1,-1,-1, -1,-1,53,-1,-1,-1,-1,-1,-1, -1, -1}, // 23 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,54,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 24 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,55,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 25 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 4, -1}, // 26 {-1,27,-1,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27, 27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27, 27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27, 27,27,27,27,27,27,27,27,-1, 3, -1}, // 27 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 
-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,56,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 28 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,57,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 29 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,58, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 30 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,59,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 31 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,60,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 32 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,61,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 33 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,62,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 34 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,63,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 35 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 64,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 36 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,65,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 37 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 66,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 38 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,67,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 39 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,68,-1,-1,-1,-1,-1,-1, -1, -1}, // 40 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,69,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 41 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,70,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 42 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,71,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,72,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 43 
{-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,73,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 44 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,74,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 45 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,75,-1,-1,-1,76,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 46 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,77,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 47 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,78,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 48 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,79,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 49 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,80,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,81,-1,-1,-1,-1,-1,-1, -1, -1}, // 50 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,82,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 51 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,83,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 52 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,84,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 53 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,85,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 54 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,86,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 55 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,87,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 56 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,88,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 57 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,89,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 58 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 
-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,90,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 59 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,91,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 60 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,92,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 61 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,93,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 62 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,94,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 63 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,95,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 64 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,96,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 65 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,97,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 66 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,98,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 67 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,99,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 68 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,100,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 69 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,101,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 70 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,102,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 71 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,103,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 72 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,104,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 73 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,105,-1,-1,-1,-1,-1,-1, -1, -1}, // 74 
{-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 106,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 75 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,107,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 76 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,108,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 77 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,109,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 78 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,110,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 79 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,111,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 80 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,112,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 81 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,113,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 82 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,114,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 83 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,115,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 84 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,116,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 85 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,117,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 86 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 118,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 87 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,119,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 88 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,120,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 89 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 
-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,121,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 90 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 122,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 91 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,123,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 92 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 28, -1}, // 93 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,124,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 94 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,125,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 95 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 31, -1}, // 96 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,126,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 97 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,127,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 98 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,128,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 99 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,129,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 100 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,130,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 101 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,131,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 102 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,132,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 103 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,133,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 104 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,134,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 105 
{-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 41, -1}, // 106 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,135,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 107 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,136,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 108 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,137,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 109 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,138,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 110 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,139,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 111 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,140,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 112 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,141,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 113 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,142,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 114 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 53, -1}, // 115 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,143,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 116 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,144,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 117 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,145,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 118 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,146,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 119 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 24, -1}, // 120 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,147,-1,-1, 
-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 121 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,148,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 122 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,149,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 123 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,150,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 124 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,151,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 125 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,152,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 126 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,153,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 127 {-1,154,-1,-1,-1,154,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 128 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,155,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 129 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,156,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 130 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,157,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 131 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,158,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 132 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,159,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 133 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,160,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 134 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,161,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 135 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 
-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,162,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 136 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 44, -1}, // 137 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,163,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 138 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 49, -1}, // 139 {-1,164,-1,-1,-1,164,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 140 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,165,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 141 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 52, -1}, // 142 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 54, -1}, // 143 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,166,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 144 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,167,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 145 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,168,-1,-1,-1,-1,-1,169,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 146 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,170, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 147 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 26, -1}, // 148 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,171,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 149 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,172,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 150 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,173,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 151 
{-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,174,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 152 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,175,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 153 {-1,154,-1,-1,-1,154,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 34, -1}, // 154 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,176,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 155 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,177, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 156 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,178,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 157 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,179,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 158 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,180,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 159 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,181,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 160 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,182,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 161 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,183,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 162 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,184,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 163 {-1,164,-1,-1,-1,164,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 50, -1}, // 164 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,185,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 165 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,186,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 166 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 
-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,187,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 167 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,188,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 168 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,189,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 169 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,190,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 170 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,191,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 171 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,192,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 172 {-1,193,-1,-1,-1,193,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 173 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,194,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 174 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,195,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 175 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 196,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 176 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,197,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 177 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,198,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 178 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 199,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 179 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,200,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 180 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,201,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 181 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 
-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,202,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 182 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,203,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 183 {-1,204,-1,-1,-1,204,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,205,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 184 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,206,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 185 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,207,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 186 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,208,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 187 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,209,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 188 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,210,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 189 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,211,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 190 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,212,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 191 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,213,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 192 {-1,193,-1,-1,-1,193,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 30, -1}, // 193 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,214,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 194 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,215,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 195 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,216,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 196 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,217,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 197 
{-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 37, -1}, // 198 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 38, -1}, // 199 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,218,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 200 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,219,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 201 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,220,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 202 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,221,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 203 {-1,204,-1,-1,-1,204,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 45, -1}, // 204 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 222,-1,-1,-1,-1,-1,-1,-1,-1,-1,223,-1,-1,-1,-1,-1,-1,224,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 205 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,225,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 206 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,226,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 207 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,227,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 208 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,228,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 209 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,229,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 210 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,230, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 211 {-1,231,-1,-1,-1,231,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 212 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 
-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,232,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 213 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,233,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 214 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,234,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 215 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 35, -1}, // 216 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,235,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 217 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,236,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 218 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,237,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 219 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,238,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 220 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,239,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 221 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,240,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 222 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,241,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 223 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,242,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 224 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,243,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 225 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 55, -1}, // 226 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,244,-1,-1,-1,-1,-1,-1,-1,245,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 227 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 
-1,-1,246,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 228 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 23, -1}, // 229 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,247,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 230 {-1,231,-1,-1,-1,231,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 27, -1}, // 231 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,248,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 232 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,249,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 233 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 33, -1}, // 234 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,250,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 235 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,251,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 236 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 252,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 237 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,253,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 238 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,254,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 239 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,255,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 240 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,256,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 241 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,257,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 242 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,258,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 243 
{-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,259,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 244 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,260,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 245 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,261,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 246 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,262,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 247 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,263,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 248 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,264,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 249 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,265,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 250 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,266,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 251 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 40, -1}, // 252 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,267,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 253 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,268,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 254 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,269,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 255 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 270,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 256 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,271,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 257 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 272,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 258 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 
-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,273,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 259 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,274,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 260 {-1,275,-1,-1,-1,275,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 261 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,276,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 262 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,277,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 263 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 278,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 264 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,279,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 265 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,280,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 266 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 42, -1}, // 267 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,281,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 268 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,282,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 269 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,283,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 270 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,284,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 271 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,285,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 272 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,286,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 273 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 
-1,-1,-1,-1,-1,-1,287,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 274 {-1,275,-1,-1,-1,275,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 22, -1}, // 275 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 25, -1}, // 276 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,288,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 277 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,289,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 278 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,290,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 279 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 291,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 280 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,292,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 281 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,293,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 282 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,294,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 283 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,295,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 284 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,296,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 285 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,297,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 286 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,298,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 287 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,299,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 288 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 32, -1}, // 289 
{-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,300,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 290 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,301,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 291 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,302,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 292 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,303,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 293 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,304,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 294 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,305,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 295 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,306,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 296 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,307,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 297 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 308,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 298 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,309,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 299 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,310,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 300 {-1,311,-1,-1,-1,311,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 301 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 43, -1}, // 302 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,312,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 303 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,313,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 304 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 
-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,314,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 305 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,315,-1,-1,-1,-1,-1,-1, -1, -1}, // 306 {-1,316,-1,-1,-1,316,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 307 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,317,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 308 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,318,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 309 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,319,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 310 {-1,311,-1,-1,-1,311,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 39, -1}, // 311 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,320,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 312 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,321,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 313 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,322,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 314 {-1,323,-1,-1,-1,323,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 315 {-1,316,-1,-1,-1,316,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 20, -1}, // 316 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,324,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 317 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,325,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 318 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,326,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 319 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 
-1,-1,327,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 320 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,328,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 321 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 329,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 322 {-1,323,-1,-1,-1,323,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 51, -1}, // 323 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,330,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 324 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,331,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 325 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,332,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 326 {-1,333,-1,-1,-1,333,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 327 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,334,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 328 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,335,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 329 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,336,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 330 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,337,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 331 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,338,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 332 {-1,333,-1,-1,-1,333,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 46, -1}, // 333 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,339,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 334 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,340,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 335 
{-1,341,-1,-1,-1,341,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 336 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,342,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 337 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 343,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 338 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,344,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 339 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,345,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 340 {-1,341,-1,-1,-1,341,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 21, -1}, // 341 {-1,346,-1,-1,-1,346,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 342 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,347,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 343 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,348,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 344 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,349,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 345 {-1,346,-1,-1,-1,346,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 29, -1}, // 346 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,350,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 347 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,351,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 348 {-1,352,-1,-1,-1,352,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 349 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 36, -1}, // 350 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 
-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,353,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 351 {-1,352,-1,-1,-1,352,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 47, -1}, // 352 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 354,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 353 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,355,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 354 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,356,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 355 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,357,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 356 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,358,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 357 {-1,359,-1,-1,-1,359,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 358 {-1,359,-1,-1,-1,359,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 48, -1}, // 359 // xstring {-1, 1, 1, 1, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,-1, -1, -1}, // 0 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 74, -1}, // 1 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 77, -1}, // 2 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 4,-1,-1,-1, 5, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 77, -1}, // 3 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 76, -1}, // 4 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 75, -1}, // 5 // pstring {-1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 4, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,-1, -1, -1}, // 0 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 72, -1}, // 1 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 73, -1}, // 2 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 70, -1}, // 3 {-1, 5,-1, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,-1, 72, -1}, // 4 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 71, -1}, // 5 // pxstring {-1, 1, 2, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 4, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,-1, -1, -1}, // 0 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 64, -1}, // 1 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 65, -1}, // 2 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 62, -1}, // 3 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 63, -1}, // 4 // string {-1, 1, 2, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 4, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,-1, -1, -1}, // 0 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 68, -1}, // 1 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 69, -1}, // 2 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 66, -1}, // 3 {-1, 5,-1, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,-1, 68, -1}, // 4 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 
-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 67, -1}, // 5 // rawstring {-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,-1, -1, -1}, // 0 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 79, -1}, // 1 {-1,-1,-1,-1,-1,-1,-1, 3,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 4, 4, 4, 4,-1,-1,-1, 4,-1, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,-1,-1,-1,-1,-1, 79, -1}, // 2 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 78, -1}, // 3 {-1,-1,-1,-1,-1,-1,-1, 3,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 4, 4,-1,-1,-1,-1,-1,-1,-1, 4, 4, 4, 4,-1,-1,-1, 4,-1, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,-1,-1,-1,-1,-1, -1, -1}, // 4 // comment {-1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,-1, -1, -1}, // 0 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 80, -1}, // 1 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 81, -1}, // 2 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 4, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 80, -1}, // 3 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 82, -1}, // 4 // quote {-1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 4, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,-1, -1, -1}, // 0 {-1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,-1, -1, -1}, // 1 {-1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 6, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,-1, -1, -1}, // 2 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 5,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 3 {-1, 7, 2, 7, 7, 7, 7, 7, 7, 7, 7, 7, 8, 7, 7, 7, 7, 7, 7, 7, 9, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,10,10, 7, 7, 7,10, 7, 7, 7, 7, 7, 7, 7,10, 7, 7, 7,10, 7,10, 7,10, 7,11, 7, 7, 7, 7, 7, 7,-1, -1, 
-1}, // 4 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 87, -1}, // 5 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 88, -1}, // 6 {-1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,12, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,-1, -1, -1}, // 7 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,12,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 87, -1}, // 8 {-1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,12, 2, 2, 2, 2, 2, 2, 2, 13, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,-1, -1, -1}, // 9 {-1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,14, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,-1, -1, -1}, // 10 {-1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,12, 2, 2, 2, 2, 2, 2, 2, 15,15, 2, 2, 2, 2, 2, 2, 2,15, 2, 2, 2, 2, 2, 2, 2, 2,15,15, 15,15,15,15, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,-1, -1, -1}, // 11 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 86, -1}, // 12 {-1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 6, 2, 2, 2, 2, 2, 2, 2, 16, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,-1, -1, -1}, // 13 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 85, -1}, // 14 {-1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 6, 2, 2, 2, 2, 2, 2, 2, 17,17, 2, 2, 2, 2, 2, 2, 2,17, 2, 2, 2, 2, 2, 2, 2, 2,17,17, 17,17,17,17, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,-1, -1, -1}, // 15 {-1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,18, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,-1, -1, -1}, // 16 {-1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,19, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,-1, -1, -1}, // 17 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 83, -1}, // 18 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 84, -1}, // 19 // block {-1, 1, 2, 3, 3, 1, 3, 4, 3, 5, 3, 3, 6, 3, 3, 3, 3, 3, 3, 7, 3, 3, 3, 3, 3, 3, 3, 3, 8, 3, 3, 9, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,10, 3,11, 3,-1, -1, -1}, // 0 {-1, 1,-1,-1,-1, 1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 1, -1}, // 1 {-1,-1, 2,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 2, -1}, // 2 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 19, -1}, // 3 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 7, -1}, // 4 {-1,-1,-1,-1,-1,-1,-1,-1,-1,12,-1,-1,-1,-1,-1,-1,-1,13,-1,-1, 14,14,-1,-1,15,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 19, -1}, // 5 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 8, -1}, // 6 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,16,-1,-1,-1,17, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 19, -1}, // 7 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 18,18,-1,-1,-1,-1,-1,-1,19,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 19, -1}, // 8 {-1,-1,-1,-1,-1,-1,-1,20,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 19, -1}, // 9 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 0, -1}, // 10 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 6, -1}, // 11 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,21,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 9, -1}, // 12 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 14,14,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 13 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,22,-1, 14,14,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 13, -1}, // 14 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,23,-1,-1,24,24,24,24,-1,-1,-1,24,-1,24,24, 24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24, 24,24,24,24,-1,-1,-1,-1,-1, -1, -1}, // 15 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 
-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 4, -1}, // 16 {-1,17,-1,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17, 17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17, 17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17, 17,17,17,17,17,17,17,17,-1, 3, -1}, // 17 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 18,18,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 12, -1}, // 18 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 10, -1}, // 19 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,25,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,26,26,26,26,-1,-1,-1,26,-1,26,26, 26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26, 26,26,26,26,-1,-1,-1,-1,-1, -1, -1}, // 20 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 11, -1}, // 21 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 14, -1}, // 22 {-1,-1,-1,-1,-1,-1,-1,-1,-1,27,-1,-1,-1,-1,-1,-1,-1,28,-1,-1, 29,29,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 23 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 24,24,-1,-1,-1,-1,30,-1,-1,24,24,24,24,-1,-1,-1,24,-1,24,24, 24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24, 24,24,24,24,-1,-1,-1,-1,-1, -1, -1}, // 24 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 5, -1}, // 25 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,25,-1,-1,-1,-1,-1,-1, 26,26,-1,-1,-1,-1,-1,-1,-1,26,26,26,26,-1,-1,-1,26,-1,26,26, 26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26, 26,26,26,26,-1,-1,-1,-1,-1, -1, -1}, // 26 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 15, -1}, // 27 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 29,29,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 28 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 29,29,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 17, -1}, // 29 {-1,-1,-1,-1,-1,-1,-1,-1,-1,31,-1,-1,-1,-1,-1,-1,-1,32,-1,-1, 33,33,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 30 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 
16, -1}, // 31 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 33,33,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, -1, -1}, // 32 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, 33,33,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 18, -1}, // 33 // typespec {-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,-1, -1, -1}, // 0 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 90, -1}, // 1 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 91, -1}, // 2 {-1, 4,-1, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,-1, 90, -1}, // 3 {-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1, 89, -1}, // 4 }; int const (*ScannerBase::s_dfaBase__[])[71] = { s_dfa__ + 0, s_dfa__ + 360, s_dfa__ + 366, s_dfa__ + 372, s_dfa__ + 377, s_dfa__ + 383, s_dfa__ + 388, s_dfa__ + 393, s_dfa__ + 413, s_dfa__ + 447, }; size_t ScannerBase::s_istreamNr = 0; // $insert inputImplementation ScannerBase::Input::Input() : d_in(0), d_lineNr(1) {} ScannerBase::Input::Input(std::istream *iStream, size_t lineNr) : d_in(iStream), d_lineNr(lineNr) {} size_t ScannerBase::Input::get() { switch (size_t ch = next()) // get the next input char { case '\n': ++d_lineNr; // FALLING THROUGH default: if (s_debug__) { s_out__ << "Input::get() returns "; if (isprint(ch)) s_out__ << '`' << static_cast(ch) << '\''; else s_out__ << "(int)" << static_cast(ch); s_out__ << '\n' << dflush__; } return ch; } } size_t ScannerBase::Input::next() { size_t ch; if (d_deque.empty()) // deque empty: next char fm d_in { if (d_in == 0) return AT_EOF; ch = d_in->get(); return *d_in ? ch : static_cast(AT_EOF); } ch = d_deque.front(); d_deque.pop_front(); return ch; } void ScannerBase::Input::reRead(size_t ch) { if (ch < 0x100) { if (s_debug__) s_out__ << "Input::reRead(" << ch << ")\n" << dflush__; if (ch == '\n') --d_lineNr; d_deque.push_front(ch); } } void ScannerBase::Input::reRead(std::string const &str, size_t fm) { for (size_t idx = str.size(); idx-- > fm; ) reRead(str[idx]); } ScannerBase::ScannerBase(std::istream &in, std::ostream &out) : d_filename("-"), d_out(new std::ostream(out.rdbuf())), // $insert interactiveInit d_in(0), d_input(new std::istream(in.rdbuf())), d_dfaBase__(s_dfa__) {} void ScannerBase::switchStream__(std::istream &in, size_t lineNr) { d_input.close(); d_input = Input(new std::istream(in.rdbuf()), lineNr); } ScannerBase::ScannerBase(std::string const &infilename, std::string const &outfilename) : d_filename(infilename), d_out(outfilename == "-" ? new std::ostream(std::cout.rdbuf()) : outfilename == "" ? 
new std::ostream(std::cerr.rdbuf()) : new std::ofstream(outfilename)), d_input(new std::ifstream(infilename)), d_dfaBase__(s_dfa__) {} void ScannerBase::switchStreams(std::istream &in, std::ostream &out) { switchStream__(in, 1); switchOstream(out); } void ScannerBase::switchOstream(std::ostream &out) { *d_out << std::flush; d_out.reset(new std::ostream(out.rdbuf())); } // $insert debugFunctions bool ScannerBase::s_debug__ = true; std::ostringstream ScannerBase::s_out__; void ScannerBase::setDebug(bool onOff) { s_debug__ = onOff; } bool ScannerBase::debug() const { return s_debug__; } std::ostream &ScannerBase::dflush__(std::ostream &out) { std::ostringstream &s_out__ = dynamic_cast(out); std::cout << " " << s_out__.str() << std::flush; s_out__.clear(); s_out__.str(""); return out; } void ScannerBase::redo(size_t nChars) { size_t from = nChars >= length() ? 0 : length() - nChars; d_input.reRead(d_matched, from); d_matched.resize(from); } void ScannerBase::switchOstream(std::string const &outfilename) { *d_out << std::flush; d_out.reset( outfilename == "-" ? new std::ostream(std::cout.rdbuf()) : outfilename == "" ? new std::ostream(std::cerr.rdbuf()) : new std::ofstream(outfilename)); } void ScannerBase::switchIstream(std::string const &infilename) { d_input.close(); d_filename = infilename; d_input = Input(new std::ifstream(infilename)); d_atBOL = true; } void ScannerBase::switchStreams(std::string const &infilename, std::string const &outfilename) { switchOstream(outfilename); switchIstream(infilename); } void ScannerBase::pushStream(std::istream &istr) { std::istream *streamPtr = new std::istream(istr.rdbuf()); p_pushStream("(istream)", streamPtr); } void ScannerBase::pushStream(std::string const &name) { std::istream *streamPtr = new std::ifstream(name); if (!*streamPtr) { delete streamPtr; throw std::runtime_error("Cannot read " + name); } p_pushStream(name, streamPtr); } void ScannerBase::p_pushStream(std::string const &name, std::istream *streamPtr) { if (d_streamStack.size() == s_maxSizeofStreamStack__) { delete streamPtr; throw std::length_error("Max stream stack size exceeded"); } d_streamStack.push_back(StreamStruct{d_filename, d_input}); d_filename = name; d_input = Input(streamPtr); d_atBOL = true; } bool ScannerBase::popStream() { d_input.close(); if (d_streamStack.empty()) return false; StreamStruct &top = d_streamStack.back(); d_input = top.pushedInput; d_filename = top.pushedName; d_streamStack.pop_back(); return true; } // See the manual's section `Run-time operations' section for an explanation // of this member. ScannerBase::ActionType__ ScannerBase::actionType__(size_t range) { d_nextState = d_dfaBase__[d_state][range]; if (d_nextState != -1) // transition is possible return ActionType__::CONTINUE; if (knownFinalState()) // FINAL state reached return ActionType__::MATCH; if (d_matched.size()) return ActionType__::ECHO_FIRST; // no match, echo the 1st char return range != s_rangeOfEOF__ ? ActionType__::ECHO_CH : ActionType__::RETURN; } void ScannerBase::accept(size_t nChars) // old name: less { if (nChars < d_matched.size()) { d_input.reRead(d_matched, nChars); d_matched.resize(nChars); } } void ScannerBase::setMatchedSize(size_t length) { d_input.reRead(d_matched, length); // reread the tail section d_matched.resize(length); // return what's left } // At this point a rule has been matched. The next character is not part of // the matched rule and is sent back to the input. 
The final match length // is determined, the index of the matched rule is determined, and then // d_atBOL is updated. Finally the rule's index is returned. // The numbers behind the finalPtr assignments are explained in the // manual's `Run-time operations' section. size_t ScannerBase::matched__(size_t ch) { // $insert debug if (s_debug__) s_out__ << "MATCH" << "\n" << dflush__; d_input.reRead(ch); FinalData *finalPtr; if (not d_atBOL) // not at BOL finalPtr = &d_final.std; // then use the std rule (3, 4) // at BOL else if (not available(d_final.std.rule)) // only a BOL rule avail. finalPtr = &d_final.bol; // use the BOL rule (6) else if (not available(d_final.bol.rule)) // only a std rule is avail. finalPtr = &d_final.std; // use the std rule (7) else if ( // Both are available (8) d_final.bol.length != // check lengths of matched texts d_final.std.length // unequal lengths, use the rule ) // having the longer match length finalPtr = d_final.bol.length > d_final.std.length ? &d_final.bol : &d_final.std; else // lengths are equal: use 1st rule finalPtr = d_final.bol.rule < d_final.std.rule ? &d_final.bol : &d_final.std; setMatchedSize(finalPtr->length); d_atBOL = d_matched.back() == '\n'; // $insert debug if (s_debug__) s_out__ << "match buffer contains `" << d_matched << "'" << "\n" << dflush__; return finalPtr->rule; } size_t ScannerBase::getRange__(int ch) // using int to prevent casts { return ch == AT_EOF ? as<size_t>(s_rangeOfEOF__) : s_ranges__[ch]; } // At this point d_nextState contains the next state and continuation is // possible. The just read char. is appended to d_matched void ScannerBase::continue__(int ch) { // $insert debug if (s_debug__) s_out__ << "CONTINUE, NEXT STATE: " << d_nextState << "\n" << dflush__; d_state = d_nextState; if (ch != AT_EOF) d_matched += ch; } void ScannerBase::echoCh__(size_t ch) { // $insert debug if (s_debug__) s_out__ << "ECHO_CH" << "\n" << dflush__; *d_out << as<char>(ch); d_atBOL = ch == '\n'; } // At this point there is no continuation. The last character is // pushed back into the input stream as well as all but the first char. in // the buffer. The first char. in the buffer is echoed to stderr. // If there isn't any 1st char yet then the current char doesn't fit any // rules and that char is then echoed void ScannerBase::echoFirst__(size_t ch) { // $insert debug if (s_debug__) s_out__ << "ECHO_FIRST" << "\n" << dflush__; d_input.reRead(ch); d_input.reRead(d_matched, 1); echoCh__(d_matched[0]); } // Update the rules associated with the current state, do this separately // for BOL and std rules. // If a rule was set, update the rule index and the current d_matched // length.
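// A small, self-contained sketch (not part of bisonc++) of the bookkeeping
// described above and implemented by updateFinals__() below: while scanning,
// remember the most recent accepting rule separately for begin-of-line (BOL)
// and standard rules and, once a match is final, prefer the longer match,
// breaking ties by the smaller (earlier) rule number. The names FinalInfo
// and pickFinal are invented for this illustration only.
#include <cstddef>
#include <iostream>

namespace sketch
{
    size_t const unavailable = static_cast<size_t>(-1); // no rule recorded

    struct FinalInfo                    // one candidate accepting rule
    {
        size_t rule;                    // rule index, or `unavailable'
        size_t length;                  // length of the matched text
    };

    // select between the BOL and the standard candidate the way matched__()
    // does: longest match wins, equal lengths prefer the earlier rule
    FinalInfo const &pickFinal(FinalInfo const &bol, FinalInfo const &std_,
                               bool atBOL)
    {
        if (not atBOL or bol.rule == unavailable)
            return std_;
        if (std_.rule == unavailable)
            return bol;
        if (bol.length != std_.length)
            return bol.length > std_.length ? bol : std_;
        return bol.rule < std_.rule ? bol : std_;
    }
}

int main()
{
    sketch::FinalInfo bol{ 2, 5 };      // BOL rule 2 matched 5 characters
    sketch::FinalInfo std_{ 7, 5 };     // std rule 7 matched 5 characters
    std::cout << "selected rule: "
              << sketch::pickFinal(bol, std_, true).rule << '\n'; // prints 2
}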
void ScannerBase::updateFinals__() { size_t len = d_matched.size(); int const *rf = d_dfaBase__[d_state] + s_finIdx__; if (rf[0] != -1) // update to the latest std rule { // $insert debug if (s_debug__) s_out__ << "latest std rule: " << rf[0] << ", len = " << len << "\n" << dflush__; d_final.std = FinalData { as(rf[0]), len }; } if (rf[1] != -1) // update to the latest bol rule { // $insert debug if (s_debug__) s_out__ << "latest BOL rule: " << rf[0] << ", len = " << len << "\n" << dflush__; d_final.bol = FinalData { as(rf[1]), len }; } } void ScannerBase::reset__() { d_final = Final{ FinalData{s_unavailable, 0}, FinalData {s_unavailable, 0} }; d_state = 0; d_return = true; if (!d_more) d_matched.clear(); d_more = false; } int Scanner::executeAction__(size_t ruleIdx) try { // $insert debug if (s_debug__) s_out__ << "Executing actions of rule " << ruleIdx << "\n" << dflush__; switch (ruleIdx) { // $insert actions case 0: { #line 22 "lexer" { d_block.open(lineNr(), filename()); begin(StartCondition__::block); } } break; case 1: { #line 32 "lexer" { if (d_block) d_block += " "; } } break; case 2: { #line 37 "lexer" { setLineNrs(); if (d_block) d_block += "\n"; } } break; case 4: { #line 48 "lexer" { d_commentChar[0] = ' '; begin(StartCondition__::comment); } } break; case 5: { #line 59 "lexer" rawString(); } break; case 6: { #line 61 "lexer" { if (d_block.close()) { begin(StartCondition__::INITIAL); return Parser::BLOCK; } } } break; case 7: { #line 69 "lexer" { begin(StartCondition__::string); more(); } } break; case 8: { #line 75 "lexer" { begin(StartCondition__::quote); more(); } } break; case 9: case 10: { #line 82 "lexer" d_block.dollar(lineNr(), d_matched, false); } break; case 11: { #line 84 "lexer" d_block.dollar(lineNr(), d_matched, true); } break; case 12: { #line 86 "lexer" d_block.atIndex(lineNr(), d_matched); } break; case 13: { #line 88 "lexer" d_block.dollarIndex(lineNr(), d_matched, false); } break; case 14: { #line 90 "lexer" d_block.dollarIndex(lineNr(), d_matched, true); } break; case 15: case 16: { #line 93 "lexer" d_block.IDdollar(lineNr(), d_matched); } break; case 17: case 18: { #line 96 "lexer" d_block.IDindex(lineNr(), d_matched); } break; case 19: { #line 98 "lexer" d_block(d_matched); } break; case 20: { #line 101 "lexer" { begin(StartCondition__::pxstring); return Parser::BASECLASS_HEADER; } } break; case 21: { #line 105 "lexer" { begin(StartCondition__::pxstring); return Parser::BASECLASS_PREINCLUDE; } } break; case 22: { #line 109 "lexer" { begin(StartCondition__::pxstring); return Parser::CLASS_HEADER; } } break; case 23: { #line 113 "lexer" return Parser::CLASS_NAME; } break; case 24: { #line 114 "lexer" return Parser::DEBUGFLAG; } break; case 25: { #line 115 "lexer" return Parser::ERROR_VERBOSE; } break; case 26: { #line 116 "lexer" return Parser::EXPECT; } break; case 27: { #line 117 "lexer" { begin(StartCondition__::pxstring); return Parser::FILENAMES; } } break; case 28: { #line 121 "lexer" return Parser::FLEX; } break; case 29: { #line 122 "lexer" { begin(StartCondition__::pxstring); return Parser::IMPLEMENTATION_HEADER; } } break; case 30: { #line 126 "lexer" { begin(StartCondition__::pxstring); d_include = true; } } break; case 31: { #line 130 "lexer" return Parser::LEFT; } break; case 32: { #line 131 "lexer" return Parser::LOCATIONSTRUCT; } break; case 33: { #line 132 "lexer" return Parser::LSP_NEEDED; } break; case 34: { #line 133 "lexer" { begin(StartCondition__::xstring); return Parser::LTYPE; } } break; case 35: { #line 137 "lexer" return 
Parser::NAMESPACE; } break; case 36: { #line 138 "lexer" return Parser::NEG_DOLLAR; } break; case 37: { #line 139 "lexer" return Parser::NOLINES; } break; case 38: { #line 140 "lexer" return Parser::NONASSOC; } break; case 39: { #line 141 "lexer" { begin(StartCondition__::pxstring); return Parser::PARSEFUN_SOURCE; } } break; case 40: { #line 145 "lexer" return Parser::POLYMORPHIC; } break; case 41: { #line 146 "lexer" return Parser::PREC; } break; case 42: { #line 147 "lexer" return Parser::PRINT_TOKENS; } break; case 43: { #line 148 "lexer" return Parser::REQUIRED; } break; case 44: { #line 149 "lexer" return Parser::RIGHT; } break; case 45: { #line 150 "lexer" { begin(StartCondition__::pxstring); return Parser::SCANNER; } } break; case 46: { #line 154 "lexer" { begin(StartCondition__::pxstring); return Parser::SCANNER_CLASS_NAME; } } break; case 47: { #line 158 "lexer" { begin(StartCondition__::pxstring); return Parser::SCANNER_TOKEN_FUNCTION; } } break; case 48: { #line 162 "lexer" { begin(StartCondition__::pxstring); return Parser::SCANNER_MATCHED_TEXT_FUNCTION; } } break; case 49: { #line 167 "lexer" return Parser::START; } break; case 50: { #line 168 "lexer" { begin(StartCondition__::xstring); return Parser::STYPE; } } break; case 51: { #line 172 "lexer" { begin(StartCondition__::pxstring); return Parser::TARGET_DIRECTORY; } } break; case 52: { #line 176 "lexer" return Parser::TOKEN; } break; case 53: { #line 177 "lexer" return Parser::TYPE; } break; case 54: { #line 178 "lexer" return Parser::UNION; } break; case 55: { #line 179 "lexer" return Parser::WEAK_TAGS; } break; case 56: { #line 180 "lexer" return Parser::TWO_PERCENTS; } break; case 57: { #line 182 "lexer" { begin(StartCondition__::quote); more(); } } break; case 58: { #line 187 "lexer" { begin(StartCondition__::string); more(); } } break; case 59: { #line 192 "lexer" return Parser::IDENTIFIER; } break; case 60: { #line 194 "lexer" { d_number = stoul(d_matched); return Parser::NUMBER; } } break; case 61: { #line 199 "lexer" return d_matched[0]; } break; case 62: { #line 204 "lexer" { more(); begin(StartCondition__::string); } } break; case 63: { #line 208 "lexer" { more(); begin(StartCondition__::pstring); } } break; case 64: { #line 212 "lexer" { accept(0); begin(StartCondition__::xstring); } } break; case 65: { #line 216 "lexer" return eoln(); } break; case 66: { #line 222 "lexer" { if (handleXstring(0)) return Parser::STRING; } } break; case 67: case 68: { #line 227 "lexer" more(); } break; case 69: { #line 228 "lexer" return eoln(); } break; case 70: { #line 233 "lexer" { if (handleXstring(0)) return Parser::STRING; } } break; case 71: case 72: { #line 238 "lexer" more(); } break; case 73: { #line 239 "lexer" return eoln(); } break; case 74: { #line 245 "lexer" { if (handleXstring(1)) return Parser::STRING; } } break; case 75: case 76: { #line 251 "lexer" { if (handleXstring(2)) return Parser::STRING; } } break; case 77: { #line 256 "lexer" more(); } break; case 78: { #line 260 "lexer" checkEndOfRawString(); } break; case 79: { #line 262 "lexer" more(); } break; case 81: { #line 267 "lexer" { setLineNrs(); d_commentChar[0] = '\n'; } } break; case 82: { #line 271 "lexer" { if (!d_block) begin(StartCondition__::INITIAL); else { d_block += d_commentChar; begin(StartCondition__::block); } } } break; case 83: { #line 286 "lexer" returnQuoted(&Scanner::octal); } break; case 84: { #line 288 "lexer" returnQuoted(&Scanner::hexadecimal); } break; case 85: { #line 290 "lexer" { if (d_block(d_matched)) 
begin(StartCondition__::block); else { begin(StartCondition__::INITIAL); escape(); return Parser::QUOTE; } } } break; case 86: { #line 301 "lexer" returnQuoted(&Scanner::matched2); } break; case 87: { #line 303 "lexer" returnQuoted(&Scanner::matched1); } break; case 88: { #line 305 "lexer" returnQuoted(&Scanner::multiCharQuote); } break; case 89: case 90: { #line 314 "lexer" more(); } break; case 91: { #line 316 "lexer" returnTypeSpec(); } break; } // $insert debug if (s_debug__) s_out__ << "Rule " << ruleIdx << " did not do 'return'" << "\n" << dflush__; noReturn__(); return 0; } catch (Leave__ value) { return static_cast(value); } int Scanner::lex__() { reset__(); preCode(); while (true) { size_t ch = get__(); // fetch next char size_t range = getRange__(ch); // determine the range updateFinals__(); // update the state's Final info switch (actionType__(range)) // determine the action { case ActionType__::CONTINUE: continue__(ch); continue; case ActionType__::MATCH: { d_token__ = executeAction__(matched__(ch)); if (return__()) { print(); postCode(PostEnum__::RETURN); return d_token__; } break; } case ActionType__::ECHO_FIRST: echoFirst__(ch); break; case ActionType__::ECHO_CH: echoCh__(ch); break; case ActionType__::RETURN: // $insert debug if (s_debug__) s_out__ << "EOF_REACHED" << "\n" << dflush__; if (!popStream()) { postCode(PostEnum__::END); return 0; } postCode(PostEnum__::POP); continue; } // switch postCode(PostEnum__::WIP); reset__(); preCode(); } // while } void ScannerBase::print__() const { } bisonc++-4.13.01/scanner/octal.cc0000644000175000017500000000040612633316117015314 0ustar frankfrank#include "scanner.ih" void Scanner::octal() { istringstream istr(d_matched.substr(2)); istr >> oct >> d_number; if (d_number > 0xff) emsg << "Quoted constant " << d_matched << " exceeds 0177" << endl; else checkZeroNumber(); } bisonc++-4.13.01/scanner/canonicalquote.cc0000644000175000017500000000033112633316117017214 0ustar frankfrank#include "scanner.ih" string const &Scanner::canonicalQuote() { ostringstream oss; oss << "'\\x" << setfill('0') << setw(2) << hex << d_number << "'"; return d_canonicalQuote = oss.str(); } bisonc++-4.13.01/skeletons/0000755000175000017500000000000012633316117014261 5ustar frankfrankbisonc++-4.13.01/skeletons/print.in0000644000175000017500000000031512633316117015744 0ustar frankfrank enum { _UNDETERMINED_ = -2 }; std::cout << "Token: " << symbol__(d_token__) << ", text: `"; if (d_token__ == _UNDETERMINED_) std::cout << "'\n"; else std::cout << \@matchedtextfunction << "'\n"; bisonc++-4.13.01/skeletons/lex.in0000644000175000017500000000012412633316117015376 0ustar frankfrankinline int \@::lex() { @printtokens print(); @end return \@tokenfunction; } bisonc++-4.13.01/skeletons/ltypedata.in0000644000175000017500000000004612633316117016600 0ustar frankfrankLTYPE__ d_loc__; LTYPE__ *d_lsp__; bisonc++-4.13.01/skeletons/bisonc++.cc0000644000175000017500000003636712633316117016212 0ustar frankfrank$insert class.ih // The FIRST element of SR arrays shown below uses `d_type', defining the // state's type, and `d_lastIdx' containing the last element's index. If // d_lastIdx contains the REQ_TOKEN bitflag (see below) then the state needs // a token: if in this state d_token__ is _UNDETERMINED_, nextToken() will be // called // The LAST element of SR arrays uses `d_token' containing the last retrieved // token to speed up the (linear) seach. Except for the first element of SR // arrays, the field `d_action' is used to determine what to do next. 
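// (Illustrative aside, not part of the skeleton: a minimal sketch of a
// linear scan over one such SR row, assuming only the layout just described:
// element 0 carries the state type and the index of the last element, every
// other element pairs a token with an action, and the last element serves as
// the fall-back slot. SrElement and findAction are invented names; the code
// is kept in comments so the surrounding skeleton text stays intact.)
//
//  #include <iostream>
//
//  struct SrElement { int token; int action; };
//
//  int findAction(SrElement const *row, int token)
//  {
//      int lastIdx = row[0].action;            // 2nd field of element 0
//      for (int idx = 1; idx != lastIdx; ++idx)
//          if (row[idx].token == token)
//              return row[idx].action;
//      return row[lastIdx].action;     // token not listed: fall-back action
//  }
//
//  int main()
//  {
//      SrElement row[] =
//      {
//          {   0,  3 },        // state type 0, last element at index 3
//          { 257,  5 },        // on token 257: action 5
//          { 258, -2 },        // on token 258: action -2
//          {   0, -4 },        // fall-back action -4
//      };
//      std::cout << findAction(row, 258) << '\n';   // prints -2
//  }
//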
If // positive, it represents the next state (used with SHIFT); if zero, it // indicates `ACCEPT', if negative, -d_action represents the number of the // rule to reduce to. // `lookup()' tries to find d_token__ in the current SR array. If it fails, and // there is no default reduction UNEXPECTED_TOKEN__ is thrown, which is then // caught by the error-recovery function. // The error-recovery function will pop elements off the stack until a state // having bit flag ERR_ITEM is found. This state has a transition on _error_ // which is applied. In this _error_ state, while the current token is not a // proper continuation, new tokens are obtained by nextToken(). If such a // token is found, error recovery is successful and the token is // handled according to the error state's SR table and parsing continues. // During error recovery semantic actions are ignored. // A state flagged with the DEF_RED flag will perform a default // reduction if no other continuations are available for the current token. // The ACCEPT STATE never shows a default reduction: when it is reached the // parser returns ACCEPT(). During the grammar // analysis phase a default reduction may have been defined, but it is // removed during the state-definition phase. // So: // s_x[] = // { // [_field_1_] [_field_2_] // // First element: {state-type, idx of last element}, // Other elements: {required token, action to perform}, // ( < 0: reduce, // 0: ACCEPT, // > 0: next state) // Last element: {set to d_token__, action to perform} // } // When the --thread-safe option is specified, all static data are defined as // const. If --thread-safe is not provided, the state-tables are not defined // as const, since the lookup() function below will modify them $insert debugincludes namespace // anonymous { char const author[] = "Frank B. Brokken (f.b.brokken@rug.nl)"; enum { STACK_EXPANSION = 5 // size to expand the state-stack with when // full }; enum ReservedTokens { PARSE_ACCEPT = 0, // `ACCEPT' TRANSITION _UNDETERMINED_ = -2, _EOF_ = -1, _error_ = 256 }; enum StateType // modify statetype/data.cc when this enum changes { NORMAL, ERR_ITEM, REQ_TOKEN, ERR_REQ, // ERR_ITEM | REQ_TOKEN DEF_RED, // state having default reduction ERR_DEF, // ERR_ITEM | DEF_RED REQ_DEF, // REQ_TOKEN | DEF_RED ERR_REQ_DEF // ERR_ITEM | REQ_TOKEN | DEF_RED }; struct PI__ // Production Info { size_t d_nonTerm; // identification number of this production's // non-terminal size_t d_size; // number of elements in this production }; struct SR__ // Shift Reduce info, see its description above { union { int _field_1_; // initializer, allowing initializations // of the SR s_[] arrays int d_type; int d_token; }; union { int _field_2_; int d_lastIdx; // if negative, the state uses SHIFT int d_action; // may be negative (reduce), // postive (shift), or 0 (accept) size_t d_errorState; // used with Error states }; }; $insert 4 staticdata $insert namespace-open // If the parsing function call uses arguments, then provide an overloaded // function. The code below doesn't rely on parameters, so no arguments are // required. Furthermore, parse uses a function try block to allow us to do // ACCEPT and ABORT from anywhere, even from within members called by actions, // simply throwing the appropriate exceptions. 
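// A self-contained sketch (not bisonc++ code) of that mechanism: the accept
// and abort calls throw a dedicated value, and the driver's function try
// block turns it into a return value, so control can leave the parser from
// arbitrarily deeply nested action code. All names below are invented for
// illustration only.
#include <iostream>

namespace sketch
{
    enum class Return { accept = 0, abort = 1 };

    void acceptParse()                  // callable from any action or member
    {
        throw Return::accept;
    }
    void abortParse()
    {
        throw Return::abort;
    }

    void deeplyNestedAction(bool ok)    // stands in for a semantic action
    {
        if (ok)
            acceptParse();              // leaves parse() directly
        abortParse();
    }

    int parse(bool ok)                  // function try block, cf. the skeleton
    try
    {
        deeplyNestedAction(ok);         // the `parsing loop'
        return 0;
    }
    catch (Return value)
    {
        return static_cast<int>(value);
    }
}

int main()
{
    std::cout << sketch::parse(true) << ' ' << sketch::parse(false) << '\n';
    // prints: 0 1
}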
\@Base::\@Base() : d_stackIdx__(-1), $insert 4 debuginit d_nErrors__(0), $insert 4 requiredtokens d_acceptedTokens__(d_requiredTokens__), d_token__(_UNDETERMINED_), d_nextToken__(_UNDETERMINED_) {} $insert debugfunctions void \@::print__() { $insert print } void \@Base::clearin() { d_token__ = d_nextToken__ = _UNDETERMINED_; } void \@Base::push__(size_t state) { if (static_cast(d_stackIdx__ + 1) == d_stateStack__.size()) { size_t newSize = d_stackIdx__ + STACK_EXPANSION; d_stateStack__.resize(newSize); d_valueStack__.resize(newSize); $insert 8 LTYPEresize } ++d_stackIdx__; d_stateStack__[d_stackIdx__] = d_state__ = state; *(d_vsp__ = &d_valueStack__[d_stackIdx__]) = d_val__; $insert 4 LTYPEpush $insert 4 debug "push(state " << state << stype__(", semantic TOS = ", d_val__, ")") << ')' } void \@Base::popToken__() { d_token__ = d_nextToken__; d_val__ = d_nextVal__; d_nextVal__ = STYPE__(); d_nextToken__ = _UNDETERMINED_; } void \@Base::pushToken__(int token) { d_nextToken__ = d_token__; d_nextVal__ = d_val__; d_token__ = token; } void \@Base::pop__(size_t count) { $insert 4 debug "pop(" << count << ") from stack having size " << (d_stackIdx__ + 1) if (d_stackIdx__ < static_cast(count)) { $insert 8 debug "Terminating parse(): unrecoverable input error at token " << symbol__(d_token__) ABORT(); } d_stackIdx__ -= count; d_state__ = d_stateStack__[d_stackIdx__]; d_vsp__ = &d_valueStack__[d_stackIdx__]; $insert 4 LTYPEpop $insert 4 debug "pop(): next state: " << d_state__ << ", token: " << symbol__(d_token__) + $insert 4 debug stype__("semantic: ", d_val__) } inline size_t \@Base::top__() const { return d_stateStack__[d_stackIdx__]; } void \@::executeAction(int production) try { if (d_token__ != _UNDETERMINED_) pushToken__(d_token__); // save an already available token $insert defaultactionreturn $insert 4 debug "executeAction(): of rule " << production + $insert 4 debug stype__(", semantic [TOS]: ", d_val__) << " ..." switch (production) { $insert 8 actioncases } $insert 4 debug "... action of rule " << production << " completed" + $insert 4 debug stype__(", semantic: ", d_val__) } catch (std::exception const &exc) { exceptionHandler__(exc); } inline void \@Base::reduce__(PI__ const &pi) { d_token__ = pi.d_nonTerm; pop__(pi.d_size); $insert 4 debug "reduce(): by rule " << (&pi - s_productionInfo) + $insert 4 debug " to N-terminal " << symbol__(d_token__) << stype__(", semantic = ", d_val__) } // If d_token__ is _UNDETERMINED_ then if d_nextToken__ is _UNDETERMINED_ another // token is obtained from lex(). Then d_nextToken__ is assigned to d_token__. void \@::nextToken() { if (d_token__ != _UNDETERMINED_) // no need for a token: got one return; // already if (d_nextToken__ != _UNDETERMINED_) { popToken__(); // consume pending token $insert 8 debug "nextToken(): popped " << symbol__(d_token__) << stype__(", semantic = ", d_val__) } else { ++d_acceptedTokens__; // accept another token (see // errorRecover()) d_token__ = lex(); if (d_token__ <= 0) d_token__ = _EOF_; } print(); $insert 4 debug "nextToken(): using " << symbol__(d_token__) << stype__(", semantic = ", d_val__) } // if the final transition is negative, then we should reduce by the rule // given by its positive value. 
Note that the `recovery' parameter is only // used with the --debug option int \@::lookup(bool recovery) { $insert 0 threading if (elementPtr == lastElementPtr) // reached the last element { if (elementPtr->d_action < 0) // default reduction { $insert 8 debug "lookup(" << d_state__ << ", " << symbol__(d_token__) + $insert 8 debug "): default reduction by rule " << -elementPtr->d_action return elementPtr->d_action; } $insert 8 debug "lookup(" << d_state__ << ", " << symbol__(d_token__) << "): Not " + $insert 8 debug "found. " << (recovery ? "Continue" : "Start") << " error recovery." // No default reduction, so token not found, so error. throw UNEXPECTED_TOKEN__; } // not at the last element: inspect the nature of the action // (< 0: reduce, 0: ACCEPT, > 0: shift) int action = elementPtr->d_action; $insert 0 debuglookup return action; } // When an error has occurred, pop elements off the stack until the top // state has an error-item. If none is found, the default recovery // mode (which is to abort) is activated. // // If EOF is encountered without being appropriate for the current state, // then the error recovery will fall back to the default recovery mode. // (i.e., parsing terminates) void \@::errorRecovery() try { if (d_acceptedTokens__ >= d_requiredTokens__)// only generate an error- { // message if enough tokens ++d_nErrors__; // were accepted. Otherwise error("Syntax error"); // simply skip input $insert 8 errorverbose } $insert 4 debug "errorRecovery(): " << d_nErrors__ << " error(s) so far. State = " << top__() // get the error state while (not (s_state[top__()][0].d_type & ERR_ITEM)) { $insert 8 debug "errorRecovery(): pop state " << top__() pop__(); } $insert 4 debug "errorRecovery(): state " << top__() << " is an ERROR state" // In the error state, lookup a token allowing us to proceed. // Continuation may be possible following multiple reductions, // but eventuall a shift will be used, requiring the retrieval of // a terminal token. If a retrieved token doesn't match, the catch below // will ensure the next token is requested in the while(true) block // implemented below: int lastToken = d_token__; // give the unexpected token a // chance to be processed // again. pushToken__(_error_); // specify _error_ as next token push__(lookup(true)); // push the error state d_token__ = lastToken; // reactivate the unexpected // token (we're now in an // ERROR state). bool gotToken = true; // the next token is a terminal while (true) { try { if (s_state[d_state__]->d_type & REQ_TOKEN) { gotToken = d_token__ == _UNDETERMINED_; nextToken(); // obtain next token } int action = lookup(true); if (action > 0) // push a new state { push__(action); popToken__(); $insert 16 debug "errorRecovery() SHIFT state " << action + $insert 16 debug ", continue with " << symbol__(d_token__) if (gotToken) { $insert 20 debug "errorRecovery() COMPLETED: next state " + $insert 20 debug action << ", no token yet" d_acceptedTokens__ = 0; return; } } else if (action < 0) { // no actions executed on recovery but save an already // available token: if (d_token__ != _UNDETERMINED_) pushToken__(d_token__); // next token is the rule's LHS reduce__(s_productionInfo[-action]); $insert 16 debug "errorRecovery() REDUCE by rule " << -action + $insert 16 debug ", token = " << symbol__(d_token__) } else ABORT(); // abort when accepting during // error recovery } catch (...) 
{ if (d_token__ == _EOF_) ABORT(); // saw inappropriate _EOF_ popToken__(); // failing token now skipped } } } catch (ErrorRecovery__) // This is: DEFAULT_RECOVERY_MODE { ABORT(); } // The parsing algorithm: // Initially, state 0 is pushed on the stack, and d_token__ as well as // d_nextToken__ are initialized to _UNDETERMINED_. // // Then, in an eternal loop: // // 1. If a state does not have REQ_TOKEN no token is assigned to // d_token__. If the state has REQ_TOKEN, nextToken() is called to // determine d_nextToken__ and d_token__ is set to // d_nextToken__. nextToken() will not call lex() unless d_nextToken__ is // _UNDETERMINED_. // // 2. lookup() is called: // d_token__ is stored in the final element's d_token field of the // state's SR_ array. // // 3. The current token is looked up in the state's SR_ array // // 4. Depending on the result of the lookup() function the next state is // shifted on the parser's stack, a reduction by some rule is applied, // or the parsing function returns ACCEPT(). When a reduction is // called for, any action that may have been defined for that // reduction is executed. // // 5. An error occurs if d_token__ is not found, and the state has no // default reduction. Error handling was described at the top of this // file. int \@::parse() try { $insert 4 debug "parse(): Parsing starts" push__(0); // initial state clearin(); // clear the tokens. while (true) { $insert 8 debug "==" try { if (s_state[d_state__]->d_type & REQ_TOKEN) nextToken(); // obtain next token int action = lookup(false); // lookup d_token__ in d_state__ if (action > 0) // SHIFT: push a new state { push__(action); popToken__(); // token processed } else if (action < 0) // REDUCE: execute and pop. { executeAction(-action); // next token is the rule's LHS reduce__(s_productionInfo[-action]); } else ACCEPT(); } catch (ErrorRecovery__) { errorRecovery(); } } } catch (Return__ retValue) { $insert 4 debug "parse(): returns " << retValue return retValue; } $insert namespace-close bisonc++-4.13.01/skeletons/bisonc++base.h0000644000175000017500000000431312633316117016671 0ustar frankfrank#ifndef \@$Base_h_included #define \@$Base_h_included #include #include #include $insert preincludes $insert debugincludes namespace // anonymous { struct PI__; } $insert namespace-open $insert polymorphic class \@Base { public: $insert tokens $insert LTYPE $insert STYPE private: int d_stackIdx__; std::vector d_stateStack__; std::vector d_valueStack__; $insert LTYPEstack protected: enum Return__ { PARSE_ACCEPT__ = 0, // values used as parse()'s return values PARSE_ABORT__ = 1 }; enum ErrorRecovery__ { DEFAULT_RECOVERY_MODE__, UNEXPECTED_TOKEN__, }; bool d_debug__; size_t d_nErrors__; size_t d_requiredTokens__; size_t d_acceptedTokens__; int d_token__; int d_nextToken__; size_t d_state__; STYPE__ *d_vsp__; STYPE__ d_val__; STYPE__ d_nextVal__; $insert LTYPEdata \@Base(); $insert debugdecl void ABORT() const; void ACCEPT() const; void ERROR() const; void clearin(); bool debug() const; void pop__(size_t count = 1); void push__(size_t nextState); void popToken__(); void pushToken__(int token); void reduce__(PI__ const &productionInfo); void errorVerbose__(); size_t top__() const; public: void setDebug(bool mode); }; inline bool \@Base::debug() const { return d_debug__; } inline void \@Base::setDebug(bool mode) { d_debug__ = mode; } inline void \@Base::ABORT() const { $insert 4 debug "ABORT(): Parsing unsuccessful" throw PARSE_ABORT__; } inline void \@Base::ACCEPT() const { $insert 4 debug "ACCEPT(): Parsing 
successful" throw PARSE_ACCEPT__; } inline void \@Base::ERROR() const { $insert 4 debug "ERROR(): Forced error condition" throw UNEXPECTED_TOKEN__; } $insert polymorphicInline // As a convenience, when including ParserBase.h its symbols are available as // symbols in the class Parser, too. #define \@ \@Base $insert namespace-close #endif bisonc++-4.13.01/skeletons/ltype.in0000644000175000017500000000024412633316117015746 0ustar frankfrank@ltype \@ltype @else struct LTYPE__ { int timestamp; int first_line; int first_column; int last_line; int last_column; char *text; }; @end bisonc++-4.13.01/skeletons/debugfunctions3.in0000644000175000017500000000075012633316117017715 0ustar frankfrankvoid \@Base::errorVerbose__() { std::cout << "Parser State stack containing " << (d_stackIdx__ + 1) << "elements:\n" "Each line shows a stack index followed " "by the value of that stack element\n"; for (size_t idx = d_stackIdx__ + 1; idx--; ) std::cout << std::setw(2) << idx << ": " << std::setw(3) << d_stateStack__[idx] << '\n'; } bisonc++-4.13.01/skeletons/threading.in0000644000175000017500000000137212633316117016561 0ustar frankfrank@thread-safe SR__ const *sr = s_state[d_state__]; // get the appropriate state-table int lastIdx = sr->d_lastIdx; // sentinel-index in the SR_ array SR__ const *lastElementPtr = sr + lastIdx; SR__ const *elementPtr = sr + 1; // start the search at s_xx[1] while (elementPtr != lastElementPtr && elementPtr->d_token != d_token__) ++elementPtr; @else SR__ *sr = s_state[d_state__]; // get the appropriate state-table int lastIdx = sr->d_lastIdx; // sentinel-index in the SR__ array SR__ *lastElementPtr = sr + lastIdx; lastElementPtr->d_token = d_token__; // set search-token SR__ *elementPtr = sr + 1; // start the search at s_xx[1] while (elementPtr->d_token != d_token__) ++elementPtr; @end bisonc++-4.13.01/skeletons/bisonc++.h0000644000175000017500000000142412633316117016036 0ustar frankfrank#ifndef \@$_h_included #define \@$_h_included $insert baseclass $insert scanner.h $insert namespace-open #undef \@ class \@: public \@Base { $insert 4 scannerobject public: int parse(); private: void error(char const *msg); // called on (syntax) errors int lex(); // returns the next token from the // lexical scanner. void print(); // use, e.g., d_token, d_loc // support functions for parse(): void executeAction(int ruleNr); void errorRecovery(); int lookup(bool recovery); void nextToken(); void print__(); void exceptionHandler__(std::exception const &exc); }; $insert namespace-close #endif bisonc++-4.13.01/skeletons/bisonc++polymorphic0000644000175000017500000000665512633316117020111 0ustar frankfranknamespace Meta__ { template struct TypeOf; template struct TagOf; $insert polymorphicSpecializations // The Base class: // Individual semantic value classes are derived from this class. // This class offers a member returning the value's Tag__ // and two member templates get() offering const/non-const access to // the actual semantic value type. class Base { Tag__ d_tag; protected: Base(Tag__ tag); public: Base(Base const &other) = delete; Tag__ tag() const; template typename TypeOf::type &get(); template typename TypeOf::type const &get() const; }; // The class Semantic is derived from Base. It stores a particular // semantic value type. 
template class Semantic: public Base { typedef typename TypeOf::type DataType; DataType d_data; public: // The constructor forwards arguments to d_data, allowing // it to be initialized using whatever constructor is // available for DataType template Semantic(Params &&...params); DataType &data(); DataType const &data() const; }; // If Type is default constructible, Initializer::value is // initialized to new Type, otherwise it's initialized to 0, allowing // struct SType: public std::shared_ptr to initialize its // shared_ptr class whether or not Base is default // constructible. template struct Initializer { static Type *value; }; template Type *Initializer::value = new Type; template struct Initializer { static constexpr Type *value = 0; }; // The class Stype wraps the shared_ptr holding a pointer to Base. // It becomes the polymorphic STYPE__ // It also wraps Base's get members, allowing constructions like // $$.get to be used, rather than $$->get. // Its operator= can be used to assign a Semantic * // directly to the SType object. The free functions (in the parser's // namespace (if defined)) semantic__ can be used to obtain a // Semantic *. struct SType: public std::shared_ptr { SType(); template SType &operator=(Tp_ &&value); Tag__ tag() const; // this get()-member checks for 0-pointer and correct tag // in shared_ptr, and resets the shared_ptr's Base * // to point to Meta::__Semantic() if not template typename TypeOf::type &get(); template typename TypeOf::type const &get() const; // the data()-member does not check, and may result in a // segfault if used incorrectly template typename TypeOf::type &data(); template typename TypeOf::type const &data() const; template void emplace(Args &&...args); }; } // namespace Meta__ bisonc++-4.13.01/skeletons/debugfunctions1.in0000644000175000017500000000103612633316117017711 0ustar frankfrankstd::ostringstream \@Base::s_out__; std::ostream &\@Base::dflush__(std::ostream &out) { std::ostringstream &s_out__ = dynamic_cast(out); std::cout << " " << s_out__.str() << std::flush; s_out__.clear(); s_out__.str(""); return out; } std::string \@Base::stype__(char const *pre, STYPE__ const &semVal, char const *post) const { @insert-stype using namespace std; ostringstream ostr; ostr << pre << semVal << post; return ostr.str(); @else return ""; @end } bisonc++-4.13.01/skeletons/bisonc++polymorphic.inline0000644000175000017500000000615312633316117021357 0ustar frankfranknamespace Meta__ { inline Base::Base(Tag__ tag) : d_tag(tag) {} inline Tag__ Base::tag() const { return d_tag; } template template inline Semantic::Semantic(Params &&...params) : Base(tg_), d_data(std::forward(params) ...) 
{} template inline typename TypeOf::type &Semantic::data() { return d_data; } template inline typename TypeOf::type const &Semantic::data() const { return d_data; } template inline typename TypeOf::type &Base::get() { return static_cast *>(this)->data(); } template inline typename TypeOf::type const &Base::get() const { return static_cast *>(this)->data(); } inline SType::SType() : std::shared_ptr{ Initializer< std::is_default_constructible::value, Base >::value } {} inline Tag__ SType::tag() const { return std::shared_ptr::get()->tag(); } template inline typename TypeOf::type &SType::get() { // if we're not yet holding a (tg_) value, initialize to // a Semantic holding a default value if (std::shared_ptr::get() == 0 || tag() != tg_) { typedef Semantic SemType; if (not std::is_default_constructible< typename TypeOf::type >::value ) throw std::runtime_error( "STYPE::get: no default constructor available"); reset(new SemType); } return std::shared_ptr::get()->get(); } template inline typename TypeOf::type &SType::data() { return std::shared_ptr::get()->get(); } template inline typename TypeOf::type const &SType::data() const { return std::shared_ptr::get()->get(); } template inline void SType::emplace(Params &&...params) { reset(new Semantic(std::forward(params) ...)); } template struct Assign; template struct Assign { static SType &assign(SType *lhs, Tp_ &&tp); }; template struct Assign { static SType &assign(SType *lhs, Tp_ const &tp); }; template <> struct Assign { static SType &assign(SType *lhs, SType const &tp); }; template inline SType &Assign::assign(SType *lhs, Tp_ &&tp) { lhs->reset(new Semantic::tag>(std::move(tp))); return *lhs; } template inline SType &Assign::assign(SType *lhs, Tp_ const &tp) { lhs->reset(new Semantic::tag>(tp)); return *lhs; } inline SType &Assign::assign(SType *lhs, SType const &tp) { return lhs->operator=(tp); } template inline SType &SType::operator=(Tp_ &&rhs) { return Assign< std::is_rvalue_reference::value, typename std::remove_reference::type >::assign(this, std::forward(rhs)); } } // namespace Meta__ bisonc++-4.13.01/skeletons/debugdecl.in0000644000175000017500000000035312633316117016530 0ustar frankfrankstatic std::ostringstream s_out__; std::string symbol__(int value) const; std::string stype__(char const *pre, STYPE__ const &semVal, char const *post = "") const; static std::ostream &dflush__(std::ostream &out); bisonc++-4.13.01/skeletons/debugfunctions2.in0000644000175000017500000000066512633316117017721 0ustar frankfrankstd::string \@Base::symbol__(int value) const { using namespace std; ostringstream ostr; SMap::const_iterator it = s_symbol.find(value); if (it != s_symbol.end()) ostr << '\'' << it->second << '\''; else if (isprint(value)) ostr << '`' << static_cast(value) << "' (" << value << ')'; else ostr << "'\\x" << setfill('0') << hex << setw(2) << value << '\''; return ostr.str(); } bisonc++-4.13.01/skeletons/bisonc++.ih0000644000175000017500000000137612633316117016215 0ustar frankfrank // Include this file in the sources of the class \@. $insert class.h $insert namespace-open inline void \@::error(char const *msg) { std::cerr << msg << '\n'; } $insert lex inline void \@::print() { print__(); // displays tokens if --print was specified } inline void \@::exceptionHandler__(std::exception const &exc) { throw; // re-implement to handle exceptions thrown by actions } $insert namespace-close // Add here includes that are only required for the compilation // of \@'s sources. 
$insert namespace-use // UN-comment the next using-declaration if you want to use // int \@'s sources symbols from the namespace std without // specifying std:: //using namespace std; bisonc++-4.13.01/skeletons/debuglookup.in0000644000175000017500000000063512633316117017135 0ustar frankfrankif (d_debug__) { s_out__ << "lookup(" << d_state__ << ", " << symbol__(d_token__); if (action < 0) // a reduction was found s_out__ << "): reduce by rule " << -action; else if (action == 0) s_out__ << "): ACCEPT"; else s_out__ << "): shift " << action << " (" << symbol__(d_token__) << " processed)"; s_out__ << "\n" << dflush__; } bisonc++-4.13.01/skeletons/debugincludes.in0000644000175000017500000000014512633316117017426 0ustar frankfrank#include #include #include #include #include bisonc++-4.13.01/srconflict/0000755000175000017500000000000012633316117014420 5ustar frankfrankbisonc++-4.13.01/srconflict/handlesrconflict.cc0000644000175000017500000000373112633316117020255 0ustar frankfrank#include "srconflict.ih" // called fm processshiftreduceconflict.cc // A conflict was observed as item 'reducibleItemIdx' and // 'shiftableItemIdx' have identical LA symbols // void SRConflict::handleSRconflict( size_t shiftableItemIdx, Next::ConstIter const &next, size_t reducibleItemIdx) { typedef Enum::Solution Solution; StateItem::Vector const &itemVector = d_itemVector; Symbol const *precedence = itemVector[reducibleItemIdx].precedence(); bool forced = false; Solution solution; // the reducible item does not if (precedence == 0) // have an explicit precedence { // solution = Solution::REDUCE; // force a reduction solution = Solution::SHIFT; // force a shift forced = true; ++s_nConflicts; // and a conflict } else // otherwise try to solve by { // precedence or association solution = next->solveByPrecedence(precedence); if (solution == Solution::UNDECIDED) solution = next->solveByAssociation(); } switch (solution) // perform SHIFT or REDUCE { case Solution::REDUCE: d_rmShift.push_back( RmShift( next - d_nextVector.begin(), forced) ); return; case Solution::UNDECIDED: forced = true; ++s_nConflicts; break; default: // case SHIFT: break; } d_rmReduction.push_back( RmReduction(reducibleItemIdx, next->next(), next->symbol(), forced ) ); } bisonc++-4.13.01/srconflict/srconflict.ih0000644000175000017500000000030012633316117017101 0ustar frankfrank#include "srconflict.h" #include #include #include "../enumsolution/enumsolution.h" #include "../rules/rules.h" using namespace std; using namespace FBB; bisonc++-4.13.01/srconflict/srconflict1.cc0000644000175000017500000000045512633316117017162 0ustar frankfrank#include "srconflict.ih" SRConflict::SRConflict(Next::Vector const &nextVector, StateItem::Vector const &itemVector, std::vector const &reducible) : d_nextVector(nextVector), d_itemVector(itemVector), d_reducible(reducible) {} bisonc++-4.13.01/srconflict/removeshifts.cc0000644000175000017500000000034712633316117017451 0ustar frankfrank#include "srconflict.ih" size_t SRConflict::removeShifts(Next::Vector &nextVector) { size_t nRemoved = 0; for (auto &rmShift: d_rmShift) Next::removeShift(rmShift, nextVector, &nRemoved); return nRemoved; } bisonc++-4.13.01/srconflict/insert.cc0000644000175000017500000000110412633316117016227 0ustar frankfrank#include "srconflict.ih" ostream &SRConflict::insert(ostream &out) const { RmReduction::ConstIter iter = d_rmReduction.begin(); RmReduction::ConstIter end = d_rmReduction.end(); while ((iter = find_if(iter, end, RmReduction::isForced)) != end) { out << "Solved SR CONFLICT on " << 
iter->symbol() << ":\n" "\tshift to state " << iter->next() << ", removed " << iter->symbol() << " from LA-set of rule " << d_itemVector[iter->idx()].nr() << ")\n"; ++iter; } return out; } bisonc++-4.13.01/srconflict/visitreduction.cc0000644000175000017500000000531412633316117020005 0ustar frankfrank#include "srconflict.ih" // Assume the itemVector of a state is as follows (StateItems): // 0: [P11 3] expression -> expression '-' expression . // { EOLN '+' '-' '*' '/' ')' } 0, 1, () -1 // 1: [P10 1] expression -> expression . '+' expression // { EOLN '+' '-' '*' '/' ')' } 0, 0, () 0 // 2: [P11 1] expression -> expression . '-' expression // { EOLN '+' '-' '*' '/' ')' } 0, 0, () 1 // 3: [P12 1] expression -> expression . '*' expression // { EOLN '+' '-' '*' '/' ')' } 0, 0, () 2 // 4: [P13 1] expression -> expression . '/' expression // { EOLN '+' '-' '*' '/' ')' } 0, 0, () 3 // // and the associated nextVector is: // // 0: On '+' to state 15 with (1 ) // 1: On '-' to state 16 with (2 ) // 2: On '*' to state 17 with (3 ) // 3: On '/' to state 18 with (4 ) // // // Conflicts are inspected for all reducible elements, so for index element 0 // in the above example, as follows: // // 1. The nextVector's symbols are searched in the LA set of the reduction // item (so, subsequently '+', '-', '*' and '/' are searched in the LA // set of itemVector[0]. // 2. In this case, all are found and depending on the token's priority // and the rule's priority either a shift or a reduce is selected. // // Production rules received their priority setting either explicitly (using // %prec) or from their first terminal token. See also // rules/updateprecedences.cc // // What happens if neither occurs? In a rule like 'expr: term' there is no // first terminal token and there is no %prec being used. // In these cases the rule automatically receives the highest precedence and // a shift/reduce conflict is reported (as pointed out by Ramanand // Mandayam). // idx is the index of a reducible item. That item can be reached as // d_itemVector[idx] void SRConflict::visitReduction(size_t reducibleIdx) { auto nextIter = d_nextVector.begin(); auto reducibleLAset = d_itemVector[reducibleIdx].lookaheadSet(); while (true) { nextIter = // check whether a nextVector symbol find_if( // is in the reduction item's LA set. 
nextIter, d_nextVector.end(), // if so, there is a S/R [&](Next const &next) // conflict, solved below { return next.inLAset(reducibleLAset); } ); if (nextIter == d_nextVector.end()) return; processShiftReduceConflict(nextIter, reducibleIdx); ++nextIter; } } bisonc++-4.13.01/srconflict/showconflicts.cc0000644000175000017500000000154312633316117017617 0ustar frankfrank#include "srconflict.ih" void SRConflict::showConflicts(Rules const &rules) const { RmReduction::ConstIter iter = d_rmReduction.begin(); RmReduction::ConstIter end = d_rmReduction.end(); unordered_map> conflict; while ((iter = find_if(iter, end, RmReduction::isForced)) != end) { conflict[d_itemVector[iter->idx()].nr()].push_back(iter->symbol()); ++iter; } if (conflict.empty()) return; for (auto &rule: conflict) { auto prodPtr = rules.productions()[rule.first - 1]; wmsg << " rule " << rule.first << " (" << prodPtr->fileName() << ", line " << prodPtr->lineNr() << "): shifts at "; for (Symbol const *symbol: rule.second) wmsg << symbol << ", "; } wmsg << '\n'; } bisonc++-4.13.01/srconflict/processshiftreduceconflict.cc0000644000175000017500000000110212633316117022347 0ustar frankfrank#include "srconflict.ih" // called by visitreduction.cc // // Next::ConstIter const &next: // the element in an item's next-set that has a LA symbol that is also // found in the LA set of a reducible item // // size_t reducibleItemIdx: the index of the reducible item. void SRConflict::processShiftReduceConflict(Next::ConstIter const &next, size_t reducibleItemIdx) { for (auto shiftableItemIdx: next->kernel()) handleSRconflict(shiftableItemIdx, next, reducibleItemIdx); } bisonc++-4.13.01/srconflict/data.cc0000644000175000017500000000007312633316117015640 0ustar frankfrank#include "srconflict.ih" size_t SRConflict::s_nConflicts; bisonc++-4.13.01/srconflict/removereductions.cc0000644000175000017500000000026512633316117020327 0ustar frankfrank#include "srconflict.ih" void SRConflict::removeReductions(StateItem::Vector &itemVector) { for (auto &rm: d_rmReduction) StateItem::removeReduction(rm, itemVector); } bisonc++-4.13.01/srconflict/inspect.cc0000644000175000017500000000053512633316117016377 0ustar frankfrank#include "srconflict.ih" // Each reducible item index is passed to visitReduction() which handles an // observed S/R conflict. See README.states-and-conflicts for a description of // the conflict resolution process. 
// called fm: State::checkConflicts void SRConflict::inspect() { for (auto idx: d_reducible) visitReduction(idx); } bisonc++-4.13.01/srconflict/frame0000644000175000017500000000005312633316117015433 0ustar frankfrank#include "srconflict.ih" SRConflict:: { } bisonc++-4.13.01/srconflict/srconflict.h0000644000175000017500000000504512633316117016743 0ustar frankfrank#ifndef _INCLUDED_SRCONFLICT_ #define _INCLUDED_SRCONFLICT_ #include #include #include "../rmreduction/rmreduction.h" #include "../next/next.h" class Rules; class SRConflict { friend std::ostream &operator<<(std::ostream &out, SRConflict const &conflict); Next::Vector const &d_nextVector; // the Next objects describing // states to transit to StateItem::Vector const &d_itemVector; // the items which have S/R // conflicts with the // reducible items std::vector const &d_reducible; // the indices of the // reducible rules RmReduction::Vector d_rmReduction; // Vector of reducible rules // to remove for this conflict RmShift::Vector d_rmShift; // vector of indices of items // having shift conflicts that // are removed static size_t s_nConflicts; // the number of S/R conflicts // that could not be solved by // preference/association // decisions. public: SRConflict(Next::Vector const &next, StateItem::Vector const &stateItem, std::vector const &reducible); void inspect(); // returns # of shifts that were removed size_t removeShifts(Next::Vector &nextVector); void removeReductions(StateItem::Vector &itemVector); static size_t nConflicts(); void showConflicts(Rules const &rules) const; private: std::ostream &insert(std::ostream &out) const; void processShiftReduceConflict(Next::ConstIter const &next, size_t itemIdx); void handleSRconflict(size_t shiftableItemIdx, Next::ConstIter const &next, size_t reducibleItemIdx); void visitReduction(size_t idx); }; inline size_t SRConflict::nConflicts() { return s_nConflicts; } inline std::ostream &operator<<(std::ostream &out, SRConflict const &conflict) { return conflict.insert(out); } #endif bisonc++-4.13.01/state/0000755000175000017500000000000012633316117013372 5ustar frankfrankbisonc++-4.13.01/state/computelasets.cc0000644000175000017500000000070112633316117016567 0ustar frankfrank#include "state.ih" // compute the LA sets of the items of a state. // starting from the state's kernel items distribute their LA sets over the // remaining items. See documentation/manual/algorithm/determine.yo void State::computeLAsets() { for ( auto kernelItem = &d_itemVector[0], end = kernelItem + d_nKernelItems; kernelItem != end; ++kernelItem ) distributeLAsetOf(*kernelItem); } bisonc++-4.13.01/state/state1.cc0000644000175000017500000000060212633316117015100 0ustar frankfrank#include "state.ih" State::State(size_t idx) : d_nKernelItems(0), d_nTransitions(0), d_nReductions(0), d_defaultReducible(string::npos), d_maxLAsize(0), d_summedLAsize(0), d_idx(idx), d_nTerminalTransitions(0), d_srConflict(d_nextVector, d_itemVector, d_reducible), d_rrConflict(d_itemVector, d_reducible), d_stateType(StateType::NORMAL) {} bisonc++-4.13.01/state/construct.cc0000644000175000017500000000046012633316117015725 0ustar frankfrank#include "state.ih" // the kernel item of the first state has already been set bij initialState(). // setItems adds the items implied by the kernel item(s), and transitions to // the next states. 
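// Illustration (a sketch based on the example grammar also used in
// setitems.cc, not code executed by bisonc++): if a state's kernel consists
// of the item
//     S -> . L = R
// then setItems() adds an item for every production rule of the
// non-terminal L appearing at the dot position, e.g.
//     L -> . * R
//     L -> . identifier
// and enters transitions on L, '*' and identifier into d_nextVector. The
// loop in construct() below then calls nextState() for each of those
// transitions, creating (or re-using) the states reached on those symbols.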
void State::construct() { setItems(); for (auto &next: d_nextVector) nextState(next); } bisonc++-4.13.01/state/inspecttransitions.cc0000644000175000017500000000356012633316117017650 0ustar frankfrank#include "state.ih" // All Next objects in the state's d_nextVector are inspected. The Next // objects hold // - the state index of a state to transfer to from the current state // - a size_t vector of item transitions. Each element is the index of an // item in the current state (the source-item), its index is the index of a // (kernel) item of the state to transfer to (the destination index). // If the LA set of the destination item is enlarged using the LA set of the // source item then the LA sets of the destination state's items must be // recomputed. This is realized by inserting the destination state's index into // the `todo' set.
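// Illustration (sketch only; the state number and grammar item are
// hypothetical): suppose this state holds the source item
//     S -> L . = R   { EOF }
// and its d_nextVector holds "On = to state 7 with (0)". Then state 7's
// corresponding kernel item
//     S -> L = . R
// is enlarged with the source item's LA set { EOF }. If that actually adds
// new symbols, 7 is inserted into `todo', so that determineLAsets() calls
// state 7's computeLAsets() again and redistributes the new symbols over
// state 7's items.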
(etc) // ------------------------------------------------ // // Moreover, the Next vector will be adapted: // // Next (in d_nextVector) // ------------------------------- // On next next kernel // Symbol state from items // ------------------------------- // S ? (0, 1) // L // R // * // ... // ------------------------------- void State::setItems() { size_t idx = 0; // do not use a for stmnt here, as // notReducible may extend d_itemVector while (idx < d_itemVector.size()) { if (d_itemVector[idx].isReducible()) d_reducible.push_back(idx); else notReducible(idx); ++idx; // inspect the next element } } bisonc++-4.13.01/state/addproductions.cc0000644000175000017500000000147412633316117016731 0ustar frankfrank#include "state.ih" // d_nextVector.back() is the entry containing the symbol whose production // rules are added to d_itemVector. // idx is the current index in d_itemVector // // All productions of `symbol' are determined, and each is handled by // addProduction which will add the production to d_itemVector, adding its // index to stateItem[idx] dependent-list // // d_nextVector.back()'s kernel adds idx (as the item at d_itemVector[idx]) to // its kernel-list. void State::addProductions(Symbol const *symbol, size_t idx) { // obtain all productions of `symbol' Production::Vector const &productions = NonTerminal::downcast(symbol)->productions(); for (auto production: productions) StateItem::addProduction(production, d_itemVector, idx); } bisonc++-4.13.01/state/nextfindfrom.cc0000644000175000017500000000055612633316117016412 0ustar frankfrank#include "state.ih" Next::ConstIter State::nextFind(Symbol const *symbol) const { return find_if( d_nextVector.begin(), d_nextVector.end(), [=](Next const &next) { return next.hasSymbol(symbol); } ); // FnWrap::unary(Next::hasSymbol, symbol)); } bisonc++-4.13.01/state/summarizeactions.cc0000644000175000017500000000152512633316117017301 0ustar frankfrank#include "state.ih" void State::summarizeActions() { if ((d_nReductions = d_reducible.size()) != 0) { for (size_t idx = 0; idx != d_nReductions; ++idx) { StateItem const &stateItem = d_itemVector[idx]; size_t laSize = stateItem.lookaheadSetSize(); if (laSize < d_maxLAsize) // add small LA sets to d_summedLAsize += laSize; // summedLAsize else // default LA set becomes the { // largest and doesn't count d_summedLAsize += d_maxLAsize; // for summedLAsize, since d_maxLAsize = laSize; // all its elements collapse d_defaultReducible = idx; // into one element. } } } } bisonc++-4.13.01/state/nextterminal.cc0000644000175000017500000000053112633316117016412 0ustar frankfrank#include "state.ih" Symbol const *State::nextTerminal(size_t *idxPtr) const { size_t idx = *idxPtr; while (idx < d_nextVector.size()) { Symbol const *ret = d_nextVector[idx].symbol(); if (ret->isTerminal()) { *idxPtr = idx; return ret; } ++idx; } return 0; } bisonc++-4.13.01/state/initialstate.cc0000644000175000017500000000063012633316117016372 0ustar frankfrank#include "state.ih" void State::initialState() { // construct the initial state. Start with S' -> . 
S // The start rule is obtained from Production::start() State &state = newState(); // Add the start production to the StateItem::Vector state.addKernelItem(StateItem(Item(Production::start()))); state.d_itemVector[0].setLA(LookaheadSet(LookaheadSet::e_withEOF)); } bisonc++-4.13.01/state/allstates.cc0000644000175000017500000000335512633316117015703 0ustar frankfrank#include "state.ih" void State::allStates() { if (!imsg.isActive()) return; imsg << "\n" "Grammar States: " << endl; if (s_insert == &State::insertExt) imsg << "\n" "For each state information like the following is shown for its items:\n" " 0: [P1 1] S -> C . C { } 0\n" "which should be read as follows:\n" " 0: The item's index\n" " [P1 1]: The rule (production) number and current dot-position\n" " S -> C . C: The item (lhs -> Recognized-symbols . " "symbols-to-recognize)\n" " { } The item's lookahead (LA) set\n" " 0 The next-element (shown below the items) describing the\n" " action associated with this item (-1 for reducible " "items)\n" "\n" "The Next tables show entries like:\n" " 0: On C to state 5 with (0 )\n" "meaning:\n" " 0: The Next table's index\n" " On C to state 5: When C was recognized, continue at state 5\n" " with (0 ) The item(s) whose dot is shifted at the next state\n" "Indices (like 0:) may be annotated as follows:\n" " 0 (AUTO REMOVED by S/R resolution): On C ...\n" " in which case a reduction using a production with unspecified\n" " precedence took priority;\n" "or:\n" " 0 (removed by precedence): On C ...\n" " in which case a production rule's precedence took priority\n" "Also, reduction item(s) may be listed\n" "\n" "\n"; copy(s_state.begin(), s_state.end(), ostream_iterator(imsg, "\n")); imsg << endl; } bisonc++-4.13.01/state/nextstate.cc0000644000175000017500000000120412633316117015715 0ustar frankfrank#include "state.ih" void State::nextState(Next &next) { if (next.next() != string::npos) // state is already defined return; // if next indicates so. Item::Vector kernel; // build a new kernel next.buildKernel(&kernel, d_itemVector); size_t idx = findKernel(kernel); // return the next State's idx next.setNext(idx); // set the next state to go to from // here on Next's symbol if (idx == s_state.size()) addState(kernel); // create and add a new state } bisonc++-4.13.01/state/insertstd.cc0000644000175000017500000000100512633316117015714 0ustar frankfrank#include "state.ih" ostream &State::insertStd(ostream &out) const { out << "State " << d_idx << ":\n"; for (size_t idx = 0; idx != d_nKernelItems; ++idx) out << d_itemVector[idx] << '\n'; for (size_t idx = 0; idx != d_nextVector.size(); ++idx) out << " " << idx << d_nextVector[idx] << '\n'; for (size_t idx = 0; idx != d_reducible.size(); ++idx) out << " Reduce by " << d_itemVector[d_reducible[idx]] << '\n'; return out << d_srConflict << d_rrConflict << '\n'; } bisonc++-4.13.01/state/addstate.cc0000644000175000017500000000027212633316117015473 0ustar frankfrank#include "state.ih" void State::addState(Item::Vector const &kernel) { State &state = newState(); for (auto &item: kernel) state.addKernelItem(StateItem(item)); } bisonc++-4.13.01/state/addnext.cc0000644000175000017500000000074312633316117015334 0ustar frankfrank#include "state.ih" void State::addNext(Symbol const *symbol, size_t idx) { d_itemVector[idx].setNext(d_nextVector.size()); // then add symbol to d_nextVector d_nextVector.push_back(Next(symbol, idx)); // add all production rules of `symbol' to // d_itemVector. 
if (symbol->isNonTerminal()) // set dependent elements addProductions(symbol, idx); else ++d_nTerminalTransitions; } bisonc++-4.13.01/state/data.cc0000644000175000017500000000021412633316117014607 0ustar frankfrank#include "state.ih" vector State::s_state; State *State::s_acceptState; ostream &(State::*State::s_insert)(ostream &out) const; bisonc++-4.13.01/state/determinelasets.cc0000644000175000017500000001000312633316117017063 0ustar frankfrank#include "state.ih" // The LA sets of the items of a state (state tt(idx)) are computed by first // computing the LA sets of its items, followed by propagating the LA sets of // items to the states for which state transitions have been defined. // // 1. A set (todo) contains the indices of the states whose LA sets must be // (re)computed. Initially it contains 0 // // 2. First the LA sets of the state's items are computed, starting from the // LA sets of its kernel items. The LA set of each kernel item is // distributed (by distributeLAsetOf) over the items which are implied by // the item being considered. E.g., for item X: a . Y z, where a and z // are any sequence of grammar symbols and X and Y are non-terminal // symbols all of Y's production rules are added as new items to the // current state. // // 3. The member distributeLAsetOfItem(idx) matches the item's rule // specification with the specification a.Bc, where a and c are (possibly // empty) sequences of grammatical symbols, and B is a (possibly empty) // non-terminal symbol appearing immediately to the right of the item's // dot position. if B is empty then there are no additional production // rules and distributeLAsetOf may return. Otherwise, the set b = // FIRST(c) is computed. This set holds all symbols which may follow // B. If b contains e (i.e., the element representing the empty set), // then the currently defined LA set of the item can also be observed. In // that case e is removed, and the currently defined LA set is added to // b. Finally, the LA sets of all items representing a production rule // for B are inspected: if b contains unique elements compared to the LA // sets of these items, then the unique elements of b are added to those // item's LA sets and distributeLAsetOfItem() is recursively called for // those items whose LA sets have been enlarged. // // 4. Once the LA sets of the items of a state have been computed, // inspectTransitions() is called to propagate the LA sets of items from // where transitions to other states are possible to the affected items // of those other (destination) states. The member inspectTransitions() // inspects all Next objects of the current state's d_nextVector. Next // objects hold // - the state index of a state to transfer to from the current state // - a size_t vector of item transitions. Each element is the index of an // item in the current state (the source-item), its index is the index // of a (kernel) item of the state to transfer to (the destination // index). // If the LA set of the destination item is enlarged from the LA set of // the source item then the LA sets of the destination state's items // must be recomputed. This is realized by inserting the destation // state's index into the `todo' set. // // 5. Once the `todo' set is empty all LA sets of all items have been // computed. namespace { size_t zero; } void State::determineLAsets() { set todo(&zero, &zero + 1); // initialize to the first State idx. 
while (not todo.empty()) { auto iter = todo.begin(); State &state = *s_state[*iter]; // determine LA sets of the items of // this state. todo.erase(iter); state.computeLAsets(); // compute the LA sets of `state's // items state.inspectTransitions(todo); // possibly update the LA sets of // kernel items of states to which a // transition is possible from // `state'. If so, the indices of // those target states are inserted // into `todo', causing the LA sets of // their items to be recomputed. } } bisonc++-4.13.01/state/distributelasetof.cc0000644000175000017500000000514212633316117017437 0ustar frankfrank#include "state.ih" // The LA set of an item tt(`idx') is distributed over the items representing // production rules of the non-terminal to the right of item idx's dot // position, using the following algorithm: // distributeLAsetOfItem(idx): // the item's production rule specification is matched with the // specification a.Bc, where a and c are (possibly empty) sequences of // grammatical symbols, and B is a (possibly empty) non-terminal // symbol appearing immediately to the right of the item's dot // position. // // if B is empty return // // compute the set b = FIRST(c). // // If b contains e (i.e., the element representing the empty set), // then add the item's current LA set to b. // // for each item `itm' containing a production rule for B // { // if b has unique elements compared to item itm's LA set // { // add the unique elements of b to item itm's LA set // distributeLAsetOfItem(itm). // } // } void State::distributeLAsetOf(StateItem &stateItem) { Item const &item = stateItem.item(); Symbol const *beyondDot = item.beyondDotIsNonTerminal(); if (not beyondDot) // no additional rules if the item return; // byond the dot is not a // non-terminal symbol LookaheadSet candidate; // the candidate LA set for items // representing rules of symbol // beyondDot // FIRST(c) of rule a.Bc // if true, FIRST(c) contained // epsilon and so also receives // current LA-set if (item.firstBeyondDot(&candidate.firstSet())) candidate += stateItem.lookaheadSet(); for (StateItem &stItem: d_itemVector) // inspect all STATEitems of this { // state if ( stItem.lhs() == beyondDot // if item is a productionrule of && // B (= beyondDot) stItem.enlargeLA(candidate) // and unique elements of ) // candidate could be added distributeLAsetOf(stItem); // then distribute the updated LA } // set of that state-item. 
} bisonc++-4.13.01/state/insertext.cc0000644000175000017500000000174112633316117015731 0ustar frankfrank#include "state.ih" ostream &State::insertExt(ostream &out) const { out << "State " << d_idx << ":\n"; // set the ways the insertions must be done Terminal::inserter(&Terminal::plainName); NonTerminal::inserter(&NonTerminal::nameAndFirstset); Item::inserter(&Item::pNrDotItem); StateItem::inserter(&StateItem::itemContext); Next::inserter(&Next::transitionKernel); // display the items for (size_t idx = 0; idx != d_itemVector.size(); ++idx) out << idx << ": " << d_itemVector[idx] << '\n'; // Next elements for (size_t idx = 0; idx != d_nextVector.size(); ++idx) out << " " << idx << d_nextVector[idx] << '\n'; if (d_reducible.size()) { out << " Reduce item(s): "; copy(d_reducible.begin(), d_reducible.end(), ostream_iterator(out, " ")); out << '\n'; } return out << d_srConflict << d_rrConflict << '\n'; } bisonc++-4.13.01/state/addkernelitem.cc0000644000175000017500000000021412633316117016506 0ustar frankfrank#include "state.ih" void State::addKernelItem(StateItem const &stateItem) { d_itemVector.push_back(stateItem); ++d_nKernelItems; } bisonc++-4.13.01/state/haskernel.cc0000644000175000017500000000121712633316117015656 0ustar frankfrank#include "state.ih" // return true if `state' contains all items stored in `searchKernel' bool State::hasKernel(Item::Vector const &searchKernel) const { return d_nKernelItems == searchKernel.size() && searchKernel.size() == static_cast( count_if ( searchKernel.begin(), searchKernel.end(), [&](Item const &searchItem) { return StateItem::containsKernelItem(searchItem, d_nKernelItems, d_itemVector); } ) ); } bisonc++-4.13.01/state/frame0000644000175000017500000000005112633316117014403 0ustar frankfrank#include "state.ih" State::() const { } bisonc++-4.13.01/state/state.ih0000644000175000017500000000136512633316117015041 0ustar frankfrank#include "state.h" #include #include #include #include #include #include #include #include #include #include "../rules/rules.h" #include "../nonterminal/nonterminal.h" #include "../production/production.h" #include "../item/item.h" #include "../lookaheadset/lookaheadset.h" #include "../stateitem/stateitem.h" using namespace std; using namespace FBB; inline void State::addDependents(Next const &next, Symbol const *symbol, size_t idx) { d_itemVector[idx].setNext(Next::addToKernel(d_nextVector, symbol, idx)); } inline std::ostream &State::skipInsertion(std::ostream &out) const { return out; } bisonc++-4.13.01/state/nexton.cc0000644000175000017500000000027512633316117015220 0ustar frankfrank#include "state.ih" size_t State::nextOn(Symbol const *symbol) const { Next::ConstIter iter = nextFind(symbol); return iter == d_nextVector.end() ? 
string::npos : iter->next(); } bisonc++-4.13.01/state/newstate.cc0000644000175000017500000000021312633316117015527 0ustar frankfrank#include "state.ih" State &State::newState() { State *ret = new State(s_state.size()); s_state.push_back(ret); return *ret; } bisonc++-4.13.01/state/state.h0000644000175000017500000001671512633316117014675 0ustar frankfrank#ifndef _INCLUDED_STATE_H_ #define _INCLUDED_STATE_H_ #include #include #include #include "../statetype/statetype.h" #include "../next/next.h" #include "../srconflict/srconflict.h" #include "../rrconflict/rrconflict.h" class Item; class Production; class StateItem; class Rules; class State { typedef std::vector Vector; public: typedef Vector::const_iterator ConstIter; private: friend std::ostream &operator<<(std::ostream &out, State const *state); StateItem::Vector d_itemVector; size_t d_nKernelItems; std::vector d_reducible; // d_itemVector offsets containing // reducible items size_t d_nTransitions; // elements in d_nextVector minus // removed elements because of // conflicts size_t d_nReductions; // elements in d_reducible minus // reductions having empty LA-sets size_t d_defaultReducible; // the d_reducible index of // the reduction to use as default // (or npos) size_t d_maxLAsize; // the default reduction becomes // the one having the largest // LAset size size_t d_summedLAsize; // sum of all lookaheadsets of all // non-default reductions. Next::Vector d_nextVector; // Vector of Next elements // describing where to transit to // next. size_t d_idx; // index of this state in the // vector of States size_t d_nTerminalTransitions; SRConflict d_srConflict; RRConflict d_rrConflict; StateType d_stateType; static Vector s_state; static State *s_acceptState; static std::ostream &(State::*s_insert)(std::ostream &out) const; public: bool isAcceptState() const; bool nextContains(Next::ConstIter *iter, Symbol const *symbol) const; size_t idx() const; size_t nextOn(Symbol const *token) const; // All reduction members operate with indices in d_reducible, // so *not* with d_stateItem indices size_t defaultReduction() const; // def. 
reduction idx or npos StateItem const *reduction(size_t idx) const; // 0 or reduction item size_t reductions() const; // number of reductions size_t reductionsLAsize() const; // summed LA set sizes of all // non-default reductions Symbol const *nextTerminal(size_t *idx) const; // Next terminal at // d_next[*idx], 0 if none size_t terminalTransitions() const; size_t transitions() const; Next::Vector const &next() const; static size_t nStates(); static void allStates(); // defines all grammar-states and // lookahead sets static void define(Rules const &rules); static ConstIter begin(); // iterator to the first State * static ConstIter end(); // and beyond the last int type() const; // StateType accessor private: State(size_t idx); void addDependents(Next const &next, Symbol const *symbol, size_t itemIdx); // from notreducible from setitems: determine all // dependent state items and X-link d_itemVector // and d_nextVector void addKernelItem(StateItem const &stateItem); void addNext(Symbol const *symbol, size_t idx); // from notreducible from setitems: // add a new Next element to d_nextVector void addState(Item::Vector const &kernel); void construct(); // construct a state, and by recursion all other // states as well size_t findKernel(Item::Vector const &kernel) const; void notReducible(size_t idx); // handle items in setItems() that aren't // reducible void setItems(); // fill d_itemVector with this state's items // add the productions of `symbol' to d_itemVector, // make them depend on d_itemVector[idx] void addProductions(Symbol const *symbol, size_t idx); Next::ConstIter nextFind(Symbol const *symbol) const; std::ostream &insertStd(std::ostream &out) const; std::ostream &insertExt(std::ostream &out) const; std::ostream &skipInsertion(std::ostream &out) const; static void initialState(); static State &newState(); void nextState(Next &next); bool hasKernel(Item::Vector const &kernel) const; void checkConflicts(); void summarizeActions(); void showSRConflicts(Rules const &rules) const; void showRRConflicts(Rules const &rules) const; static void determineLAsets(); void computeLAsets(); void distributeLAsetOf(StateItem &item); void inspectTransitions(std::set &todo); }; inline int State::type() const { return d_stateType.type(); } inline size_t State::idx() const { return d_idx; } inline size_t State::nStates() { return s_state.size(); } inline size_t State::terminalTransitions() const { return d_nTerminalTransitions; } inline size_t State::transitions() const { return d_nTransitions; } inline StateItem const *State::reduction(size_t idx) const { return idx >= d_reducible.size() ? 
0 : &d_itemVector[d_reducible[idx]]; } inline size_t State::defaultReduction() const { return d_defaultReducible; } inline size_t State::reductions() const { return d_nReductions; } inline size_t State::reductionsLAsize() const { return d_summedLAsize; } inline State::ConstIter State::begin() { return s_state.begin(); } inline State::ConstIter State::end() { return s_state.end(); } inline Next::Vector const &State::next() const { return d_nextVector; } inline bool State::isAcceptState() const { return this == s_acceptState; } inline bool State::nextContains(Next::ConstIter *iter, Symbol const *symbol) const { return (*iter = nextFind(symbol)) != d_nextVector.end(); } inline void State::showSRConflicts(Rules const &rules) const { d_srConflict.showConflicts(rules); } inline void State::showRRConflicts(Rules const &rules) const { d_rrConflict.showConflicts(rules); } inline std::ostream &operator<<(std::ostream &out, State const *state) { return (state->*State::s_insert)(out); // One of: insertStd, insertExt or skipInsertion. } #endif bisonc++-4.13.01/state/notreducible.cc0000644000175000017500000000105312633316117016357 0ustar frankfrank#include "state.ih" void State::notReducible(size_t idx) // idx: item index in d_itemVector { Symbol const *symbol = d_itemVector[idx].symbolAtDot(); symbol->used(); // For the showused bookkeeping if (symbol == Rules::errorTerminal()) d_stateType.setType(StateType::ERR_ITEM); Next::ConstIter next; if (nextContains(&next, symbol)) // the symbol is in d_nextVector addDependents(*next, symbol, idx); else // symbol not yet in d_nextVector addNext(symbol, idx); } bisonc++-4.13.01/state/checkconflicts.cc0000644000175000017500000000252712633316117016671 0ustar frankfrank#include "state.ih" // If there are reducible items, SR or RR conflicts may be observed. // To check for SR conflicts, each reducible item index together with the // context (consisting of the state's d_itemVector vector and d_nextVector // vector) is passed to Next's checkShiftReduceConflict member which solves // the observed shift-reduce conflicts. // called by: State::define() void State::checkConflicts() { d_nTransitions = d_nextVector.size(); if (d_reducible.empty()) // no reductions, no conflicts return; d_srConflict.inspect(); // detect SR conflicts // Number of viable transitions: // reduced by the number of removed // shifts // size_t nremoved = d_srConflict.removeShifts(d_nextVector); // cerr << d_nTransitions << ' ' << nremoved << endl; // d_nTransitions -= nremoved; //d_srConflict.removeShifts(d_nextVector); d_nTransitions -= d_srConflict.removeShifts(d_nextVector); d_srConflict.removeReductions(d_itemVector); d_rrConflict.inspect(); // detect RR conflicts d_rrConflict.removeConflicts(d_itemVector); if (d_reducible.size() > 1) // more than 1 reduction d_stateType.setType(StateType::REQ_TOKEN); } bisonc++-4.13.01/state/findkernel.cc0000644000175000017500000000074012633316117016023 0ustar frankfrank#include "state.ih" // return the index of a state having all items listed in `searchKernel'. If // no such state is found, return s_state's size. size_t State::findKernel(Item::Vector const &searchKernel) const { return find_if ( s_state.begin(), s_state.end(), [&](State const *state) { return state->hasKernel(searchKernel); } ) - s_state.begin(); } bisonc++-4.13.01/state/define.cc0000644000175000017500000001631212633316117015136 0ustar frankfrank#include "state.ih" // Defining states proceeds like this: // // 0. The initial state is constructed. 
It contains the augmented grammar's
//    production rule. This part is realized by the static member
//
//      initialState();
//
//    The LA set of the kernel item of state 0 (the item representing the
//    augmented grammar's production rule `S_$: . S') is by definition equal
//    to $, representing EOF. This LA set is also initialized by
//    initialState().
//
// 1. Starting from the state's kernel item(s) all implied rules are added as
//    additional (non-kernel) state items. This results in a vector of
//    (kernel/non-kernel) items, as well as, per item, the numbers of the
//    items that are affected by this item. This information is used later to
//    compute the LA sets of the items. A state's items are determined from
//    its kernel items by the member
//
//      setItems()
//
//    This member fills the StateItem::Vector vector. A StateItem contains
//
//      1. an item (production rule, dot position, LA set)
//      2. a size_t vector of `dependent' items, indicating which items
//         have LA sets that depend on the current item.
//      3. The size_t field `d_nextIdx', which holds the index in
//         d_nextVector, allowing quick access of the d_nextVector element
//         defining the state having the current item as its kernel. A next
//         of npos indicates that the item does not belong to a next-kernel.
//
//    E.g.,
//
//      StateItem:
//      -------------------------------------------
//      item            LA-set  dependent   next
//                              stateitems  state
//      -------------------------------------------
//      S* -> . S,      EOF,    (1, 2)      0
//      ...
//      -------------------------------------------
//
//    Also, the d_nextVector vector is filled.
//
//    A Next element contains
//
//      0. The symbol on which the transition takes place
//      1. The number of the next state
//      2. A StateItem::Vector object (size_t values) holding indices of
//         items in the current state. The elements' indices are the indices
//         of (kernel) items of the state to transfer to (the destination
//         index).
//
//    E.g.,
//
//      Next:
//      -------------------------------
//      On      next    next kernel
//      Symbol  state   from items
//      -------------------------------
//      S       ?       (0, 1)
//      ...
//      -------------------------------
//
//    Empty production rules don't require special handling as they won't
//    appear in the Next table, since there's no transition on them.
//
//    From these facilities all states are now constructed. LA sets are
//    computed following the state construction by the member
//    determineLAsets().
//
// 2. Then, from the Next::Vector constructed at (1) new states are
//    constructed. This is realized by the member
//
//      nextState()
//
//    which is called for each of the elements of d_nextVector. States are
//    only constructed once.
//
// 3. New states receive their kernel items from the item(s) of the current
//    state from where a transition is possible. A new state is constructed
//    by addState, receiving the just constructed set of kernel items from
//    nextState.
//
// 4. All states are eventually constructed by the loop, shown below, which
//    ends once the idx loop control variable has reached s_state.size().
//
// 5. Once all states have been constructed, the LA sets of the items of all
//    states are computed by determineLAsets(). See determinelasets.cc for a
//    description of the implemented algorithm.
//
// 6. Once all states have been constructed and LA sets have been determined,
//    conflicts may be located and solved. If the state contains any
//    conflicts, they are resolved and information about these conflicts is
//    stored in an SRConflict::Vector and/or RRConflict::Vector (a simplified
//    sketch of the StateItem / Next information described at (1) is given
//    directly below).
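// A simplified, illustrative sketch (hypothetical names; see
// stateitem/stateitem.h and next/next.h for the real class definitions) of
// the information held by a StateItem and by a Next element as described
// at (1) above:
//
//      struct StateItemSketch
//      {
//          Item item;                      // production rule + dot position
//          LookaheadSet la;                // the item's LA set
//          std::vector<size_t> dependents; // items whose LA sets depend on
//                                          // this item's LA set
//          size_t nextIdx;                 // index into d_nextVector; npos
//                                          // if the item is not part of a
//                                          // next state's kernel
//      };
//
//      struct NextSketch
//      {
//          Symbol const *symbol;           // symbol on which the transition
//                                          // takes place
//          size_t next;                    // index of the destination state
//          std::vector<size_t> kernelFrom; // indices of the items providing
//                                          // the destination state's kernel
//      };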
Conflicts are identified // and resolved by the member (static)checkConflicts(); See // README.states-and-conflicts for a description of the actions taken by // checkConflicts(). void State::define(Rules const &rules) { Arg &arg = Arg::instance(); s_insert = arg.option(0, "construction") ? &State::insertExt : arg.option('V') ? &State::insertStd : &State::skipInsertion; initialState(); // construct the initial state // containing the augmented grammar's // StateItem size_t idx = 0; do s_state[idx]->construct(); // define all states (starting at state 0) while (++idx != s_state.size()); // State 0's initial LA set is already set in initialSate. s_state[0]->determineLAsets(); // Set the accept-state: // // The accept state is found as the state to transit to on the symbol // of the first item of the first (0-th) state. // E.g., from S* -> . S it is the state to go to on S. // It is found from state[0]'s itemVector's zeroth element. Its next() // member returns the index in state[0]'s nextVector holding the // transition information of (e.g.) symbol S. So, that nextVector's // element's member next() tells us the accept state index. s_acceptState = s_state[ s_state[0]->d_nextVector[ s_state[0]->d_itemVector[0].next() // Next from 1st item ].next() // next state from there ]; // state pointer itself // The rule start_$ -> start . is a spurious reduction. In fact no // such reduction may occur, since at that point EOF is obtained and // parsing should stop. Therefore, this reduction is removed. s_acceptState->d_reducible.erase(s_acceptState->d_reducible.begin()); // REQ_TOKEN is the state's type because it terminates at EOF, and // the EOF transition isn't interpreted as a terminal transition. // Other (terminal) transitions are possible too, so in this case // a token is required anyway. Alternatively, keep NORMAL, and when // reaching this state and its type is NORMAL: ACCEPT. Pondering... s_acceptState->d_stateType.setType(StateType::REQ_TOKEN); for (auto state: s_state) state->checkConflicts(); if ( SRConflict::nConflicts() + RRConflict::nConflicts() != Rules::expectedConflicts() ) { if (SRConflict::nConflicts()) { wmsg << SRConflict::nConflicts() << " Shift/Reduce conflict(s)\n"; for (auto state: s_state) state->showSRConflicts(rules); // inserts into wmsg wmsg << flush; } if (RRConflict::nConflicts()) { wmsg << RRConflict::nConflicts() << " Reduce/Reduce conflict(s)" << '\n'; for (auto state: s_state) state->showRRConflicts(rules); // inserts into wmsg wmsg << flush; } } for (auto state: s_state) state->summarizeActions(); } bisonc++-4.13.01/stateitem/0000755000175000017500000000000012633316117014251 5ustar frankfrankbisonc++-4.13.01/stateitem/enlargela.cc0000644000175000017500000000035312633316117016513 0ustar frankfrank#include "stateitem.ih" bool StateItem::enlargeLA(LookaheadSet const &parentLA) { if (d_LA >= parentLA) // no additions needed return false; d_LA += parentLA; // enlarge the LA set return true; } bisonc++-4.13.01/stateitem/stateitem.h0000644000175000017500000001151112633316117016420 0ustar frankfrank#ifndef _INCLUDED_STATEITEM_ #define _INCLUDED_STATEITEM_ #include #include #include "../symbol/symbol.h" #include "../item/item.h" #include "../lookaheadset/lookaheadset.h" #include "../rmshift/rmshift.h" #include "../rmreduction/rmreduction.h" #include "../rrdata/rrdata.h" // See also README.states-and-conflicts for a description of StateItem. // // A StateItem represents an item in one of the grammar's states. 
The // information of a StateItem is displayed when --construction is used and is // of a form like // 0: [P1 1] S -> C . C { } 0 class StateItem { friend std::ostream &operator<<(std::ostream &out, StateItem const &stateItem); Item d_item; // The item LookaheadSet d_LA; // its Lookahead set size_t d_nextIdx; // offset in a Next array defining the // next state (initialized to npos by // default) static std::ostream &(StateItem::*s_insertPtr)(std::ostream &out) const; public: typedef std::vector Vector; typedef Vector::const_iterator ConstIter; StateItem(); StateItem(Item const &item); void setLA(LookaheadSet const &laSet); bool enlargeLA(LookaheadSet const &parentLA); size_t next() const; bool isReducible() const; Symbol const *symbolAtDot() const; Item const &item() const; LookaheadSet const &lookaheadSet() const; size_t lookaheadSetSize() const; Production const *production() const; void setNext(size_t next); Symbol const *precedence() const; // a Terminal size_t nr() const; // the item's production number Symbol const *lhs() const; // the lhs symbol of the // production rule of this item. static void addProduction(Production const *production, StateItem::Vector &stateItem, size_t idx); static bool containsKernelItem(Item const &item, size_t nKernelItems, Vector const &vector); static bool lookaheadContains(StateItem const &stateItem, Symbol const *symbol); static void removeReduction(RmReduction const &rm, Vector &itemVector); static void removeRRConflict(RRData const &rm, Vector &itemVector); static void inserter(std::ostream &(StateItem::*insertPtr) (std::ostream &out) const); std::ostream &plainItem(std::ostream &out) const; std::ostream &itemContext(std::ostream &out) const; }; inline void StateItem::setLA(LookaheadSet const &laSet) { d_LA = laSet; } inline void StateItem::addProduction(Production const *production, StateItem::Vector &stateItem, size_t idx) { stateItem.push_back(StateItem(Item(production))); } inline LookaheadSet const &StateItem::lookaheadSet() const { return d_LA; } inline size_t StateItem::lookaheadSetSize() const { return d_LA.fullSize(); } inline bool StateItem::isReducible() const { return d_item.isReducible(); } inline Symbol const *StateItem::symbolAtDot() const { return d_item.dotSymbolOr0(); } inline Symbol const *StateItem::lhs() const { return d_item.lhs(); } inline void StateItem::setNext(size_t next) { d_nextIdx = next; } inline size_t StateItem::next() const { return d_nextIdx; } inline Item const &StateItem::item() const { return d_item; } inline Symbol const *StateItem::precedence() const { return d_item.production()->precedence(); } inline size_t StateItem::nr() const { return d_item.production()->nr(); } inline void StateItem::removeRRConflict(RRData const &rm, Vector &itemVector) { itemVector[rm.reduceIdx()].d_LA -= rm.lookaheadSet(); } inline void StateItem::removeReduction(RmReduction const &rm, Vector &itemVector) { itemVector[rm.idx()].d_LA -= rm.symbol(); } inline Production const *StateItem::production() const { return d_item.production(); } inline bool StateItem::lookaheadContains(StateItem const &stateItem, Symbol const *symbol) { return stateItem.d_LA >= symbol; } inline void StateItem::inserter(std::ostream &(StateItem::*insertPtr) (std::ostream &out) const) { s_insertPtr = insertPtr; } inline std::ostream &operator<<(std::ostream &out, StateItem const &stateItem) { return (stateItem.*StateItem::s_insertPtr)(out); } #endif bisonc++-4.13.01/stateitem/containskernelitem.cc0000644000175000017500000000046612633316117020464 0ustar 
frankfrank#include "stateitem.ih" bool StateItem::containsKernelItem(Item const &searchItem, size_t nKernelItems, Vector const &vector) { for (size_t idx = 0; idx != nKernelItems; ++idx) if (searchItem == vector[idx].d_item) return true; return false; } bisonc++-4.13.01/stateitem/itemcontext.cc0000644000175000017500000000037212633316117017125 0ustar frankfrank#include "stateitem.ih" // Produces: // item - LA - next-index ostream &StateItem::itemContext(ostream &out) const { return out << d_item << " " << d_LA << " " << static_cast(d_nextIdx); } bisonc++-4.13.01/stateitem/data.cc0000644000175000017500000000025512633316117015473 0ustar frankfrank#include "stateitem.ih" ostream &(StateItem::*StateItem::s_insertPtr)(ostream &out) const = &StateItem::plainItem; bisonc++-4.13.01/stateitem/stateitem2.cc0000644000175000017500000000016312633316117016641 0ustar frankfrank#include "stateitem.ih" StateItem::StateItem(Item const &item) : d_item(item), d_nextIdx(string::npos) {} bisonc++-4.13.01/stateitem/plainitem.cc0000644000175000017500000000015212633316117016540 0ustar frankfrank#include "stateitem.ih" ostream &StateItem::plainItem(ostream &out) const { return out << d_item; } bisonc++-4.13.01/stateitem/frame0000644000175000017500000000006112633316117015263 0ustar frankfrank#include "stateitem.ih" StateItem::() const { } bisonc++-4.13.01/stateitem/stateitem1.cc0000644000175000017500000000012112633316117016632 0ustar frankfrank#include "stateitem.ih" StateItem::StateItem() : d_nextIdx(string::npos) {} bisonc++-4.13.01/stateitem/stateitem.ih0000644000175000017500000000012612633316117016571 0ustar frankfrank#include "stateitem.h" #include #include using namespace std; bisonc++-4.13.01/statetype/0000755000175000017500000000000012633316117014274 5ustar frankfrankbisonc++-4.13.01/statetype/statetype.h0000644000175000017500000000213312633316117016466 0ustar frankfrank#ifndef _INCLUDED_STATETYPE_ #define _INCLUDED_STATETYPE_ class StateType { public: enum Type // modify data.cc when this enum changes { NORMAL = 0, ERR_ITEM = 1, REQ_TOKEN = 2, // terminal shifts and multiple reductions DEF_RED = 4 // state has default reduction }; // Combinations may occur. 
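// Illustration (not part of the original header): the Type values are bit
// flags, so d_type may hold combinations. E.g., a state containing an error
// item that also has a default reduction combines ERR_ITEM | DEF_RED
// (1 | 4 == 5), for which typeName() returns "ERR_DEF" (see
// statetype/data.cc):
//
//      StateType stateType(StateType::ERR_ITEM);
//      stateType.setType(StateType::DEF_RED);
//      char const *name = StateType::typeName(stateType.type()); // "ERR_DEF"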
private: int d_type; // the type of a state static int const s_mask = 7; // mask for all legal Type values static char const *s_stateName[]; // array of all state type names public: StateType(int type); int type() const; void setType(Type type); static char const *typeName(int type); }; inline StateType::StateType(int type) : d_type(type) {} inline int StateType::type() const { return d_type; } inline void StateType::setType(Type type) { d_type |= type; } inline char const *StateType::typeName(int type) { return s_stateName[type & s_mask]; } #endif bisonc++-4.13.01/statetype/data.cc0000644000175000017500000000104512633316117015514 0ustar frankfrank#include "statetype.ih" char const *StateType::s_stateName[] = { "NORMAL", // 0 "ERR_ITEM", // 1: state has an error item "REQ_TOKEN", // 2: needs token at terminal shifts and // multiple reductions "ERR_REQ", // 3 "DEF_RED", // 4: state has a default reduction "ERR_DEF", // 5: 1 | 4 "REQ_DEF", // 6: 2 | 4 "ERR_REQ_DEF" // 7: 1 | 2 |4 }; bisonc++-4.13.01/statetype/frame0000644000175000017500000000006112633316117015306 0ustar frankfrank#include "statetype.ih" StateType::() const { } bisonc++-4.13.01/statetype/statetype.ih0000644000175000017500000000005612633316117016641 0ustar frankfrank#include "statetype.h" using namespace std; bisonc++-4.13.01/symbol/0000755000175000017500000000000012633316117013557 5ustar frankfrankbisonc++-4.13.01/symbol/destructor.cc0000644000175000017500000000005412633316117016263 0ustar frankfrank#include "symbol.ih" Symbol::~Symbol() { } bisonc++-4.13.01/symbol/symbol.h0000644000175000017500000000633312633316117015242 0ustar frankfrank#ifndef _INCLUDED_SYMBOL_ #define _INCLUDED_SYMBOL_ #include #include #include "../element/element.h" #include "../firstset/firstset.h" class Symbol: public Element { public: typedef std::vector Vector; enum Type { UNDETERMINED = 0, CHAR_TERMINAL = 1, SYMBOLIC_TERMINAL = 2, NON_TERMINAL = 4, RESERVED = 8, }; private: std::string const d_name; // Name of the symbol, assigned at // construction time, returned by name() std::string d_stype; // Type assigned by explicit symbol type // association, e.g. %type symbol // but there's also a type association with // symbols when no %union is specified. In // that case it's either the default (int) or // %stype-defined type. int d_type; // type of the symbol using the enum values of // the Type enum. Type's values are bit // values, and multiple values may therefore // be assigned. mutable bool d_used; // Set to true once the symbol has actually // been used in the grammar. 
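// Illustration (not part of the original header): Symbol::Type values are
// bit flags combined in d_type. E.g., a reserved symbolic terminal carries
// SYMBOLIC_TERMINAL | RESERVED (2 | 8 == 10); for such a symbol
// isTerminal(), isSymbolic() and isReserved() all return true, since
// isTerminal() merely checks that the NON_TERMINAL bit is off:
//
//      symbol->setReserved();                  // d_type |= RESERVED
//      bool terminal = symbol->isTerminal();   // !(d_type & NON_TERMINAL)
//      bool symbolic = symbol->isSymbolic();   // d_type & SYMBOLIC_TERMINAL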
public: ~Symbol(); bool isNonTerminal() const; bool isReserved() const; bool isSymbolic() const; bool isTerminal() const; bool isUndetermined() const; bool isUsed() const; std::string const &sType() const; std::string const &name() const; FirstSet const &firstSet() const; void setReserved(); void setStype(std::string const &stype); void setType(Type type); void used() const; // d_used is mutable.; protected: Symbol(std::string const &name, Type t, std::string const &type = ""); private: virtual FirstSet const &v_firstSet() const = 0; }; inline std::string const &Symbol::name() const { return d_name; } inline bool Symbol::isTerminal() const { return !(d_type & NON_TERMINAL); } inline bool Symbol::isSymbolic() const { return d_type & SYMBOLIC_TERMINAL; } inline bool Symbol::isNonTerminal() const { return d_type & NON_TERMINAL; } inline bool Symbol::isUndetermined() const { return d_type == UNDETERMINED; } inline bool Symbol::isUsed() const { return d_used; } inline bool Symbol::isReserved() const { return d_type & RESERVED; } inline void Symbol::setReserved() { d_type |= RESERVED; } inline void Symbol::used() const // d_used is mutable. { d_used = true; } inline void Symbol::setType(Type type) { d_type = type; } inline void Symbol::setStype(std::string const &stype) { d_stype = stype; } inline std::string const &Symbol::sType() const { return d_stype; } inline FirstSet const &Symbol::firstSet() const { return v_firstSet(); } // operator<< is already available through Element #endif bisonc++-4.13.01/symbol/frame0000644000175000017500000000005312633316117014572 0ustar frankfrank#include "symbol.ih" Symbol::() const { } bisonc++-4.13.01/symbol/symbol1.cc0000644000175000017500000000025012633316117015451 0ustar frankfrank#include "symbol.ih" Symbol::Symbol(string const &name, Type type, string const &stype) : d_name(name), d_stype(stype), d_type(type), d_used(false) {} bisonc++-4.13.01/symbol/symbol.ih0000644000175000017500000000005312633316117015404 0ustar frankfrank#include "symbol.h" using namespace std; bisonc++-4.13.01/symtab/0000755000175000017500000000000012633316117013551 5ustar frankfrankbisonc++-4.13.01/symtab/symtab.ih0000644000175000017500000000005312633316117015370 0ustar frankfrank#include "symtab.h" using namespace std; bisonc++-4.13.01/symtab/symtab.h0000644000175000017500000000156612633316117015231 0ustar frankfrank#ifndef _INCLUDED_SYMTAB_ #define _INCLUDED_SYMTAB_ #include #include #include "../symbol/symbol.h" #include "../terminal/terminal.h" #include "../nonterminal/nonterminal.h" // The symbol table holds the information about all symbols. It can be // queried to inspect whether it already contains an element, and it can // store an element. Elements are strings, produced by the reader, // representing terminal or non-terminal symbols. class Symtab: private std::unordered_map { typedef std::unordered_map Base; public: typedef Base::value_type value_type; Symbol *lookup(std::string const &symbol); // req'd for STL using Base::insert; // only map-members used using Base::find; using Base::erase; }; #endif bisonc++-4.13.01/symtab/frame0000644000175000017500000000005312633316117014564 0ustar frankfrank#include "symtab.ih" Symtab::() const { } bisonc++-4.13.01/symtab/lookup.cc0000644000175000017500000000022612633316117015371 0ustar frankfrank#include "symtab.ih" Symbol *Symtab::lookup(std::string const &symbol) { iterator it = find(symbol); return it == end() ? 
0 : it->second; } bisonc++-4.13.01/terminal/0000755000175000017500000000000012633316117014065 5ustar frankfrankbisonc++-4.13.01/terminal/setunique.cc0000644000175000017500000000031712633316117016417 0ustar frankfrank#include "terminal.ih" bool Terminal::setUnique(size_t value) { if (s_valueSet.count(value)) return false; // value already used s_valueSet.insert(value); return true; } bisonc++-4.13.01/terminal/destructor.cc0000644000175000017500000000006212633316117016570 0ustar frankfrank#include "terminal.ih" Terminal::~Terminal() { } bisonc++-4.13.01/terminal/setvalue.cc0000644000175000017500000000046212633316117016226 0ustar frankfrank#include "terminal.ih" void Terminal::setValue(size_t value) { if (!setUnique(value)) emsg << "Value " << value << " of token " << this << " multiply assigned " << endl; s_valueSet.erase(d_value); s_value = value + 1; d_value = value; } bisonc++-4.13.01/terminal/nameorvalue.cc0000644000175000017500000000026512633316117016715 0ustar frankfrank#include "terminal.ih" ostream &Terminal::nameOrValue(ostream &out) const { if (isReserved()) return out << d_readableLiteral; return out << setw(3) << value(); } bisonc++-4.13.01/terminal/unused.cc0000644000175000017500000000055512633316117015704 0ustar frankfrank#include "terminal.ih" void Terminal::unused(Terminal const *terminal) { static bool header = false; if (!terminal->isUsed()) { if (!header) { Global::plainWarnings(); wmsg << "Terminal symbol(s) not used in productions:" << endl; header = true; } wmsg << terminal << endl; } } bisonc++-4.13.01/terminal/terminal.ih0000644000175000017500000000030712633316117016222 0ustar frankfrank#include "terminal.h" #include #include #include #include namespace Global { void plainWarnings(); } using namespace std; using namespace FBB; bisonc++-4.13.01/terminal/valuequotedname.cc0000644000175000017500000000035312633316117017574 0ustar frankfrank#include "terminal.ih" std::ostream &Terminal::valueQuotedName(std::ostream &out) const { if (isReserved()) return out << " " << name(); return out << " " << setw(3) << value() << ": " << d_readableLiteral; } bisonc++-4.13.01/terminal/quotedname.cc0000644000175000017500000000045512633316117016542 0ustar frankfrank#include "terminal.ih" namespace { char const *quotes[][2] = { {"`", ""}, {"'", ""} }; } ostream &Terminal::quotedName(ostream &out) const { size_t qIdx = (d_readableLiteral[0] == '\''); return out << quotes[0][qIdx] << d_readableLiteral << quotes[1][qIdx]; } bisonc++-4.13.01/terminal/compareprecedence.cc0000644000175000017500000000111212633316117020033 0ustar frankfrank#include "terminal.ih" Terminal::Precedence Terminal::comparePrecedence(Symbol const *firstSymbol, Symbol const *secondSymbol) { Terminal const *first = downcast(firstSymbol); Terminal const *second = downcast(secondSymbol); size_t firstPrecedence = first ? first->d_precedence : 0; size_t secondPrecedence = second ? second->d_precedence : 0; return firstPrecedence > secondPrecedence ? LARGER : firstPrecedence < secondPrecedence ? 
SMALLER : EQUAL; } bisonc++-4.13.01/terminal/data.cc0000644000175000017500000000103112633316117015300 0ustar frankfrank#include "terminal.ih" size_t Terminal::s_precedence; set Terminal::s_valueSet; size_t Terminal::s_value = Terminal::INITIAL_SYMBOLIC_VALUE; size_t Terminal::s_maxValue; char const *Terminal::s_association[] = { "", // UNDEFINED, "non-associative", // NONASSOC, "left associative", // LEFT, "right associative", // RIGHT }; ostream &(Terminal::*Terminal::s_insertPtr)(ostream &out) const = &Terminal::plainName; bisonc++-4.13.01/terminal/frame0000644000175000017500000000005712633316117015104 0ustar frankfrank#include "terminal.ih" Terminal::() const { } bisonc++-4.13.01/terminal/terminal2.cc0000644000175000017500000000070512633316117016273 0ustar frankfrank#include "terminal.ih" #include // reserved terminals are defined by default in the share/bisonc++ skeleton, // and therefore do not need assigned values. Terminal::Terminal(string const &name, string const &literal, Type type) : Symbol(name, type, ""), d_value(0), d_association(UNDEFINED), d_precedence(s_precedence), d_literal(literal), d_readableLiteral(literal), d_firstSet(this) { setReserved(); } bisonc++-4.13.01/terminal/terminal.h0000644000175000017500000001731512633316117016060 0ustar frankfrank#ifndef _INCLUDED_TERMINAL_ #define _INCLUDED_TERMINAL_ #include #include #include #include #include "../symbol/symbol.h" class Terminal: public Symbol { public: typedef std::vector Vector; typedef std::vector ConstVector; typedef ConstVector::const_iterator ConstIter; enum { INITIAL_SYMBOLIC_VALUE = 257, // See rules/data.cc for Terminals // defined by default. DEFAULT = std::numeric_limits::max() // results in the next symbolic // terminal value to be assigned. }; enum Association // adapt s_association[] in data.cc { // when this changes UNDEFINED, NONASSOC, LEFT, RIGHT, }; enum Precedence // returned by comparePrecedence() { SMALLER = -1, EQUAL = 0, LARGER = 1, }; private: size_t d_value; // value assigned to the symbol Association d_association; // association type of the symbol size_t d_precedence; // precedence value of the symbol std::string d_literal; // literal text value of a // symbol. std::string d_readableLiteral; // with character terminals this // contains a quoted character, maybe // escaped as in '\n' FirstSet d_firstSet; // set of symbols that can be seen at // this terminal symbol. It only // contains the current terminal // symbol, but is returned by // Element::firstSet() through the // virtual v_firstSet() function below static std::set s_valueSet; // all terminal token values static size_t s_precedence; // last-used precedence so far static char const *s_association[]; // array of literal association // names static size_t s_value; // value assigned, unless // explictly defined static size_t s_maxValue; // maximum assigned terminal value static std::ostream &(Terminal::*s_insertPtr)(std::ostream &out) const; // pointer to the insertion function // to use. public: Terminal(std::string const &name, Type type, size_t value = DEFAULT, Association association = UNDEFINED, std::string const &stype = ""); // stype: type assigned by // explicit symbol type association, // e.g. 
%type symbol Terminal(std::string const &name, // used for reserved terminals std::string const &literal, Type type); virtual ~Terminal(); Association association() const; Precedence comparePrecedence(Terminal const *other) const; size_t precedence() const; void setLiteral(std::string const &literal); void setValue(size_t value); // reassign a token value void setPrecedence(size_t value); static Terminal *downcast(Symbol *sp); static Terminal const *downcast(Element const *sp); // used with Element by Writer static bool compareValues(Terminal const *left, Terminal const *right); static Precedence comparePrecedence(Symbol const *first, Symbol const *second); static void incrementPrecedence(); static void resetPrecedence(); // see Parser::parseDeclarations() static size_t sPrecedence(); static void set_sPrecedence(size_t prec); static bool setUnique(size_t value); // true if unique static void unused(Terminal const *terminal); static size_t maxValue(); static void inserter(std::ostream &(Terminal::*insertPtr) (std::ostream &out) const); // Symbolic as name, chars as // quoted chars std::ostream &plainName(std::ostream &out) const; // Symbolic as quoted names, // chars as char consts std::ostream "edName(std::ostream &out) const; // Value followed by quoted name std::ostream &valueQuotedName(std::ostream &out) const; // Values or names (of reserved tokens) std::ostream &nameOrValue(std::ostream &out) const; private: virtual FirstSet const &v_firstSet() const; virtual size_t v_value() const; virtual std::ostream &insert(std::ostream &out) const; }; inline std::ostream &operator<<(std::ostream &out, std::ostream &(Terminal::*insertPtr)(std::ostream &out) const) { Terminal::inserter(insertPtr); return out; } inline std::ostream &Terminal::plainName(std::ostream &out) const { return out << d_readableLiteral; } inline Terminal *Terminal::downcast(Symbol *sp) { return dynamic_cast(sp); } inline Terminal const *Terminal::downcast(Element const *sp) { return dynamic_cast(sp); } inline void Terminal::incrementPrecedence() { ++s_precedence; } inline void Terminal::resetPrecedence() // see Parser::parseDeclarations() { s_precedence = 0; } inline size_t Terminal::maxValue() { return s_maxValue; } inline Terminal::Association Terminal::association() const { return d_association; } inline Terminal::Precedence Terminal::comparePrecedence( Terminal const *other) const { return d_precedence > other->d_precedence ? LARGER : d_precedence < other->d_precedence ? 
SMALLER : EQUAL; } inline size_t Terminal::precedence() const { return d_precedence; } inline size_t Terminal::v_value() const { return d_value; } inline FirstSet const &Terminal::v_firstSet() const { return d_firstSet; } inline void Terminal::setLiteral(std::string const &literal) { d_literal = literal; } inline void Terminal::setPrecedence(size_t value) { d_precedence = value; } inline size_t Terminal::sPrecedence() { return s_precedence; } inline void Terminal::set_sPrecedence(size_t prec) { s_precedence = prec; } inline bool Terminal::compareValues(Terminal const *left, Terminal const *right) { return left->d_value < right->d_value; } inline std::ostream &Terminal::insert(std::ostream &out) const { return (this->*Terminal::s_insertPtr)(out); } inline void Terminal::inserter(std::ostream &(Terminal::*insertPtr) (std::ostream &out) const) { s_insertPtr = insertPtr; } // operator<< is already available through Element #endif bisonc++-4.13.01/terminal/terminal1.cc0000644000175000017500000000235412633316117016274 0ustar frankfrank#include "terminal.ih" #include Terminal::Terminal(string const &name, Type type, size_t value, Association association, std::string const &stype) : Symbol(name, type, stype), d_value(value == DEFAULT ? s_value++ : value), d_association(association), d_precedence(s_precedence), d_literal(name), d_readableLiteral(name), d_firstSet(this) { if (name.find("'\\x") == 0) { int charRepresentation; istringstream convert(name.substr(3)); convert >> hex >> charRepresentation; if (isprint(charRepresentation)) { d_readableLiteral = "'"; d_readableLiteral.append(1, static_cast(charRepresentation)); d_readableLiteral += "'"; } } // If the terminal definition is really requested (it isn't shown in bisonc++ // 2.8.0) then pass yylineno from parser/useterminal.cc and // parser/defineterminal.cc to this function so the line can be shown. // // imsg.setLineNr(lineNr); // imsg << "Defining terminal " << d_readableLiteral << ": pri = " << // d_precedence << endl; if (d_value > s_maxValue) s_maxValue = d_value; } bisonc++-4.13.01/usage.cc0000644000175000017500000001412512633316117013670 0ustar frankfrank#include "bisonc++.ih" void usage(string const &program_name) { cout << "\n" << program_name << " by Frank B. Brokken (f.b.brokken@rug.nl)\n" "\n" "LALR(1) Parser Generator V " << version << "\n" "Copyright (c) GPL " << year << ". NO WARRANTY.\n" "Designed after `bison++' (1.21.9-1) by Alain Coetmeur " "\n" "\n" "Usage: " << program_name << " [OPTIONS] file\n" "Where:\n" " [OPTIONS] - zero or more optional arguments (int options between\n" " parentheses. Short options require arguments if their\n" " long option variants do too):\n" " --analyze-only (-A): only analyze the grammar; except for possibly\n" " the verbose grammar description file no files are written.\n" " --baseclass-preinclude=
(-H):\n" " preinclude header in the base-class header file.\n" " Use [header] to include
, otherwise \"header\"\n" " will be included.\n" " --baseclass-header=
(-b):\n" " filename holding the base class definition.\n" " --baseclass-skeleton= (-B):\n" " location of the baseclass header skeleton.\n" " --class-header=
(-c):\n" " filename holding the parser class definition.\n" " --class-skeleton= (-C):\n" " location of the class header skeleton.\n" " --construction: write details about the grammar analysis to stdout.\n" " --debug: generates debug output statements in the generated parse\n" " function's source.\n" " --error-verbose: the parse function will dump the parser's state\n" " stack to stdout when a syntactic error is reported\n" " --filenames= (-f):\n" " filename of output files (overruling the default filename).\n" " --help (-h): produce this information (and terminate).\n" " --implementation-header=
(-i):\n" " filename holding the implementation header.\n" " --implementation-skeleton= (-I):\n" " location of the implementation header skeleton.\n" " --include-only: catenate all grammar files in their order of\n" " processing to the standard output stream and terminate.\n" " --insert-stype: show selected semantic values in the output " "generated\n" " by --debug. Ignored unless --debug was specified.\n" " --max-inclusion-depth=:\n" " sets the maximum number of nested grammar files (default: " "10).\n" " --namespace= (-n):\n" " define the parser in the mentioned namespace.\n" " --no-baseclass-header: don't create the parser's base class header.\n" " --no-decoration (-D): do not include the user-defined actions when\n" " generating the parser's tt(parse) member.\n" " --no-default-action-return (-N): do not use the default $$ = $1\n" " assignment of semantic values when returning from an action\n" " block\n" " --no-lines: don't put #line directives in generated output,\n" " overruling the %lines directive.\n" " --no-parse-member: don't create the member parse().\n" " --own-debug:\n" " bisonc++ displays the actions of its parser while " "processing\n" " its input file(s) (implies --verbose).\n" " --own-tokens (-t):\n" " bisonc++ displays the tokens and their corresponding\n" " matched text it received from its lexcial scanner.\n" " --parser-skeleton= (-P):\n" " location of the parse function's skeleton.\n" " --parsefun-source= (-p):\n" " filename holding the parse function's source.\n" " --polymorphic-inline-skeleton= (-m):\n" " location of the polymorphic inline functions skeleton.\n" " --polymorphic-skeleton= (-M):\n" " location of the polymorphic semantic values skeleton.\n" " --print-tokens (-t):\n" " the print() member of the generated parser class displays\n" " the tokens and their corresponding matched text.\n" " --required-tokens=:\n" " minimum number of successfully processed tokens between\n" " errors (default: 0).\n" " --scanner= (-s):\n" " include `header-file' declaring the class Scanner, and call\n" " d_scanner.yylex() from Parser::lex().\n" " --scanner-class-name=:\n" " specifies the name of the scanner class: this option is\n" " only interpreted if --scanner (or %scanner) is also used.\n" " --scanner-debug: extensive display of the actions of bisonc++'s " "scanner\n" " --scanner-token-function=:\n" " define the function called from lex() returning the next\n" " token returned (by default d_scanner.yylex() when --scanner\n" " is used)\n" " --show-filenames: show the names of the used/generated files on\n" " the standard error stream.\n" " --skeleton-directory= (-S):\n" " location of the skeleton directory.\n" " --thread-safe: no static data are modified, making bisonc++'s\n" " generated code thread-safe.\n" " --usage: produce this information (and terminate).\n" " --verbose (-V):\n" " generate verbose description of the analyzed grammar.\n" " --version (-v):\n" " display " << program_name << "'s version and terminate.\n" << endl; } bisonc++-4.13.01/version.cc0000644000175000017500000000013312634617625014254 0ustar frankfrank#include "bisonc++.ih" #include "VERSION" char version[] = VERSION; char year[] = YEARS; bisonc++-4.13.01/writer/0000755000175000017500000000000012633316117013566 5ustar frankfrankbisonc++-4.13.01/writer/symbolicnames.cc0000644000175000017500000000156212633316117016746 0ustar frankfrank#include "writer.ih" void Writer::symbolicNames() const { *d_out << "typedef std::unordered_map SMap;\n" "typedef SMap::value_type SMapVal;\n" "\n" "SMapVal s_symArr[] =\n" 
"{\n" // _UNDETERMINED_ is also used in generator/print.cc " SMapVal(-2, \"_UNDETERMINED_\"), // predefined symbols\n" " SMapVal(-1, \"_EOF_\"),\n" " SMapVal(256, \"_error_\"),\n" "\n"; for (auto terminal: d_rules.terminals()) terminalSymbol(terminal, *d_out); for (auto nonTerminal: d_rules.nonTerminals()) nonTerminalSymbol(nonTerminal, *d_out); *d_out << "};\n" "\n" "SMap s_symbol\n" "(\n" " s_symArr, s_symArr + sizeof(s_symArr) / sizeof(SMapVal)\n" ");\n" "\n"; } bisonc++-4.13.01/writer/reductionsymbol.cc0000644000175000017500000000106512633316117017321 0ustar frankfrank#include "writer.ih" void Writer::reductionSymbol(Element const *symb, size_t ruleNr, FBB::Table &table) { Symbol const *symbol = dynamic_cast(symb); ostringstream out; Terminal::inserter(&Terminal::nameOrValue); NonTerminal::inserter(&NonTerminal::value); out << symbol; table << out.str() << -static_cast(ruleNr); out.str(""); Terminal::inserter(&Terminal::plainName); NonTerminal::inserter(&NonTerminal::plainName); out << "// " << symbol; table << out.str(); } bisonc++-4.13.01/writer/insert.cc0000644000175000017500000000055312633316117015404 0ustar frankfrank#include "writer.ih" void Writer::insert(Terminal::ConstVector const &tokens) const { *d_out << "\n" " // Symbolic tokens:\n" " enum Tokens__\n" " {\n"; size_t lastTokenValue = 0; for (auto token: tokens) insertToken(token, lastTokenValue, *d_out); *d_out << " };\n" "\n"; } bisonc++-4.13.01/writer/transition.cc0000644000175000017500000000115212633316117016266 0ustar frankfrank#include "writer.ih" // Writes an SR element: // // { { symbol }, {nextstate} }, // comment void Writer::transition(Next const &next, Table &table) { if (Symbol const *symbol = next.symbol()) { ostringstream out; Terminal::inserter(&Terminal::nameOrValue); NonTerminal::inserter(&NonTerminal::value); out << symbol; table << out.str() << next.next(); out.str(""); Terminal::inserter(&Terminal::plainName); NonTerminal::inserter(&NonTerminal::plainName); out << "// " << symbol; table << out.str(); } } bisonc++-4.13.01/writer/srtables.cc0000644000175000017500000000102412633316117015711 0ustar frankfrank#include "writer.ih" void Writer::srTables() const { *d_out << "\n" "// State info and SR__ transitions for each state.\n" "\n"; TableSupport support; support << " { { " << "}, { " << "} }, "; Table table(support, 3, Table::ROWWISE); table << Align(2, std::left); for_each( State::begin(), State::end(), [&](State const *state) { srTable(state, d_baseclass, table, *d_out); } ); *d_out << '\n'; } bisonc++-4.13.01/writer/nonterminalsymbol.cc0000644000175000017500000000033612633316117017653 0ustar frankfrank#include "writer.ih" void Writer::nonTerminalSymbol(NonTerminal const *nonTerminal, ostream &out) { out << " SMapVal(" << nonTerminal->value() << ", \"" << nonTerminal << "\"),\n"; } bisonc++-4.13.01/writer/reduction.cc0000644000175000017500000000050412633316117016070 0ustar frankfrank#include "writer.ih" void Writer::reduction(Table &table, StateItem const &stateItem) { size_t ruleNr = stateItem.nr(); for (auto sym: stateItem.lookaheadSet()) reductionSymbol(sym, ruleNr, table); if (stateItem.lookaheadSet().hasEOF()) reductionSymbol(Rules::eofTerminal(), ruleNr, table); } bisonc++-4.13.01/writer/terminalsymbol.cc0000644000175000017500000000042212633316117017134 0ustar frankfrank#include "writer.ih" void Writer::terminalSymbol(Terminal const *terminal, ostream &out) { if (terminal->isSymbolic() && !terminal->isReserved()) out << " SMapVal(" << terminal->value() << ", \"" << terminal << "\"),\n"; } 
bisonc++-4.13.01/writer/data.cc0000644000175000017500000000007612633316117015011 0ustar frankfrank#include "writer.ih" char const *Writer::s_threadConst = ""; bisonc++-4.13.01/writer/reductions.cc0000644000175000017500000000061512633316117016256 0ustar frankfrank#include "writer.ih" void Writer::reductions(Table &table, State const &state) { for ( size_t idx = 0, defaultReduction = state.defaultReduction(), nReductions = state.reductions(); idx != nReductions; ++idx ) { if (idx != defaultReduction) reduction(table, *state.reduction(idx)); } } bisonc++-4.13.01/writer/productioninfo.cc0000644000175000017500000000043512633316117017141 0ustar frankfrank#include "writer.ih" void Writer::productionInfo(Production const *production, ostream &out) { NonTerminal const *np = NonTerminal::downcast(production->lhs()); out << " {" << np->value() << ", " << production->size() << "}, // " << production << '\n'; } bisonc++-4.13.01/writer/writer.h0000644000175000017500000000353512633316117015261 0ustar frankfrank#ifndef _INCLUDED_WRITER_ #define _INCLUDED_WRITER_ #include #include #include #include "../next/next.h" class Terminal; class NonTerminal; class Production; class StateItem; class State; class Rules; class Writer { std::ostream *d_out; std::string const &d_baseclass; Rules const &d_rules; static char const *s_threadConst; public: Writer(std::string const &baseclass, Rules const &rules); void useStream(std::ostream &out); void productions() const; void srTables() const; void statesArray() const; void symbolicNames() const; void insert(Terminal::ConstVector const &tokens) const; private: static void insertToken(Terminal const *token, size_t &lastValue, std::ostream &out); static void terminalSymbol(Terminal const *terminal, std::ostream &out); static void nonTerminalSymbol(NonTerminal const *nonTerminal, std::ostream &out); static void productionInfo(Production const *production, std::ostream &out); static void srTable(State const *state, std::string const &baseclassScope, FBB::Table &table, std::ostream &out); static void transitions(FBB::Table &table, Next::Vector const &next); static void transition(Next const &next, FBB::Table &table); static void reductions(FBB::Table &, State const &state); static void reduction(FBB::Table &, StateItem const &stateItem); static void reductionSymbol(Element const *sym, size_t ruleNr, FBB::Table &table); }; inline void Writer::useStream(std::ostream &out) { d_out = &out; } #endif bisonc++-4.13.01/writer/statesarray.cc0000644000175000017500000000075512633316117016446 0ustar frankfrank#include "writer.ih" void Writer::statesArray() const { size_t const nPerLine = 10; *d_out << "\n" "// State array:\n" "SR__ " << s_threadConst << "*s_state[] =\n" "{\n"; for (size_t idx = 0; idx != State::nStates(); ++idx) *d_out << " s_" << idx << "," << ((idx + 1) % nPerLine == 0 ? "\n" : ""); *d_out << (State::nStates() % nPerLine ? 
"\n" : "") << "};\n" "\n"; } bisonc++-4.13.01/writer/transitions.cc0000644000175000017500000000023512633316117016452 0ustar frankfrank#include "writer.ih" void Writer::transitions(Table &table, Next::Vector const &next) { for (auto &element: next) transition(element, table); } bisonc++-4.13.01/writer/frame0000644000175000017500000000004312633316117014600 0ustar frankfrank#include "writer.ih" Writer:: { } bisonc++-4.13.01/writer/writer.ih0000644000175000017500000000062112633316117015423 0ustar frankfrank#include "writer.h" #include #include #include #include #include "../symbol/symbol.h" #include "../terminal/terminal.h" #include "../nonterminal/nonterminal.h" #include "../lookaheadset/lookaheadset.h" #include "../stateitem/stateitem.h" #include "../state/state.h" #include "../rules/rules.h" using namespace std; using namespace FBB; bisonc++-4.13.01/writer/writer0.cc0000644000175000017500000000035712633316117015476 0ustar frankfrank#include "writer.ih" Writer::Writer(std::string const &baseclass, Rules const &rules) : d_out(0), d_baseclass(baseclass), d_rules(rules) { if (Arg::instance().option(0, "thread-safe")) s_threadConst = "const "; } bisonc++-4.13.01/writer/productions.cc0000644000175000017500000000067112633316117016452 0ustar frankfrank#include "writer.ih" void Writer::productions() const { Production::ConstVector const &prods = d_rules.productions(); *d_out << "\n" "// Productions Info Records:\n" << "PI__ const s_productionInfo[] = \n" "{\n" " {0, 0}, // not used: reduction values are negative\n"; for (auto production: prods) productionInfo(production, *d_out); *d_out << "};\n"; } bisonc++-4.13.01/writer/srtable.cc0000644000175000017500000000311212633316117015526 0ustar frankfrank#include "writer.ih" void Writer::srTable(State const *sp, std::string const &baseclassScope, FBB::Table &table, std::ostream &out) { bool acceptState = sp->isAcceptState(); StateItem const *defaultReduction = sp->reduction(sp->defaultReduction()); // A token is needed if there are terminaltransitions or multiple // reductions (or for the accept state, set at state/define.cc) bool tokenNeeded = sp->terminalTransitions() || sp->reductions() > 1; int stateType = sp->type(); if (tokenNeeded) stateType |= StateType::REQ_TOKEN; if (defaultReduction) stateType |= StateType::DEF_RED; out << "\n" // Write the table header "SR__ " << s_threadConst << "s_" << sp->idx() << "[] =\n" "{\n"; table.clear(); // 2nd element equals the index of the last array element table << StateType::typeName(stateType) << sp->transitions() + sp->reductionsLAsize() + acceptState + 1 << def; // cerr << (sp->transitions() + sp->reductionsLAsize() + acceptState + 1) << // " = " << sp->transitions() << ' ' << sp->reductionsLAsize() << ' ' << // acceptState << " 1\n"; transitions(table, sp->next()); if (acceptState) table << Rules::eofTerminal() << "PARSE_ACCEPT" << def; reductions(table, *sp); table << 0 << (defaultReduction ? -static_cast(defaultReduction->nr()) : 0) << def; out << table << "};\n"; // table ends in a newline } bisonc++-4.13.01/writer/inserttoken.cc0000644000175000017500000000051412633316117016442 0ustar frankfrank#include "writer.ih" void Writer::insertToken(Terminal const *token, size_t &lastTokenValue, std::ostream &out) { out << " " << token; if (token->value() != ++lastTokenValue) { lastTokenValue = token->value(); out << " = " << lastTokenValue; } out << ",\n"; }