Pegex-0.60/0000755000175000017500000000000012462227077011216 5ustar ingyingyPegex-0.60/META.yml0000644000175000017500000000127712462227077012476 0ustar ingyingy--- abstract: 'Acmeist PEG Parser Framework' author: - 'Ingy döt Net ' build_requires: YAML::XS: '0' configure_requires: ExtUtils::MakeMaker: '0' File::ShareDir::Install: '0.06' dynamic_config: 0 generated_by: 'Dist::Zilla version 5.029, CPAN::Meta::Converter version 2.143240' license: perl meta-spec: url: http://module-build.sourceforge.net/META-spec-v1.4.html version: '1.4' name: Pegex no_index: directory: - inc - t - xt - example requires: perl: v5.8.1 resources: bugtracker: https://github.com/ingydotnet/pegex-pm/issues homepage: https://github.com/ingydotnet/pegex-pm repository: https://github.com/ingydotnet/pegex-pm.git version: '0.60' Pegex-0.60/LICENSE0000644000175000017500000004365612462227077012241 0ustar ingyingyThis software is copyright (c) 2015 by Ingy döt Net. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. Terms of the Perl programming language system itself a) the GNU General Public License as published by the Free Software Foundation; either version 1, or (at your option) any later version, or b) the "Artistic License" --- The GNU General Public License, Version 1, February 1989 --- This software is Copyright (c) 2015 by Ingy döt Net. This is free software, licensed under: The GNU General Public License, Version 1, February 1989 GNU GENERAL PUBLIC LICENSE Version 1, February 1989 Copyright (C) 1989 Free Software Foundation, Inc. 51 Franklin St, Suite 500, Boston, MA 02110-1335 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The license agreements of most software companies try to keep users at the mercy of those companies. By contrast, our General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. The General Public License applies to the Free Software Foundation's software and to any other program whose authors commit to using it. You can use it for your programs, too. When we speak of free software, we are referring to freedom, not price. Specifically, the General Public License is designed to make sure that you have the freedom to give away or sell copies of free software, that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of a such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must tell them their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. 
If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License Agreement applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any work containing the Program or a portion of it, either verbatim or with modifications. Each licensee is addressed as "you". 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this General Public License and to the absence of any warranty; and give any other recipients of the Program a copy of this General Public License along with the Program. You may charge a fee for the physical act of transferring a copy. 2. You may modify your copy or copies of the Program or any portion of it, and copy and distribute such modifications under the terms of Paragraph 1 above, provided that you also do the following: a) cause the modified files to carry prominent notices stating that you changed the files and the date of any change; and b) cause the whole of any work that you distribute or publish, that in whole or in part contains the Program or any part thereof, either with or without modifications, to be licensed at no charge to all third parties under the terms of this General Public License (except that you may choose to grant warranty protection to some or all third parties, at your option). c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the simplest and most usual way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this General Public License. d) You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. Mere aggregation of another independent work with the Program (or its derivative) on a volume of a storage or distribution medium does not bring the other work under the scope of these terms. 3. 
You may copy and distribute the Program (or a portion or derivative of it, under Paragraph 2) in object code or executable form under the terms of Paragraphs 1 and 2 above provided that you also do one of the following: a) accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Paragraphs 1 and 2 above; or, b) accompany it with a written offer, valid for at least three years, to give any third party free (except for a nominal charge for the cost of distribution) a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Paragraphs 1 and 2 above; or, c) accompany it with the information you received as to where the corresponding source code may be obtained. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form alone.) Source code for a work means the preferred form of the work for making modifications to it. For an executable file, complete source code means all the source code for all modules it contains; but, as a special exception, it need not include source code for modules which are standard libraries that accompany the operating system on which the executable file runs, or for standard header files or definitions files that accompany that operating system. 4. You may not copy, modify, sublicense, distribute or transfer the Program except as expressly provided under this General Public License. Any attempt otherwise to copy, modify, sublicense, distribute or transfer the Program is void, and will automatically terminate your rights to use the Program under this License. However, parties who have received copies, or rights to use copies, from you under this General Public License will not have their licenses terminated so long as such parties remain in full compliance. 5. By copying, distributing or modifying the Program (or any work based on the Program) you indicate your acceptance of this license to do so, and all its terms and conditions. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. 7. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of the license which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the license, you may choose any version ever published by the Free Software Foundation. 8. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. 
Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 9. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 10. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS Appendix: How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to humanity, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) 19yy This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 1, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston MA 02110-1301 USA Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) 19xx name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. 
Here a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (a program to direct compilers to make passes at assemblers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice That's all there is to it! --- The Artistic License 1.0 --- This software is Copyright (c) 2015 by Ingy döt Net. This is free software, licensed under: The Artistic License 1.0 The Artistic License Preamble The intent of this document is to state the conditions under which a Package may be copied, such that the Copyright Holder maintains some semblance of artistic control over the development of the package, while giving the users of the package the right to use and distribute the Package in a more-or-less customary fashion, plus the right to make reasonable modifications. Definitions: - "Package" refers to the collection of files distributed by the Copyright Holder, and derivatives of that collection of files created through textual modification. - "Standard Version" refers to such a Package if it has not been modified, or has been modified in accordance with the wishes of the Copyright Holder. - "Copyright Holder" is whoever is named in the copyright or copyrights for the package. - "You" is you, if you're thinking about copying or distributing this Package. - "Reasonable copying fee" is whatever you can justify on the basis of media cost, duplication charges, time of people involved, and so on. (You will not be required to justify it to the Copyright Holder, but only to the computing community at large as a market that must bear the fee.) - "Freely Available" means that no fee is charged for the item itself, though there may be fees involved in handling the item. It also means that recipients of the item may redistribute it under the same conditions they received it. 1. You may make and give away verbatim copies of the source form of the Standard Version of this Package without restriction, provided that you duplicate all of the original copyright notices and associated disclaimers. 2. You may apply bug fixes, portability fixes and other modifications derived from the Public Domain or from the Copyright Holder. A Package modified in such a way shall still be considered the Standard Version. 3. You may otherwise modify your copy of this Package in any way, provided that you insert a prominent notice in each changed file stating how and when you changed that file, and provided that you do at least ONE of the following: a) place your modifications in the Public Domain or otherwise make them Freely Available, such as by posting said modifications to Usenet or an equivalent medium, or placing the modifications on a major archive site such as ftp.uu.net, or by allowing the Copyright Holder to include your modifications in the Standard Version of the Package. b) use the modified Package only within your corporation or organization. c) rename any non-standard executables so the names do not conflict with standard executables, which must also be provided, and provide a separate manual page for each non-standard executable that clearly documents how it differs from the Standard Version. d) make other distribution arrangements with the Copyright Holder. 4. You may distribute the programs of this Package in object code or executable form, provided that you do at least ONE of the following: a) distribute a Standard Version of the executables and library files, together with instructions (in the manual page or equivalent) on where to get the Standard Version. 
b) accompany the distribution with the machine-readable source of the Package with your modifications. c) accompany any non-standard executables with their corresponding Standard Version executables, giving the non-standard executables non-standard names, and clearly documenting the differences in manual pages (or equivalent), together with instructions on where to get the Standard Version. d) make other distribution arrangements with the Copyright Holder. 5. You may charge a reasonable copying fee for any distribution of this Package. You may charge any fee you choose for support of this Package. You may not charge a fee for this Package itself. However, you may distribute this Package in aggregate with other (possibly commercial) programs as part of a larger (possibly commercial) software distribution provided that you do not advertise this Package as a product of your own. 6. The scripts and library files supplied as input to or produced as output from the programs of this Package do not automatically fall under the copyright of this Package, but belong to whomever generated them, and may be sold commercially, and may be aggregated with this Package. 7. C or perl subroutines supplied by you and linked into this Package shall not be considered part of this Package. 8. The name of the Copyright Holder may not be used to endorse or promote products derived from this software without specific prior written permission. 9. THIS PACKAGE IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE. The End Pegex-0.60/Makefile.PL0000644000175000017500000000245212462227077013173 0ustar ingyingy # This file was automatically generated by Dist::Zilla::Plugin::MakeMaker v5.029. use strict; use warnings; use 5.008001; use ExtUtils::MakeMaker; use File::ShareDir::Install; $File::ShareDir::Install::INCLUDE_DOTFILES = 1; $File::ShareDir::Install::INCLUDE_DOTDIRS = 1; install_share dist => "share"; my %WriteMakefileArgs = ( "ABSTRACT" => "Acmeist PEG Parser Framework", "AUTHOR" => "Ingy d\x{f6}t Net ", "CONFIGURE_REQUIRES" => { "ExtUtils::MakeMaker" => 0, "File::ShareDir::Install" => "0.06" }, "DISTNAME" => "Pegex", "EXE_FILES" => [], "LICENSE" => "perl", "MIN_PERL_VERSION" => "5.008001", "NAME" => "Pegex", "PREREQ_PM" => {}, "TEST_REQUIRES" => { "YAML::XS" => 0 }, "VERSION" => "0.60", "test" => { "TESTS" => "t/*.t" } ); my %FallbackPrereqs = ( "ExtUtils::MakeMaker" => 0, "File::ShareDir::Install" => "0.06", "YAML::XS" => 0 ); unless ( eval { ExtUtils::MakeMaker->VERSION(6.63_03) } ) { delete $WriteMakefileArgs{TEST_REQUIRES}; delete $WriteMakefileArgs{BUILD_REQUIRES}; $WriteMakefileArgs{PREREQ_PM} = \%FallbackPrereqs; } delete $WriteMakefileArgs{CONFIGURE_REQUIRES} unless eval { ExtUtils::MakeMaker->VERSION(6.52) }; WriteMakefile(%WriteMakefileArgs); { package MY; use File::ShareDir::Install qw(postamble); } Pegex-0.60/README0000644000175000017500000000711712462227077012104 0ustar ingyingyNAME Pegex - Acmeist PEG Parser Framework VERSION This document describes Pegex version 0.60. 
"; SYNOPSIS use Pegex; my $result = pegex($grammar)->parse($input); or with options: use Pegex; use ReceiverClass; my $parser = pegex($grammar, 'ReceiverClass'); my $result = $parser->parse($input); or more explicitly: use Pegex::Parser; use Pegex::Grammar; my $pegex_grammar = Pegex::Grammar->new( text => $grammar, ); my $parser = Pegex::Parser->new( grammar => $pegex_grammar, ); my $result = $parser->parse($input); or customized explicitly: { package MyGrammar; use Pegex::Base; extends 'Pegex::Grammar'; has text => "your grammar definition text goes here"; has receiver => "MyReceiver"; } { package MyReceiver; use base 'Pegex::Receiver'; got_some_rule { ... } got_other_rule { ... } } use Pegex::Parser; my $parser = Pegex::Parser->new( grammar => MyGrammar->new, receiver => MyReceiver->new, ); $parser->parse($input); my $result = $parser->receiver->data; DESCRIPTION Pegex is an Acmeist parser framework. It allows you to easily create parsers that will work equivalently in lots of programming languages! The inspiration for Pegex comes from the parsing engine upon which the postmodern programming language Perl 6 is based on. Pegex brings this beauty to the other justmodern languages that have a normal regular expression engine available. Pegex gets it name by combining Parsing Expression Grammars (PEG), with Regular Expessions (Regex). That's actually what Pegex does. PEG is the cool new way to elegantly specify recursive descent grammars. The Perl 6 language is defined in terms of a self modifying PEG language called Perl 6 Rules. Regexes are familiar to programmers of most modern programming languages. Pegex defines a simple PEG syntax, where all the terminals are regexes. This means that Pegex can be quite fast and powerful. Pegex attempts to be the simplest way to define new (or old) Domain Specific Languages (DSLs) that need to be used in several programming languages and environments. Things like JSON, YAML, Markdown etc. It also great for writing parsers/compilers that only need to work in one language. USAGE The Pegex.pm module itself (this module) is just a trivial way to use the Pegex framework. It is only intended for the simplest of uses. This module exports a single function, pegex, which takes a Pegex grammar string as input. You may also pass a receiver class name after the grammar. my $parser = pegex($grammar, 'MyReceiver'); The pegex function returns a Pegex::Parser object, on which you would typically call the parse() method, which (on success) will return a data structure of the parsed data. See Pegex::API for more details. SEE ALSO * Pegex::Overview * Pegex::API * Pegex::Syntax * Pegex::Tutorial * Pegex::Resources * http://github.com/ingydotnet/pegex-pm * irc://freenode.net#pegex AUTHOR Ingy döt Net COPYRIGHT AND LICENSE Copyright 2010-2015. Ingy döt Net. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. 
See http://www.perl.com/perl/misc/Artistic.html Pegex-0.60/t/0000755000175000017500000000000012462227077011461 5ustar ingyingyPegex-0.60/t/repeat.t0000644000175000017500000000033612462227077013130 0ustar ingyingyuse Test::More; use Pegex; my $parser = pegex('a: /*?(x+)*/'); is $parser->parse('xxxx')->{a}, 'xxxx', 'First parse works'; is $parser->parse('xxxx')->{a}, 'xxxx', 'Second parse works'; done_testing; Pegex-0.60/t/TestAST.pm0000644000175000017500000000022512462227077013305 0ustar ingyingypackage TestAST; use Pegex::Base; extends 'Pegex::Tree'; sub got_zero { return 0 }; sub got_empty { return '' }; sub got_undef { return undef } 1; Pegex-0.60/t/testml-compiler.t0000644000175000017500000000062112462227077014765 0ustar ingyingy# DO NOT EDIT # # This file was generated by TestML::Setup (0.50) # # > perl -MTestML::Setup -e setup test/testml.yaml use strict; use lib (-e 't' ? 't' : 'test'), 'inc'; use File::Spec; use TestML; use TestML::Compiler::Lite; use TestMLBridge; TestML->new( testml => File::Spec->catfile(qw{testml compiler.tml}), bridge => 'TestMLBridge', compiler => 'TestML::Compiler::Lite', )->run; Pegex-0.60/t/export-api.t0000644000175000017500000000140112462227077013732 0ustar ingyingy# BEGIN { $Pegex::Parser::Debug = 1 } use Test::More tests => 8; use Pegex; ok defined(&pegex), 'pegex is exported'; my $parser1 = pegex("foo: \n"); is ref($parser1), 'Pegex::Parser', 'pegex returns a Pegex::Parser object'; is $parser1->grammar->tree->{'+toprule'}, 'foo', 'pegex() contains a grammar with a compiled tree'; my $parser2 = pegex(<<'...'); number: /+/ ... eval { $parser2->parse('123'); pass '$parser2->parse worked'; }; fail $@ if $@; is ref $parser2, 'Pegex::Parser', 'grammar property is Pegex::Parser object'; my $tree2 = $parser2->grammar->tree; ok $tree2, 'Grammar object has tree'; ok ref($tree2), 'Grammar object is compiled to a tree'; is $tree2->{'+toprule'}, 'number', '_FIRST_RULE is set correctly'; Pegex-0.60/t/TestMLBridge.pm0000644000175000017500000000412312462227077014304 0ustar ingyingy# BEGIN { $Pegex::Parser::Debug = 1 } use strict; use warnings; package TestMLBridge; use base 'TestML::Bridge'; use TestML::Util; use Pegex; use Pegex::Compiler; use Pegex::Bootstrap; use Pegex::Tree; use Pegex::Tree::Wrap; use TestAST; use YAML::XS; sub compile { my ($self, $grammar) = @_; my $tree = Pegex::Compiler->new->parse($grammar->value)->tree; delete $tree->{'+toprule'}; delete $tree->{'_'}; delete $tree->{'__'}; return native $tree; } sub bootstrap_compile { my ($self, $grammar) = @_; my $tree = Pegex::Bootstrap->new->parse($grammar->value)->tree; delete $tree->{'+toprule'}; delete $tree->{'_'}; delete $tree->{'__'}; return native $tree; } sub compress { my ($self, $grammar) = @_; $grammar = $grammar->value; $grammar =~ s/(?% % value; return str YAML::XS::Dump($tree); } sub clean { my ($self, $yaml) = @_; $yaml = $yaml->value; $yaml =~ s/^---\s//; $yaml =~ s/'(\d+)'/$1/g; $yaml =~ s/^- ~$/- /gm; return str $yaml; } sub parse_input { my ($self, $grammar, $input) = @_; my $parser = pegex($grammar->value); return native $parser->parse($input->value); } sub parse_to_tree { my ($self, $grammar, $input) = @_; require Pegex::Tree; $::testing = 0; # XXX my $parser = pegex($grammar->value, 'Pegex::Tree'); $parser->grammar->tree; # use XXX; XXX $parser->grammar->tree; $::testing = 1; # XXX return native $parser->parse($input->value); } sub parse_to_tree_wrap { my ($self, $grammar, $input) = @_; $::testing = 0; # XXX my $parser = pegex($grammar->value, 'Pegex::Tree::Wrap'); 
$parser->grammar->tree; $::testing = 1; # XXX return native $parser->parse($input->value); } sub parse_to_tree_test { my ($self, $grammar, $input) = @_; my $parser = pegex($grammar->value, 'TestAST'); return native $parser->parse($input->value); } 1; Pegex-0.60/t/grammar-api.t0000644000175000017500000000051312462227077014042 0ustar ingyingyuse Test::More tests => 1; package MyGrammar1; use Pegex::Base; extends 'Pegex::Grammar'; has start_rules => []; use constant text => <<'...'; foo: /xyz/ bar: /abc/ | baz: /def/ ... package main; my $g1 = MyGrammar1->new; is $g1->tree->{'+toprule'}, 'foo', 'MyGrammar1 compiled a tree from its text'; Pegex-0.60/t/testml/0000755000175000017500000000000012462227077012771 5ustar ingyingyPegex-0.60/t/testml/optimize.tml0000644000175000017500000000034612462227077015352 0ustar ingyingy%TestML 0.1.0 # XXX Skipping this test for now. Might need a %Skip in TestML '1' == '1' # *grammar.compile.optimize.yaml.clean == *yaml === Question Mark Expansion --- SKIP --- grammar a: /(:foo)/ --- yaml a: .rgx: /(?:foo)/ Pegex-0.60/t/testml/tree-pegex.tml0000644000175000017500000000233512462227077015557 0ustar ingyingy%TestML 0.1.0 Plan = 2 Label = '$BlockLabel - Pegex::Tree' parse_to_tree(*grammar, *input).yaml.clean == *tree Label = '$BlockLabel - Pegex::Tree::Wrap' parse_to_tree_wrap(*grammar, *input).yaml.clean == *wrap Label = '$BlockLabel - TestAST' parse_to_tree_test(*grammar, *input).yaml.clean == *ast === Part of Pegex Grammar --- grammar \# This is the Pegex grammar for Pegex grammars! grammar: ( * )+ * rule_definition: /*/ /*/ rule_name: /(*)/ comment: // line: /*/ rule_line: /()/ --- input \# This is the Pegex grammar for Pegex grammars! grammar: ( * )+ * rule_definition: /*/ /*/ --- tree - - - [] - - grammar - ( * )+ * - - [] - - rule_definition - /*/ /*/ - [] --- wrap grammar: - - - [] - rule_definition: - rule_name: grammar - rule_line: ( * )+ * - - [] - rule_definition: - rule_name: rule_definition - rule_line: /*/ /*/ - [] Pegex-0.60/t/testml/error.tml0000644000175000017500000000512112462227077014637 0ustar ingyingy%TestML 0.1.0 parse_input(*grammar, *input).Catch ~~ *error === Error fails at furthest match # XXX This one not testing much. --- grammar a: b+ c b: /b/ c: /c/ --- input bbbbddddd --- error: ddddd\n ### TODO ### # === Pegex: Illegal meta rule # --- grammar # %grammar Test # %foobar Quark # a: /a+/ # --- input # aaa # --- error: Illegal meta rule === Pegex: Rule header syntax error --- grammar a|: /a+/ --- input aaa --- error: Rule header syntax error === Pegex: Rule ending syntax error --- grammar a: /a+/ | --- input aaa --- error: Rule ending syntax error === Pegex: Illegal rule modifier --- grammar a: /a+/ b: ^1-2 --- input aaa --- error: Illegal rule modifier === Pegex: Missing > in rule reference --- grammar a: /a+/ b: ! 
in rule reference === Pegex: Missing < in rule reference --- grammar a: /a+/ b: !a>1-2 --- input aaa --- error: Rule ending syntax error # --- error: Missing < in rule reference === Pegex: Illegal character in rule quantifier --- grammar a: /a+/ b: !a^1-2 --- input aaa --- error: Rule ending syntax error # --- error: Illegal character in rule quantifier === Pegex: Unprotected rule name with numeric quantifier --- grammar a: /a+/ b: !a1-2 --- input aaa --- error: Parse document failed for some reason # --- error: Rule ending syntax error # --- error: Unprotected rule name with numeric quantifier === Pegex: Runaway regular expression --- grammar a: /a+ --- input aaa --- error: Runaway regular expression === Pegex: Illegal group rule modifier --- grammar a: /a+/ b: !(a =)1-2 --- input aaa --- error: Illegal group rule modifier === Pegex: Runaway rule group --- grammar a: /a+/ b: .(a =1-2 --- input aaa --- error: Runaway rule group === Pegex: Illegal character in group rule quantifier --- grammar a: /a+/ b: .(a =)^2 --- input aaa --- error: Rule ending syntax error # --- error: Illegal character in group rule quantifier === Pegex: Multi-line error messages not allowed --- grammar a: /a+/ b: `This is legal` c: `This is illegal` --- input aaa --- error: Multi-line error messages not allowed === Pegex: Runaway error message --- grammar a: /a+/ b: `This is legal` c: `This is illegal --- input aaa --- error: Runaway error message === Pegex: Leading separator form (BOK) no longer supported --- grammar a: /a+/ %%% ~ --- input aaa --- error: Rule ending syntax error # --- error: Leading separator form (BOK) no longer supported === Pegex: Illegal characters in separator indicator --- grammar a: /a+/ %%~%%^%% ~ --- input aaa --- error: Rule ending syntax error # --- error: Illegal characters in separator indicator Pegex-0.60/t/testml/tree.tml0000644000175000017500000001125512462227077014452 0ustar ingyingy%TestML 0.1.0 Plan = 58 Label = '$BlockLabel - Pegex::Tree' parse_to_tree(*grammar, *input).yaml.clean == *tree Label = '$BlockLabel - Pegex::Tree::Wrap' parse_to_tree_wrap(*grammar, *input).yaml.clean == *wrap Label = '$BlockLabel - t::TestAST' parse_to_tree_test(*grammar, *input).yaml.clean == *ast === Single Regex - Single Capture --- grammar a: /x*(y*)z* EOL/ --- input xxxyyyyzzz --- tree yyyy --- wrap a: yyyy === Single Regex - Multi Capture --- grammar a: /(x*)(y*)(z*) EOL/ --- input xxxyyyyzzz --- tree - xxx - yyyy - zzz --- wrap a: - xxx - yyyy - zzz === Multi Group Regex --- grammar t: /.*(x).*(y).*(z).*/ --- input: aaaxbbbyccczddd --- tree - x - y - z --- wrap t: - x - y - z === Single Regex - No Capture --- grammar a: /x*y*z* EOL/ --- input xxxyyyyzzz --- tree [] --- wrap a: [] === Non capture Regex --- grammar a: b b* -c* .d* b: /b/ c: /c+/ d: /d/ --- input: bbccdd --- tree - [] --- wrap a: - [] === A subrule --- grammar a: b /(y+)/ EOL b: /(x+)/ --- input xxxyyyy --- tree - xxx - yyyy --- wrap a: - b: xxx - yyyy === Multi match regex in subrule --- grammar a: b b: /(x*)y*(z*) EOL/ --- input xxxyyyyzzz --- tree - xxx - zzz --- wrap a: b: - xxx - zzz === Any rule group --- grammar a: (b | c) b: /(bleh)/ c: /(x*)y*(z*) EOL?/ --- input xxxyyyyzzz --- tree - xxx - zzz --- wrap a: c: - xxx - zzz === + Modifier --- grammar a: ( b c )+ EOL b: /(x*)/ c: /(y+)/ --- input xxyyxy --- tree - - - xx - yy - - x - y --- wrap a: - - - b: xx - c: yy - - b: x - c: y === Wrap Pass and Skip --- grammar a: +b -c .d b: /(b+)/ c: /(c+)/ d: /(d+)/ --- input: bbccdd --- tree - b: bb - cc --- wrap a: - b: bb - c: cc 
=== Flat and Skip Multi --- grammar a: b* -c* .d* b: /(b)/ c: /(c)/ d: /(d)/ --- input: bccdd --- tree - - b - c - c --- wrap a: - - b: b - c: c - c: c === Skip Bracketed --- grammar a: b .(c d) b: /(b)/ c: /(c+)/ d: /(d+)/ --- input: bcccd --- tree b --- wrap a: b: b === Assertions --- grammar a: !b =c c b: /b/ c: /(c+)/ --- input: ccc --- tree ccc --- wrap a: c: ccc === Assertion not captured --- grammar a: =x x y EOL x: /(x+)/ y: /(y+)/ --- input xxxyyyy --- tree - xxx - yyyy --- wrap a: - x: xxx - y: yyyy === Empty regex group plus rule --- grammar a: b* c EOL b: /xxx/ c: /(yyy)/ --- input xxxyyy --- tree - [] - yyy --- wrap a: - [] - c: yyy === Rule to Rule to Rule --- grammar a: b b: c* c: d EOL d: /x(y)z/ --- input xyz xyz --- tree - - y - - y --- wrap a: b: - c: - d: y - c: - d: y === List and Separators --- grammar a: b c+ % d b: /(b)/ c: /(c+)/ d: /(d+)/ --- input: bcccdccddc --- tree - b - - ccc - d - cc - dd - c --- wrap a: - b: b - - c: ccc - d: d - c: cc - d: dd - c: c === Rule with Separator --- grammar a: c* % d c: /(c+)/ d: /d+/ --- input: cccdccddc --- tree - ccc - cc - c --- wrap a: - c: ccc - c: cc - c: c === List without Separators --- grammar a: b c* % d b b: /(b)/ c: /(c+)/ d: /d+/ --- input: bb --- tree - b - [] - b --- wrap a: - b: b - [] - b: b === Whitespace Matchers --- grammar TOP: / ws*( DOT ) - ( DOT* ) -/ --- input . .. --- tree - . - .. --- wrap TOP: - . - .. === Automatically Pass TOP --- grammar b: /(b)/ TOP: b c* c: /(c)/ --- input: bcc --- tree - b - - c - c --- wrap TOP: - b: b - - c: c - c: c === Empty Stars --- grammar a: ( b* c )+ b* b: /(b)/ c: /(c+)/ --- input: cc --- tree - - - [] - cc - [] --- wrap a: - - - [] - c: cc - [] === Exact Quantifier --- grammar a: 3 b: /(b)/ --- input: bbb --- tree - b - b - b --- wrap a: - b: b - b: b - b: b === Quantifier with Separator --- grammar a: 2-4 %% /,/ b: /(b)/ --- input: b,b,b, --- tree - b - b - b --- wrap a: - b: b - b: b - b: b === Quantifier with Separator, Trailing OK --- grammar a: 2-4 %% /,/ b: /(b)/ --- input: b,b,b, --- tree - b - b - b --- wrap a: - b: b - b: b - b: b === Quantifier on the Separator --- grammar a: 2-4 %% c* b: /(b)/ c: / COMMA / --- input: b,b,,,,bb, --- tree - b - [] - b - [] - b - [] - b --- wrap a: - b: b - [] - b: b - [] - b: b - [] - b: b === Tilde matching --- grammar a: - b + b+ b: /(b)/ c: / COMMA / --- input: b bb --- tree - b - - b - b --- wrap a: - b: b - - b: b - b: b === False Values --- grammar a: zero empty undef zero: /(b+)/ empty: /(c+)/ undef: /(d+)/ --- input: bbccdd --- ast - 0 - '' - === Wrap --- grammar a: b c d b: /(b+)/ c: /(c+)/ d: /(d+)/ --- input: bbccdd --- wrap a: - b: bb - c: cc - d: dd === 2 + 1 --- SKIP --- grammar a: 2 b b: /(b)/ --- input: bbb --- ast - b - b - b === Separated Group --- grammar a: (b | c)+ % d b: /(b)/ c: /(c)/ d: /(d)/ --- input: bdcdb --- ast - b - d - c - d - b === Separator Group --- grammar a: b+ %% (c | d) b: /(b)/ c: /(c)/ d: /(d)/ --- input: bdbcbc --- ast - b - d - b - c - b - c Pegex-0.60/t/testml/compiler-equivalence.tml0000644000175000017500000000514712462227077017627 0ustar ingyingy%TestML 0.1.0 Plan = 50 *grammar1.bootstrap_compile.yaml == *grammar2.bootstrap_compile.yaml *grammar1.compile.yaml == *grammar2.compile.yaml === Simple Test Case --- grammar1 a: /x/ --- grammar2 a: /x/ === Token Per Line --- SKIP: TODO --- grammar1 a: /b/ --- grammar2 a : /b/ === And over Or Precedence --- grammar1 a: b c | d --- grammar2 a: ( b c ) | d === And/Or Precedence with joining --- grammar1 a: b % c | d %% e --- grammar2 a: ( b 
% c ) | ( d %% e ) === And/Or Precedence with grouping --- grammar1 a: b c | ( d | e | f g h i ) --- grammar2 a: ( b c ) | ( d | e | ( f g h i ) ) === Extra Leading Pipe --- grammar1 a: | b | c --- grammar2 a: b | c === Extra Leading Pipe w/ Parens --- grammar1 a: ( | b | c ) --- grammar2 a: b | c === In-Line Comments --- grammar1 a: # test b c # not d /q/ # skipping to q % e # using e here... ; # comment -> semicolon test --- grammar2 a: b c /q/ % e === Dashes in names --- grammar1 a-a: b-b+ c-c b_b: c_c-c /d/ c-c_c: e --- grammar2 a_a: b_b+ c_c b-b: c-c_c /d/ c_c_c: e === Whitespace Tokens --- grammar1 a: - b + c - b: /- cat + dog -/ --- grammar2 a: _ b __ c _ b: /<_> cat <__> dog <_>/ === Regex Combination --- SKIP: TODO --- grammar1: a: /b/ /c/ --- grammar2: a: /bc/ === Separator Reduction a % b --- grammar1 x: a % b --- grammar2 x: a === Separator Reduction a %% b --- grammar1 x: a %% b --- grammar2 x: a b? === Separator Reduction a? % b --- grammar1 x: a? % b --- grammar2 x: a? === Separator Reduction a? %% b --- grammar1 x: a? %% b --- grammar2 x: a? b? === Separator Reduction a* % b --- grammar1 x: a* % b --- grammar2 x: (a -(b a)*)? === Separator Reduction a* %% b --- grammar1 x: a* %% b --- grammar2 x: (a -(b a)* b?)? === Separator Reduction a+ % b --- grammar1 x: a+ % b --- grammar2 x: a -(b a)* === Separator Reduction a+ %% b --- grammar1 x: a+ %% b --- grammar2 x: a -(b a)* b? === Separator Reduction 3 % b --- grammar1 x: 3 % b --- grammar2 x: a -(b a)2 === Separator Reduction 3 %% b --- grammar1 x: 3 %% b --- grammar2 x: a -(b a)2 b? === Separator Reduction 3+ % b --- grammar1 x: 3+ % b --- grammar2 x: a -(b a)2+ === Separator Reduction 3+ %% b --- grammar1 x: 3+ %% b --- grammar2 x: a -(b a)2+ b? === Separator Reduction 3-7 % b --- grammar1 x: 3-7 % b --- grammar2 x: a -(b a)2-6 === Separator Reduction 3-7 %% b --- grammar1 x: 3-7 %% b --- grammar2 x: a -(b a)2-6 b? === Leading WS, mulitline regex --- grammar1 a: /- 'foo' / --- grammar2 a: /- 'foo'/ === Leading WS, mulitline regex (part2) --- grammar1 a: /- 'foo' / --- grammar2 a: /-foo/ Pegex-0.60/t/testml/compiler.tml0000644000175000017500000000334012462227077015321 0ustar ingyingy%TestML 0.1.0 Plan = 23 Label = '$BlockLabel - Compiler output matches bootstrap?' *grammar.compile.yaml == *grammar.bootstrap_compile.yaml === Single Regex --- grammar a: /x/ === Single Reference --- grammar a: === Single Error --- grammar a: `b` === Simple All Group --- grammar a: /b/ === Simple Any Group --- grammar a: | === Bracketed All Group --- grammar a: ( /c/ ) === Bracketed Any Group --- grammar a: ( | /c/ | `d` ) === Bracketed Group in Unbracketed Group --- grammar a: ( | ) === And over Or Precedence --- SKIP --- grammar a: | | % === Multiple Rules --- grammar a: b: === Simple Grammar --- grammar a: ( * ) b: /x/ c: /y+/ === Semicolons OK --- grammar a: ; b: c: /d/; === Unbracketed --- grammar a: b: | === Not Rule --- grammar a: ! === Multiline --- grammar a: b: /c/ ; c: | ( /e/ ) | `g` === Various Groups --- grammar a: ( | ) b: ( | ) c: | ( ) | d: | ( ) | | ( `i` ) e: ( ) === Modifiers --- grammar a: ! = b: ( /c/ )+ c: ( /c/ )+ === Any Group Plus Rule --- grammar a: /w/ ( + | * ) ? === Equivalent --- grammar a: c: ! === Regex and Rule --- grammar a_b: /c/ === Quantified group --- grammar a: ( * | + )+ e: ( ! )? === Multiple Regex --- grammar b: ( /x/ )+ # XXX The \\# looks lie a testml bug. Should only need \#. 
=== Comments --- grammar \\# line comment a: b # end of line comment b: / foo # regex comment # regex line comment bar # regex line comment / === Comment between rules --- grammar a: b \\# comment b: c Pegex-0.60/t/testml/compiler-checks.tml0000644000175000017500000001254412462227077016565 0ustar ingyingy%TestML 0.1.0 Plan = 74 Diff = 1 Label = "$BlockLabel (bootstrap compile)" *grammar.bootstrap_compile.yaml.clean == *yaml Label = "$BlockLabel (pegex compile)" *grammar.compile.yaml.clean == *yaml === Empty Grammar --- grammar --- yaml {} === Whitespace Tokens --- grammar a: - b+ + c - b: /- 'cat' + 'dog' -/ --- yaml a: .all: - .ref: _ - +min: 1 .ref: b - .ref: __ - .ref: c - .ref: _ b: .rgx: <_>cat<__>dog<_> === Simple Grammar --- grammar a: ( b c* )+ b: /x/ c: x --- yaml a: +min: 1 .all: - .ref: b - +min: 0 .ref: c b: .rgx: x c: .ref: x === Dash in Rule Names --- grammar a-b: c-d --- yaml a_b: .ref: c_d === Single Rule Reference --- grammar a: x --- yaml a: .ref: x === Single Rule brackets --- grammar a: --- yaml a: .ref: x === All Rules --- grammar a: x y z --- yaml a: .all: - .ref: x - .ref: y - .ref: z === Any Rules --- grammar a: x | y | z --- yaml a: .any: - .ref: x - .ref: y - .ref: z === Any Rules with Leading Pipe --- grammar a: | x | y | z --- yaml a: .any: - .ref: x - .ref: y - .ref: z === Separator Syntax --- grammar a: b+ % c | d* %% e --- yaml a: .any: - .all: - .ref: b - +min: 0 -flat: 1 .all: - .ref: c - .ref: b - +max: 1 .all: - .ref: d - +min: 0 -flat: 1 .all: - .ref: e - .ref: d - +max: 1 .ref: e === Complex All/Any Precedence --- grammar a: b c | ( d | e | f* % g h ) i --- yaml a: .any: - .all: - .ref: b - .ref: c - .all: - .any: - .ref: d - .ref: e - .all: - +max: 1 .all: - .ref: f - +min: 0 -flat: 1 .all: - .ref: g - .ref: f - .ref: h - .ref: i === Single Rule With Trailing Quantifier --- grammar a: x* --- yaml a: +min: 0 .ref: x === Single Rule With Trailing Quantifier (no angles) --- grammar a: x* --- yaml a: +min: 0 .ref: x === Single Rule With Leading Assertion --- grammar a: =x --- yaml a: +asr: 1 .ref: x === Negative and Positive Assertion --- grammar a: !b =c --- yaml a: .all: - +asr: -1 .ref: b - +asr: 1 .ref: c === Single Regex --- grammar a: /x/ --- yaml a: .rgx: x === Quoted Regex --- grammar a: '*** + - bar ' --- yaml a: .rgx: '\*\*\*\ \ \ \+\ \-\ bar\ ' === Quoted String in Regex --- grammar a: /('(foo*)')*/ --- yaml a: .rgx: (\(foo\*\))* === Single Error --- grammar a: `x` --- yaml a: .err: x === Skip and Wrap Marker --- grammar a: .b +c+ -d? --- yaml a: .all: - -skip: 1 .ref: b - +min: 1 -wrap: 1 .ref: c - +max: 1 -flat: 1 .ref: d === Unbracketed All Group --- grammar a: /x/ y --- yaml a: .all: - .rgx: x - .ref: y === Unbracketed Any Group --- grammar a: /x/ | y | `z` --- yaml a: .any: - .rgx: x - .ref: y - .err: z === Any Group with Leading Pipe --- grammar a: ( | b | c ) --- yaml a: .any: - .ref: b - .ref: c === Bracketed All Group --- grammar a: ( x y ) --- yaml a: .all: - .ref: x - .ref: y === Bracketed Group With Trailing Modifier --- grammar a: ( x y )? 
--- yaml a: +max: 1 .all: - .ref: x - .ref: y === Bracketed Group With Leading Modifier --- grammar a: .( =x y ) --- yaml a: -skip: 1 .all: - +asr: 1 .ref: x - .ref: y === Multiple Groups --- grammar a: ( x y ) ( z | /zzz/ ) --- yaml a: .all: - .all: - .ref: x - .ref: y - .any: - .ref: z - .rgx: zzz === List Separator --- grammar a: b | c+ %% /d/ --- yaml a: .any: - .ref: b - .all: - .ref: c - +min: 0 -flat: 1 .all: - .rgx: d - .ref: c - +max: 1 .rgx: d === Separators with Quantifiers --- grammar a: 2+ % c* d* %% 2-3 --- yaml a: .all: - .all: - .ref: b - +min: 1 -flat: 1 .all: - +min: 0 .ref: c - .ref: b - +max: 1 .all: - .ref: d - +min: 0 -flat: 1 .all: - +max: 3 +min: 2 .ref: e - .ref: d - +max: 1 +min: 2 .ref: e === All Quantifier Forms --- grammar a: b c? d* e+ 55 5+ 5-55 --- yaml a: .all: - .ref: b - +max: 1 .ref: c - +min: 0 .ref: d - +min: 1 .ref: e - +max: 55 +min: 55 .ref: f - +min: 5 .ref: g - +max: 55 +min: 5 .ref: h === Whitespace Tokens --- grammar a: - b+ + c - b: /- cat + dog -/ c: /+/ d: /+ kitty/ --- yaml a: .all: - .ref: _ - +min: 1 .ref: b - .ref: __ - .ref: c - .ref: _ b: .rgx: <_><__><_> c: .rgx: <__> d: .rgx: <__> === Whitespace in Regex --- grammar a: /* ({3}) / --- yaml a: .rgx: *({3}) # Drop support for -- === Dash and Plus as whitespace tokens --- grammar a: / - foo + bar+ -- baz / --- yaml a: .rgx: <_><__>+<__> === Directives --- grammar \%grammar foo \%version 1.2.3 --- yaml +grammar: foo +version: 1.2.3 === Multiple Duplicate Directives --- grammar \%grammar foo \%include bar \%include baz --- yaml +grammar: foo +include: - bar - baz === Meta Lines --- grammar \%grammar foo \%version 1.1.1 \%extends bar bar \%include bazzy a: /b/ --- yaml +extends: bar bar +grammar: foo +include: bazzy +version: 1.1.1 a: .rgx: b === Dash and Plus as whitespace tokens --- grammar a: / - foo + bar+ -- baz / --- yaml a: .rgx: <_><__>+<__> Pegex-0.60/t/api.t0000644000175000017500000000077612462227077012431 0ustar ingyingyuse Test::More; use Pegex::Parser; use Pegex::Grammar; use Pegex::Receiver; use Pegex::Input; my $p = Pegex::Parser->new( grammar => Pegex::Grammar->new, receiver => Pegex::Receiver->new, input => Pegex::Input->new, debug => 1, ); ok $p->grammar, 'grammar accessor works'; ok $p->receiver, 'receiver accessor works'; ok $p->input, 'input accessor works'; ok $p->debug, 'debug accessor works'; eval { Pegex::Parser->new }; ok $@ =~ /grammar required/, 'grammar is required'; done_testing; Pegex-0.60/t/release-pod-syntax.t0000644000175000017500000000045612462227077015377 0ustar ingyingy#!perl BEGIN { unless ($ENV{RELEASE_TESTING}) { require Test::More; Test::More::plan(skip_all => 'these tests are for release candidate testing'); } } # This file was automatically generated by Dist::Zilla::Plugin::PodSyntaxTests. use Test::More; use Test::Pod 1.41; all_pod_files_ok(); Pegex-0.60/t/testml-tree.t0000644000175000017500000000061512462227077014115 0ustar ingyingy# DO NOT EDIT # # This file was generated by TestML::Setup (0.50) # # > perl -MTestML::Setup -e setup test/testml.yaml use strict; use lib (-e 't' ? 
't' : 'test'), 'inc'; use File::Spec; use TestML; use TestML::Compiler::Lite; use TestMLBridge; TestML->new( testml => File::Spec->catfile(qw{testml tree.tml}), bridge => 'TestMLBridge', compiler => 'TestML::Compiler::Lite', )->run; Pegex-0.60/t/sample.t0000644000175000017500000000230512462227077013127 0ustar ingyingyuse Test::More; eval "use YAML::XS; 1" or plan skip_all => 'YAML::XS required'; plan tests => 1; my $grammar_text = <<'...'; contact: name_section phone_section address_section name_section: / 'Name' + / name EOL name: /(+)(+)/ phone_section: /Phone+/ phone_number: term address_section: /Address/ street_line city_line country_line? street_line: indent street EOL street: /*/ city_line: indent city EOL city: term country_line: indent country EOL country: term term: /( # NS is "non-space" * )/ indent: /{2}/ ... my $input = <<'...'; Name: Ingy Net Phone: 919-876-5432 Address: 1234 Main St Niceville OK ... my $want = <<'...'; ... use Pegex::Grammar; use Pegex::Receiver; use Pegex::Compiler; my $grammar = Pegex::Grammar->new( tree => Pegex::Compiler->new->compile($grammar_text)->tree, ); my $parser = Pegex::Parser->new( grammar => $grammar, receiver => Pegex::Receiver->new, debug => 1, ); my $ast1 = $parser->parse($input); pass 'parsed'; exit; my $got = YAML::XS::Dump($ast1); is $got, $want, 'It works'; Pegex-0.60/t/safe.t0000644000175000017500000000023412462227077012563 0ustar ingyingyuse Test::More; use Safe; BEGIN { Safe->new } use Pegex; pegex('a: /a/')->parse('a'); pass 'GitHub ingydotnet/jsony-pm issue #2 fixed'; done_testing; Pegex-0.60/t/flatten.t0000644000175000017500000000100012462227077013272 0ustar ingyingyuse Test::More tests => 1; use Pegex; my $grammar = <<'...'; a: (((b)))+ b: (c | d) c: /(x)/ d: /y/ ... { package R; use base 'Pegex::Receiver'; sub got_a { my ($self, $got) = @_; $self->flatten($got); $got; } sub got_b { my ($self, $got) = @_; [$got]; } sub got_c { my ($self, $got) = @_; [$got]; } } my $parser = pegex($grammar, 'R'); my $got = $parser->parse('xxx'); is join('', @$got), 'xxx', 'Array was flattened'; Pegex-0.60/t/000-compile-modules.t0000644000175000017500000000044112462227077015240 0ustar ingyingy# This test does a basic `use` check on all the code. use Test::More; use File::Find; sub test { s{^lib/(.*)\.pm$}{$1} or return; s{/}{::}g; use_ok $_; } $ENV{PERL_ZILD_TEST_000_COMPILE_MODULES} = 1; find { wanted => \&test, no_chdir => 1, }, 'lib'; done_testing; Pegex-0.60/t/parse.t0000644000175000017500000000043712462227077012764 0ustar ingyingy# $Pegex::Parser::Debug = 1; my $t; use lib ($t = -e 't' ? 't' : 'test'); use Test::More tests => 1; use Pegex; use Pegex::Input; $grammar_file = "$t/mice.pgx"; eval { pegex( Pegex::Input->new(file => $grammar_file) )->parse("3 blind mice\n") }; $@ ? fail $@ : pass "! works"; Pegex-0.60/t/testml-optimize.t0000644000175000017500000000062112462227077015013 0ustar ingyingy# DO NOT EDIT # # This file was generated by TestML::Setup (0.50) # # > perl -MTestML::Setup -e setup test/testml.yaml use strict; use lib (-e 't' ? 
't' : 'test'), 'inc'; use File::Spec; use TestML; use TestML::Compiler::Lite; use TestMLBridge; TestML->new( testml => File::Spec->catfile(qw{testml optimize.tml}), bridge => 'TestMLBridge', compiler => 'TestML::Compiler::Lite', )->run; Pegex-0.60/t/mice.pgx0000644000175000017500000000013412462227077013114 0ustar ingyingyphrase: !one number - things EOL one: /1/ number: DIGIT things: / 'blind' SPACE 'mice' / Pegex-0.60/t/testml-compiler-equivalence.t0000644000175000017500000000063512462227077017271 0ustar ingyingy# DO NOT EDIT # # This file was generated by TestML::Setup (0.50) # # > perl -MTestML::Setup -e setup test/testml.yaml use strict; use lib (-e 't' ? 't' : 'test'), 'inc'; use File::Spec; use TestML; use TestML::Compiler::Lite; use TestMLBridge; TestML->new( testml => File::Spec->catfile(qw{testml compiler-equivalence.tml}), bridge => 'TestMLBridge', compiler => 'TestML::Compiler::Lite', )->run; Pegex-0.60/t/testml-tree-pegex.t0000644000175000017500000000062312462227077015222 0ustar ingyingy# DO NOT EDIT # # This file was generated by TestML::Setup (0.50) # # > perl -MTestML::Setup -e setup test/testml.yaml use strict; use lib (-e 't' ? 't' : 'test'), 'inc'; use File::Spec; use TestML; use TestML::Compiler::Lite; use TestMLBridge; TestML->new( testml => File::Spec->catfile(qw{testml tree-pegex.tml}), bridge => 'TestMLBridge', compiler => 'TestML::Compiler::Lite', )->run; Pegex-0.60/t/testml-compiler-checks.t0000644000175000017500000000063012462227077016223 0ustar ingyingy# DO NOT EDIT # # This file was generated by TestML::Setup (0.50) # # > perl -MTestML::Setup -e setup test/testml.yaml use strict; use lib (-e 't' ? 't' : 'test'), 'inc'; use File::Spec; use TestML; use TestML::Compiler::Lite; use TestMLBridge; TestML->new( testml => File::Spec->catfile(qw{testml compiler-checks.tml}), bridge => 'TestMLBridge', compiler => 'TestML::Compiler::Lite', )->run; Pegex-0.60/t/function-rule.t0000644000175000017500000000136612462227077014446 0ustar ingyingyuse Test::More; use Pegex::Parser; { package G; use base 'Pegex::Grammar'; sub rule_a { my ($self, $parser, $input) = @_; return; } sub rule_b { my ($self, $parser, $buffer, $pos) = @_; return $parser->match_rule(3, ['aaa', $$buffer]); } use constant text => <<'...'; top: a | b ... } { package R; use base 'Pegex::Tree'; sub got_b { my ($self, $got) = @_; [reverse @$got]; } } my $parser = Pegex::Parser->new( grammar => G->new, receiver => R->new, # debug => 1, ); my $result = $parser->parse('xyz'); is scalar(@$result), 2, 'Got array of size 2'; is $result->[0], 'xyz', 'xyz is first'; is $result->[1], 'aaa', 'aaa is second'; done_testing; Pegex-0.60/t/testml-error.t0000644000175000017500000000061612462227077014310 0ustar ingyingy# DO NOT EDIT # # This file was generated by TestML::Setup (0.50) # # > perl -MTestML::Setup -e setup test/testml.yaml use strict; use lib (-e 't' ? 't' : 'test'), 'inc'; use File::Spec; use TestML; use TestML::Compiler::Lite; use TestMLBridge; TestML->new( testml => File::Spec->catfile(qw{testml error.tml}), bridge => 'TestMLBridge', compiler => 'TestML::Compiler::Lite', )->run; Pegex-0.60/t/testml.yaml0000644000175000017500000000067412462227077013664 0ustar ingyingysource_testml_dir: ../ext/pegex-tml local_testml_dir: ./testml test_file_prefix: testml- test_file_template: | [% testml_setup_comment -%] use strict; use lib (-e 't' ? 
't' : 'test'), 'inc'; use File::Spec; use TestML; use TestML::Compiler::Lite; use TestMLBridge; TestML->new( testml => File::Spec->catfile(qw{[% path.join(' ') %]}), bridge => 'TestMLBridge', compiler => 'TestML::Compiler::Lite', )->run; Pegex-0.60/xt/0000755000175000017500000000000012462227077011651 5ustar ingyingyPegex-0.60/xt/TestDevelPegex.pm0000644000175000017500000000450512462227077015103 0ustar ingyingypackage TestDevelPegex; use strict; use warnings; use File::Spec; use Test::More; use IO::All; use Time::HiRes qw(gettimeofday tv_interval); my $time; use base 'Exporter'; our @EXPORT = qw( pegex_parser pegex_parser_ast slurp test_grammar_paths gettimeofday tv_interval XXX ); use constant TEST_GRAMMARS => [ '../pegex-pgx/pegex.pgx', '../testml-pgx/testml.pgx', '../json-pgx/json.pgx', '../yaml-pgx/yaml.pgx', '../kwim-pgx/kwim.pgx', '../drinkup/share/drinkup.pgx', # '../SQL-Parser-Neo/pegex/pg-lexer.pgx', '../SQL-Parser-Neo/pegex/Pg.pgx', ]; sub pegex_parser { require Pegex::Parser; require Pegex::Pegex::Grammar; require Pegex::Tree::Wrap; my ($grammar) = @_; return Pegex::Parser->new( grammar => Pegex::Pegex::Grammar->new, receiver => Pegex::Tree::Wrap->new, ); } sub pegex_parser_ast { require Pegex::Parser; require Pegex::Pegex::Grammar; require Pegex::Pegex::AST; my ($grammar) = @_; return Pegex::Parser->new( grammar => Pegex::Pegex::Grammar->new, receiver => Pegex::Pegex::AST->new, ); } sub slurp { my ($path) = @_; return scalar io->file($path)->all; } sub test_grammar_paths { my @paths; for my $grammar_source (@{TEST_GRAMMARS()}) { my $grammar_file = check_grammar($grammar_source) or next; push @paths, $grammar_file; } plan skip_all => 'No local grammars found to test' unless @paths; return @paths; } #-----------------------------------------------------------------------------# sub check_grammar { my ($source) = @_; (my $file = $source) =~ s!.*/!!; my $xt = -e 'xt' ? 'xt' : File::Spec->catfile('test', 'devel'); my $path = File::Spec->catfile('.', $xt, 'grammars', $file); if (-e $source) { if (not -e $path) { diag "$path not found. Copying from $source\n"; copy_grammar($source, $path); } elsif (slurp($source) ne slurp($path)) { diag "$path is out of date. Copying from $source\n"; copy_grammar($source, $path); } } return -e $path ? $path : undef; } sub copy_grammar { my ($source, $target) = @_; return unless -e $source; io->file($target)->assert->print(slurp($source)); } END { done_testing; } 1; Pegex-0.60/xt/speed.t0000644000175000017500000000071412462227077013140 0ustar ingyingyuse strict; use warnings; use Test::More; use lib -e 'xt' ? 'xt' : 'test/devel'; use TestDevelPegex; for my $grammar (test_grammar_paths) { my $parser = pegex_parser; my $input = slurp($grammar); my $timer = [gettimeofday]; my $result = eval { $parser->parse($input) }; my $time = tv_interval($timer); if ($result) { pass "$grammar parses in $time seconds"; } else { fail "$grammar failed to parse $@"; } } Pegex-0.60/xt/compilers.t0000644000175000017500000000121212462227077014027 0ustar ingyingy# BEGIN { $TestML::Test::Differences = 1 } # BEGIN { $Pegex::Parser::Debug = 1 } # BEGIN { $Pegex::Bootstrap = 1 } use strict; use warnings; use Test::More; use lib -e 'xt' ? 
'xt' : 'test/devel'; use TestDevelPegex; use Pegex::Bootstrap; use Pegex::Compiler; use YAML::XS; for my $grammar (test_grammar_paths) { my $expected = eval { Dump(Pegex::Bootstrap->new->parse(slurp($grammar))->tree); } or next; my $got = eval { Dump(Pegex::Bootstrap->new->parse(slurp($grammar))->tree); } or die "$grammar failed to compile: $@"; is $got, $expected, "Bootstrap compile matches normal compile for $grammar"; } Pegex-0.60/xt/grammars/0000755000175000017500000000000012462227077013462 5ustar ingyingyPegex-0.60/xt/grammars/swim.pgx0000644000175000017500000001007112462227077015160 0ustar ingyingy# Pegex grammar for the Swim markup language # # Copyright 2014. Ingy döt Net # %grammar swim %version 0.0.1 document: block-top* block-top: | block-blank | block-comment | line-comment | block-rule | block-head | block-code | block-pref | block-list | block-title | block-verse | block-para block-blank: line-blank block-comment: / HASH HASH HASH EOL ( (: ANY*? EOL)*? ) HASH HASH HASH EOL line-blank? / line-comment: / HASH SPACE? ( ANY*? ) EOL line-blank? / block-rule: / DASH{4} EOL line-blank? / block-head: / ( EQUAL{1,4} ) SPACE+ (: ( ANY+? ) SPACE+ EQUAL+ EOL | ( ANY+ EOL (: [^ WS ] ANY* EOL )* [^ WS ] ANY*? ) SPACE+ EQUAL+ EOL | ( ANY+ EOL (: [^ WS ] ANY* EOL )*) (= [ marker-block-start ] | EOL | EOS) ) line-blank? / block-code: / BACK BACK BACK / block-pref: / ( (: line-blank* SPACE SPACE ANY* EOL )+ ) line-blank? / block-pref: / ( (: line-blank* SPACE SPACE ANY* EOL )+ ) line-blank? / block-list: | block-list-bullet | block-list-number | block-list-data block-list-bullet: /( line-list-item-bullet (: line-list-item-bullet | line-blank | line-indented )* line-blank? )/ block-list-number: /( line-list-item-number (: line-list-item-number | line-blank | line-indented )* line-blank? )/ block-list-data: /( line-list-item-data (: line-list-item-data | line-blank | line-indented )* )/ line-list-item-bullet: / STAR SPACE ANY* EOL / line-list-item-number: / PLUS SPACE ANY* EOL / line-list-item-data: / DASH SPACE ANY* EOL / block-list-item: ( | block-blank | block-comment | line-comment | block-head | block-pref | block-list | block-title | block-verse | block-para )* line-indented: / SPACE SPACE ANY* EOL / block-title: / ( text-line ) EQUAL{3,} EOL (: line-blank ( text-line ) (= line-blank | EOS ) )? line-blank? / block-verse: / DOT EOL ( text-line+ ) line-blank? / block-para: / ( text-line+ ) line-blank? / text-markup: phrase-markup+ phrase-markup: | phrase-text | marker-escape | phrase-func | phrase-code | phrase-bold | phrase-emph | phrase-del | phrase-under | phrase-hyper | phrase-link | marker-next marker-escape: / BACK ( ANY ) / phrase-text: / ( (: (! [ marker-phrase-start ] |https? COLON ) ALL)+ ) / phrase-code: / marker-code ( [^ marker-code]*? ) marker-code / phrase-func: / marker-func-start ( [^ marker-func-end]+ ) marker-func-end / phrase-bold: / marker-bold (= NS [^ marker-bold]) / ( !marker-bold phrase-markup )+ marker-bold phrase-emph: / marker-emph (= NS [^ marker-emph]) / ( !marker-emph phrase-markup )+ marker-emph phrase-del: / marker-del (= NS) (! marker-del) / ( !marker-del phrase-markup )+ marker-del phrase-under: / marker-under (= NS) (! marker-under) / ( !marker-under phrase-markup )+ marker-under phrase-hyper: | phrase-hyper-named | phrase-hyper-explicit | phrase-hyper-implicit phrase-hyper-named: / DOUBLE ( [^ DOUBLE ]+ ) DOUBLE LSQUARE (https?: NS*? ) RSQUARE / phrase-hyper-explicit: / LSQUARE (https?: NS*? ) RSQUARE / phrase-hyper-implicit: /(https? 
COLON NS+)/ phrase-link: | phrase-link-named | phrase-link-plain phrase-link-named: / DOUBLE ( [^ DOUBLE ]+ ) DOUBLE LSQUARE ( NS*? ) RSQUARE / phrase-link-plain: / LSQUARE ( NS*? ) RSQUARE / marker-next: / ( ALL ) / text-line: / (: (! [ marker-block-start NL ] SPACE) ANY* NS ANY* (: EOL | EOS ) ) / line-blank: / (: SPACE* EOL ) / marker-block-start: / marker-pref marker-list marker-head marker-comment / marker-phrase-start: / marker-func-start marker-code marker-bold marker-emph marker-del marker-link marker-esc / marker-pref: / SPACE / marker-list: / STAR / marker-head: / EQUAL / marker-comment: / HASH / marker-func-start: / LANGLE / marker-func-end: / RANGLE / marker-code: / GRAVE / marker-bold: / STAR / marker-emph: / SLASH / marker-del: / DASH DASH / marker-under: / UNDER / marker-link: / DOUBLE LSQUARE / marker-esc: / BACK / # vim: set lisp sw=2: Pegex-0.60/xt/grammars/yaml.pgx0000644000175000017500000002004012462227077015140 0ustar ingyingy#------------------------------------------------------------------------------ # Pegex Grammar for YAML 1.2 # # This is a PEG (top-down) grammar for the YAML 1.2 language. It is in the # Pegex format, and can be used to construct a YAML parser in any language # where Pegex has been ported to. (Currently Perl, Ruby and JavaScript). # # Compared to the official YAML spec, this grammar should be much easier to # read and understand. It will also be fully documented, and will attempt to # have a test suite that exercises every rule path. # # The overall intent of this is to have one working grammar that backs up a # full YAML framework implementation in every programming language where YAML # is used. If this is acheived, then a bug in YAML can be fixed in one place, # for every language's implementaion. # # This grammar will go further than just parsing correct YAML. It will also # parse for common YAML errors, and try to report the most useful error # messages. #------------------------------------------------------------------------------ # Notes: # - Indentation will be done with indent / ondent / undent # - Need to check some rules against spec for accuracy. # - Make the grammar strict as possible until justified. # - Need to look for common errors in the grammar, and report them. # - Need to have tests for known errors. %grammar yaml %version 0.0.1 #------------------------------------------------------------------------------ # High Level Constructs #------------------------------------------------------------------------------ # A YAML Stream is the top level rule, and accounts for the entirety of the # text being parsed. Basically, a stream is a set of zero or more documents, # but there can be ignorable comments on either side of an explicitly marked # document. NOTE: Not yet dealing with directives. yaml-stream: ignore-line* ( yaml-document ignore-line* )* # A YAML Document is a single node of any kind. It may start with an optional # explicit head marker, and may be terminated with an optional explicit foot # marker. yaml-document: document-head? top-node # It is important to make sure we are on a line boundary here: ignore-line? document-foot? # A top level node can be quite a few distinct things: top-node: node-prefix? ( | node-alias | flow-mapping | flow-sequence | block-sequence | block-mapping | block-scalar ) ( EOL? 
) #------------------------------------------------------------------------------ # Block Constructs #------------------------------------------------------------------------------ # This rule identifies all the block nodes: block-node: | block-sequence | block-mapping | block-scalar # A block sequence is an indented set of nodes each starting with a # dash+space: block-sequence: block-sequence-entry+ # TODO This needs to support and block-node: block-sequence-entry: / DASH SPACE+ block-scalar EOL / # A block mapping is an indented set of key / value pairs separated by # colon+space: block-mapping: block-indent block-mapping-pair+ block-undent # A block mapping pair is a key / value separated by colon+space: block-mapping-pair: block-ondent block-key block-mapping-separator block-value # block key scalar, has more limitations than a block value scalar. block-key: / block-scalar (= block-mapping-separator ) / # A block value can be any block or flow node: block-value: | flow-mapping | flow-sequence | block-node # A scalar in block form can take one of these 5 forms: block-scalar: / ( literal-scalar | folded-scalar | double-quoted-scalar | single-quoted-scalar | block-plain-scalar ) / #------------------------------------------------------------------------------ # Flow Constructs: #------------------------------------------------------------------------------ # A flow node can be any one of these 3 kinds: flow-node: | flow-sequence | flow-mapping | flow-scalar # A flow sequence is zero or more nodes, separated by commas, inside square # brackets. A trailing comma is allowed. flow-sequence: flow-sequence-start flow-sequence-entry* %% list-separator flow-sequence-end # A flow mapping is key / value pairs, separated by commas, inside curly # braces. A trailing comma is allowed. flow-mapping: flow-mapping-start flow-mapping-pair* %% list-separator flow-mapping-end # A flow scalar only has 3 basic forms: flow-scalar: / ( double-quoted-scalar | single-quoted-scalar | flow-plain-scalar ) / # A flow sequence entry is any flow node. This rule is an alias, and can maybe # go away later, but leaving this way now for clarity. flow-sequence-entry: flow-scalar # A flow mapping can have any node as key or value, but they must also be in # flow syntax. flow-mapping-pair: flow-node flow-mapping-separator flow-node # Starting and ending rules for flow collections: flow-sequence-start: /- '[' -/ flow-sequence-end: /- ']' -/ flow-mapping-start: /- '{' -/ flow-mapping-end: /- '}' -/ #------------------------------------------------------------------------------ # Scalar Constructs #------------------------------------------------------------------------------ # Literal scalar. # XXX Dummied out for now. literal-scalar: / '|' EOL 'XXX' / # Folded scalar. # XXX Dummied out for now. folded-scalar: / '>' EOL 'XXX' / # Double quoted scalar. # XXX Needs work. double-quoted-scalar: / DOUBLE [^ DOUBLE]* DOUBLE / # Single quoted scalar. # XXX Needs work. single-quoted-scalar: / SINGLE [^ SINGLE]* SINGLE / # Plain (unquoted) scalars can't start with syntax chars, and can't contain # colon+space. block-plain-scalar: / (! char-non-start) ANY+? (= COLON WS | EOL | EOS) / # Plain (unquoted) scalars in flow context are more restrictive than in block # context. flow-plain-scalar: / (! char-non-start) ANY+? 
(= [ chars-syntax COMMA ] | COLON SPACE | COMMA SPACE | EOL | EOS) / #------------------------------------------------------------------------------ # Other Constructs: #------------------------------------------------------------------------------ # block-indent: # This rule is written in code in the Grammar class. # block-ondent: # This rule is written in code in the Grammar class. # block-undent: # This rule is written in code in the Grammar class. # A YAML header is 3 dashes followed by spaces or a newline: document-head: / '---' (: SPACE+ | (?= EOL)) / # A YAML footer is 3 dots followed by a newline: document-foot: / '...' EOL / # A node prefix is a anchor and / or tag in any order. # XXX This construct is hard in PEG. Look for easier way. node-prefix: | node-anchor (SPACE+ node-tag)? | node-tag (SPACE+ node-anchor)? # An explicit node tag. # TODO This is very incomplete! node-tag: / BANG BANG? ( WORD+ ) / # A Node Anchor is a name for a node. Like '&this'. # TODO See spec for real definition. node-anchor: / '&' ( WORD+ ) / # A Node Alias is a reference to an anchored node. Like '*this'. node-alias: / '*' ( WORD+ ) / # Mapping key / value is always separated by ': ' (colon + space) flow-mapping-separator: / ':' (: SPACE+ | SPACE* (= EOL)) / block-mapping-separator: / ':' (: SPACE+ | SPACE* (= EOL)) / # List items separated by ',' (comma) # XXX Check spec if SPACE is needed list-separator: / ',' SPACE+ / # List of single chars that are YAML syntax (and thus must be avoided in # various contexts. chars-syntax: / AMP STAR HASH LCURLY RCURLY LSQUARE RSQUARE PERCENT / # YAML's Reserved Chars chars-reserved: / GRAVE AT / char-non-start: /[ chars-syntax chars-reserved ]/ #------------------------------------------------------------------------------ # Whitespace Rules: #------------------------------------------------------------------------------ # TODO Need to determine the - and + whitespace rule. # Ignore comments and whitespace until end of line. ignore-line: / ignore-text (= EOL) / # Ignorable text is spaces, tabs and a line comment. ignore-text: / (: comment-text | blank-text ) / # A '#' starts a comment until end of line. comment-text: / HASH ANY* / # Spaces and tabs. blank-text: / BLANK* / # Vim Helpers, until we get `pegex.vim` mode. # vim: set lisp sw=2: Pegex-0.60/xt/grammars/testml.pgx0000644000175000017500000000660112462227077015515 0ustar ingyingy%grammar testml %version 0.0.1 %include atom testml-document: code-section data-section? # General Tokens escape: / [0nt] / line: / ANY* EOL / blanks: / BLANK+ / blank-line: / BLANK* EOL / comment: / '#' line / ws: /(: BLANK | EOL | comment )/ # Strings quoted-string: | single-quoted-string | double-quoted-string single-quoted-string: /(: SINGLE ((: [^ BREAK BACK SINGLE ] | BACK SINGLE | BACK BACK )*?) SINGLE )/ double-quoted-string: /(: DOUBLE ((: [^ BREAK BACK DOUBLE] | BACK DOUBLE | BACK BACK | BACK escape )*?) DOUBLE )/ unquoted-string: /( [^ BLANKS BREAK HASH] (: [^ BREAK HASH]* [^ BLANKS BREAK HASH] )? )/ number: / ( DIGIT+ ) / # TestML Code Section code-section: ( | + | assignment-statement | code-statement )* assignment-statement: variable-name / WS+ '=' WS+ / code-expression ending variable-name: /( ALPHA WORD* )/ code-statement: code-expression assertion-call? 
ending ending: /(: ';' | EOL )/ | =ending2 ending2: /- '}'/ code-expression: code-object call-call* call-call: !assertion-call-test call-indicator code-object code-object: | function-object | point-object | string-object | number-object | call-object function-object: function-signature? function-start ( + | assignment-statement | code-statement )* /- '}'/ function-start: /- ( '{' ) -/ function-signature: /'(' -/ function-variables? /- ')'/ function-variables: function-variable+ % /- ',' -/ function-variable: /( ALPHA WORD* )/ point-object: /( '*' LOWER WORD* )/ string-object: quoted-string number-object: number call-object: call-name call-argument-list? call-name: user-call | core-call user-call: /( LOWER WORD* )/ core-call: /( UPPER WORD* )/ call-indicator: /(: '.' - | - '.' )/ call-argument-list: /'(' -/ call-argument* % /- ',' -/ /- ')'/ call-argument: code-expression assertion-call-test: / call-indicator (:EQ|OK|HAS) / assertion-call: | +assertion-eq | +assertion-ok | +assertion-has assertion-eq: | +assertion-operator-eq | +assertion-function-eq assertion-operator-eq: /+ '==' +/ code-expression assertion-function-eq: / call-indicator 'EQ(' / code-expression / ')' / assertion-ok: assertion-function-ok assertion-function-ok: / call-indicator ('OK') empty-parens? / assertion-has: +assertion-operator-has | +assertion-function-has assertion-operator-has: /+ '~~' +/ code-expression assertion-function-has: / call-indicator 'HAS(' / code-expression / ')' / empty-parens: /(: '(' - ')' )/ # TestML Data Section block-marker: '===' point-marker: '---' data-section: data-block* data-block: block-header .( blank-line | comment )* block-point* block-header: block-marker ( blanks block-label )? blank-line block-label: unquoted-string block-point: lines-point | phrase-point lines-point: point-marker blanks point-name blank-line point-lines point-lines: /( (: (! (: block-marker | point-marker ) SPACE WORD ) line )* )/ phrase-point: point-marker blanks point-name / COLON BLANK / point-phrase / EOL / /(: comment | blank-line )*/ point-name: /( user-point-name | core-point-name )/ user-point-name: / LOWER WORD* / core-point-name: / UPPER WORD* / point-phrase: unquoted-string # vim: sw=2 lisp: Pegex-0.60/xt/grammars/json.pgx0000644000175000017500000000223112462227077015151 0ustar ingyingy# A simple grammar for the simple JSON data language. # For parser implementations that use this grammar, see: # * https://github.com/ingydotnet/pegex-json-pm %grammar json %version 0.0.1 %include pegex-atoms json: - value - object: /- '{' -/ pair* % /- ',' -/ /- '}' -/ pair: string /- ':' -/ value array: /- '[' -/ value* % /- ',' -/ /- ']' -/ value: | string | number | object | array | true | false | null # string and number are interpretations of http://www.json.org/ string: / DOUBLE ( (: BACK (: # Backslash escapes [ DOUBLE # Double Quote BACK # Back Slash SLASH # Foreward Slash 'b' # Back Space 'f' # Form Feed 'n' # New Line 'r' # Carriage Return 't' # Horizontal Tab ] | 'u' HEX{4} # Unicode octet pair ) | [^ DOUBLE CONTROLS ] # Anything else )* ) DOUBLE / number: /( DASH? (: '0' | [1-9] DIGIT* ) (: DOT DIGIT* )? (: [eE] [ DASH PLUS ]? DIGIT+ )? 
)/ true: 'true' false: 'false' null: 'null' Pegex-0.60/xt/grammars/eyapp2pegex.pgx0000644000175000017500000000333512462227077016437 0ustar ingyingy# Main rule eyapp: +head +body +tail # "Atoms" E_CODE : /( (: [^ ]+ | # Most stuff (?1) # Nesting braces )* )/ E_PERCODE : / ~ / # bit of a hack E_BEGINCODE: / begin ~ ~ / E_IDENT : / ( (: | ) * ) ~ / E_LABEL : / + ~ / E_LITERAL : / ([^ ]+) ~ / E_NAME : / name ~ / E_nameident: / (: ~ ~ )? / E_OPTION : / ~ / E_PLUS : / ~ / E_PREC : / prec ~ / E_PLUS : / ~ / E_section : / ~ ~ / # Common rules: percode: E_PERCODE symbol: +literal | +token literal: E_LITERAL token: E_IDENT # Head section: head: decl* E_section decl: ( / [ ^ ]* / | +percode ) wsc # Rule section body: rules* E_section rules: +comments ~ +lhs +rhs ~ ~ lhs: / ~ ~ / rhs: + % / ~ ~ / rule: optname? ~ rhselt* ~ ( prec code? )? rhselt: symbol | +code | +comment | .( E_PREC token ) ### (not used in Pg.eyp) #| ( optname? rhseltwithid* ) #| ( rhselt ( # E_STAR # | E_OPTION # | E_PLUS # | ( ( E_STAR | E_PLUS ) symbol ) #) ) optname: E_NAME ( ( E_IDENT E_LABEL? ) | E_LABEL ) # (not used in Pg.eyp) prec: E_PREC ~ symbol code: E_CODE | E_BEGINCODE # Tail section: tail: / ( + ) / # whitespace wsc: / (: | )* / comment_null: / *? / comment: / ~ ( ) ~ / comments: comment*Pegex-0.60/xt/grammars/kwim.pgx0000644000175000017500000001007112462227077015150 0ustar ingyingy# Pegex grammar for the Kwim markup language # # Copyright 2014. Ingy döt Net # %grammar kwim %version 0.0.1 document: block-top* block-top: | block-blank | block-comment | line-comment | block-rule | block-head | block-code | block-pref | block-list | block-title | block-verse | block-para block-blank: line-blank block-comment: / HASH HASH HASH EOL ( (: ANY*? EOL)*? ) HASH HASH HASH EOL line-blank? / line-comment: / HASH SPACE? ( ANY*? ) EOL line-blank? / block-rule: / DASH{4} EOL line-blank? / block-head: / ( EQUAL{1,4} ) SPACE+ (: ( ANY+? ) SPACE+ EQUAL+ EOL | ( ANY+ EOL (: [^ WS ] ANY* EOL )* [^ WS ] ANY*? ) SPACE+ EQUAL+ EOL | ( ANY+ EOL (: [^ WS ] ANY* EOL )*) (= [ marker-block-start ] | EOL | EOS) ) line-blank? / block-code: / BACK BACK BACK / block-pref: / ( (: line-blank* SPACE SPACE ANY* EOL )+ ) line-blank? / block-pref: / ( (: line-blank* SPACE SPACE ANY* EOL )+ ) line-blank? / block-list: | block-list-bullet | block-list-number | block-list-data block-list-bullet: /( line-list-item-bullet (: line-list-item-bullet | line-blank | line-indented )* line-blank? )/ block-list-number: /( line-list-item-number (: line-list-item-number | line-blank | line-indented )* line-blank? )/ block-list-data: /( line-list-item-data (: line-list-item-data | line-blank | line-indented )* )/ line-list-item-bullet: / STAR SPACE ANY* EOL / line-list-item-number: / PLUS SPACE ANY* EOL / line-list-item-data: / DASH SPACE ANY* EOL / block-list-item: ( | block-blank | block-comment | line-comment | block-head | block-pref | block-list | block-title | block-verse | block-para )* line-indented: / SPACE SPACE ANY* EOL / block-title: / ( text-line ) EQUAL{3,} EOL (: line-blank ( text-line ) (= line-blank | EOS ) )? line-blank? / block-verse: / DOT EOL ( text-line+ ) line-blank? / block-para: / ( text-line+ ) line-blank? / text-markup: phrase-markup+ phrase-markup: | phrase-text | marker-escape | phrase-func | phrase-code | phrase-bold | phrase-emph | phrase-del | phrase-under | phrase-hyper | phrase-link | marker-next marker-escape: / BACK ( ANY ) / phrase-text: / ( (: (! [ marker-phrase-start ] |https? 
COLON ) ALL)+ ) / phrase-code: / marker-code ( [^ marker-code]*? ) marker-code / phrase-func: / marker-func-start ( [^ marker-func-end]+ ) marker-func-end / phrase-bold: / marker-bold (= NS [^ marker-bold]) / ( !marker-bold phrase-markup )+ marker-bold phrase-emph: / marker-emph (= NS [^ marker-emph]) / ( !marker-emph phrase-markup )+ marker-emph phrase-del: / marker-del (= NS) (! marker-del) / ( !marker-del phrase-markup )+ marker-del phrase-under: / marker-under (= NS) (! marker-under) / ( !marker-under phrase-markup )+ marker-under phrase-hyper: | phrase-hyper-named | phrase-hyper-explicit | phrase-hyper-implicit phrase-hyper-named: / DOUBLE ( [^ DOUBLE ]+ ) DOUBLE LSQUARE (https?: NS*? ) RSQUARE / phrase-hyper-explicit: / LSQUARE (https?: NS*? ) RSQUARE / phrase-hyper-implicit: /(https? COLON NS+)/ phrase-link: | phrase-link-named | phrase-link-plain phrase-link-named: / DOUBLE ( [^ DOUBLE ]+ ) DOUBLE LSQUARE ( NS*? ) RSQUARE / phrase-link-plain: / LSQUARE ( NS*? ) RSQUARE / marker-next: / ( ALL ) / text-line: / (: (! [ marker-block-start NL ] SPACE) ANY* NS ANY* (: EOL | EOS ) ) / line-blank: / (: SPACE* EOL ) / marker-block-start: / marker-pref marker-list marker-head marker-comment / marker-phrase-start: / marker-func-start marker-code marker-bold marker-emph marker-del marker-link marker-esc / marker-pref: / SPACE / marker-list: / STAR / marker-head: / EQUAL / marker-comment: / HASH / marker-func-start: / LANGLE / marker-func-end: / RANGLE / marker-code: / GRAVE / marker-bold: / STAR / marker-emph: / SLASH / marker-del: / DASH DASH / marker-under: / UNDER / marker-link: / DOUBLE LSQUARE / marker-esc: / BACK / # vim: set lisp sw=2: Pegex-0.60/xt/grammars/pg-lexer.pgx0000644000175000017500000011405212462227077015730 0ustar ingyingy### Based and converted from PostgreSQL's src/backend/parser/scan.l and ### ### src/interfaces/ecpg/preproc/parser.c ### ### Some comments retained from those sources, marked as #*. ### ### FIXME: Rename Atoms to use C_* syntax ### L_SPACE : /[ \t\n\r\f]/ L_HORIZ_SPACE : /[ \t\f]/ L_NEWLINE : /[\n\r]/ L_NON_NEWLINE : /[^\n\r]/ L_COMMENT : / -- * / L_EXT_COMMENT : /( (: (?> [^ ]+ ) | # Most stuff (no backtracking...) [^ ]+ | # Slash without a star [^ ]+ | # Star without a slash (?1) # Nesting comments )* )/ ws : / (: + | | | ) / ### NOTE ### ### SQL and Unicode have a bit of a shakey co-existence. SQL was designed ### with English phrases in mind, along with English digits and identifiers. ### Thus, Unicode isn't allowed everywhere, so that proper detection between ### ASCII and Unicode can be achieved. ### However, Perl can already properly detect Unicode naturally. So, instead ### of allowing strictest ANSI/ISO SQL in this case, we'll allow identifiers ### to be in UTF-8 without the need for the U& notation. L_IDENT_FIRST : / [ \p{Alphabetic} \x80-\xFF ] / L_IDENT_REST : / [ \p{Alnum} \x80-\xFF ] / L_IDENTIFIER : / * / #* We use exclusive states for quoted strings, extended comments, #* and to eliminate parsing troubles for numeric strings. #* Exclusive states: #* bit string literal #* extended C-style comments #* delimited identifiers (double-quoted identifiers) #* hexadecimal numeric string #* standard quoted strings #* extended quoted strings (support backslash escape sequences) #* $foo$ quoted strings #* quoted identifier with Unicode escapes #* quoted string with Unicode escapes #* Unicode surrogate pair in extended quoted string ### Okay, screw all of this flex hackery. ### This... is... PERL! 
*kicks flex down the well* ### Here's how it's going to work: I declare a RE that grabs everything, ### and it f'ing works! Fin. ### FIXME: Include various checks to produce errors on unterminated strings. #* SQL requires at least one newline in the whitespace separating #* string literals that are to be concatenated. Silly, but who are we #* to argue? Note that {whitespace_with_newline} should not have * after #* it, whereas {whitespace} should generally have a * after it... L_WHITESPACE_WITH_NEWLINE : / * * / L_HORIZ_WHITESPACE : / (: | ) / L_SPECIAL_WHITESPACE : / (: + | ) / L_QUOTECONTINUE : / / L_XBFULL : / [bB] ( [01]* ) / L_XHFULL : / [xX] ( * ) / L_XNFULL : / [nN] / =L_XQFULL # treat like a NCHAR keyword and not a full string L_XQFULL : / ( (: (> [^ ]+ ) | # Most stuff (no backtracking...) | # Double single-quotes # SQL-style concat (see above) )* ) / L_XEFULL : / [eE] ( (: (> [^ ]+ ) | # Most stuff (no backtracking...) | # Escaped quotes (which are technically "insecure", but we'll take them anyway) | # Double single-quotes [^ ]+ | # Any other escaped character # SQL-style concat (see above) )* ) / ### Ha! Perl REs can even process this one all in one bite, thanks to backreferences... L_XDOLQFULL : / ( ? ) ( [^ ]* ) \g1 / L_XDFULL : / ( [^ ]+ ) / ### Unicode escapes ### L_UESCAPE : / (i: UESCAPE ~ ( [^] ) ) / L_XUIFULL : / [uU] ~ / L_XUSFULL : / [uU] ~ / #* "self" is the set of chars that should be returned as single-character #* tokens. "op_chars" is the set of chars that can make up "Op" tokens, #* which can be one or more characters long (but if a single-char token #* appears in the "self" set, it is not to be returned as an Op). Note #* that the sets overlap, but each has some chars that are not in the other. #* #* If you change either set, adjust the character lists appearing in the #* rule for "operator"! L_OP_CHARS : / [ ] / L_TYPECAST : / / L_DOT_DOT : / / L_COLON_EQUALS : / / L_SELF : / [ ] / L_NON_MATH : / [ ] / #* we no longer allow unary minus in numbers. #* instead we pass it separately to parser. there it gets #* coerced via doNegate() -- Leon aug 20 1999 ### We aren't allowing non-English here, else all coder's brains would spontaneously explode ### at the prospect of making a Tibetian digit work with functions like int()... #L_DIGIT # Atom already defined L_INTEGER : / + / L_DECIMAL : / (: * + | + * (! ) ) / L_REAL : / (: | ) [Ee] [-+]? + / L_PARAM : / ( ) / #* %% ########################### ### Lexer token returns ### ########################### # At this point in the lexer, we have the tokens that the parser would process. # Obviously, a parser would expect its separation to processed like whitespace. # However, unlike a standard lexer/parser pair, a Pegex rule like "CREATE TABLE" # implies no whitespace in-between the keywords. Thus, we add ~ checks to # each of the "parser tokens". ### Constants ### BCONST : L_XBFULL ~ XCONST : L_XHFULL ~ SCONST : ( L_XQFULL | L_XEFULL | L_XUSFULL | L_XDOLQFULL ) ~ ICONST : / () / ~ FCONST : / (|) / ~ PARAM : L_PARAM ~ ### Operators ### #* Check for embedded slash-star or dash-dash; those #* are comment starts, so operator must stop there. #* Note that slash-star or dash-dash at the first #* character will match a prior rule, not this one. L_Op : / ( +? (?= | ) | {2,} | ) / #Op: ---code rule--- TYPECAST : L_TYPECAST ~ DOT_DOT : L_DOT_DOT ~ COLON_EQUALS : L_COLON_EQUALS ~ ### Self (single-character) tokens ### # These are normally defined as 'X' in a traditional parser, but no such syntax # exists in Pegex. 
We can't just use atoms or /X/ syntax, because we have to # include token separation. Thus, all of the tokens are defined here as P_* # tokens, and use a naming scheme that matches the Pegex atoms. # NOTE: We have to double-check that it doesn't end up as an Op before the # assignment. Self does take priority, but the Op rule will already invalidate # itself if it realizes that it's going to be a single-character L_SELF. P_COMMA : !Op C_COMMA ~; P_SEMI : !Op C_SEMI ~; P_COLON : !Op C_COLON ~; P_DOT : !Op C_DOT ~; P_PLUS : !Op C_PLUS ~; P_MINUS : !Op C_MINUS ~; P_SLASH : !Op C_SLASH ~; P_STAR : !Op C_STAR ~; P_CARET : !Op C_CARET ~; P_EQUAL : !Op C_EQUAL ~; P_PERCENT : !Op C_PERCENT ~; P_LPAREN : !Op C_LPAREN ~; P_RPAREN : !Op C_RPAREN ~; P_LSQUARE : !Op C_LSQUARE ~; P_RSQUARE : !Op C_RSQUARE ~; P_LANGLE : !Op C_LANGLE ~; P_RANGLE : !Op C_RANGLE ~; ### Keywords ### IDENT : ( L_XDFULL | L_XUIFULL | ( !L_KEYWORD L_IDENTIFIER ) ) ~ L_KEYWORD : / (i: ABORT|ABSOLUTE|ACCESS|ACTION|ADD|ADMIN|AFTER|AGGREGATE|ALL|ALSO|ALTER|ALWAYS|ANALYSE|ANALYZE|AND|ANY|ARRAY|AS|ASC|ASSERTION|ASSIGNMENT|ASYMMETRIC|AT|ATTRIBUTE|AUTHORIZATION| BACKWARD|BEFORE|BEGIN|BETWEEN|BIGINT|BINARY|BIT|BOOLEAN|BOTH|BY| CACHE|CALLED|CASCADE|CASCADED|CASE|CAST|CATALOG|CHAIN|CHAR|CHARACTER|CHARACTERISTICS|CHECK|CHECKPOINT|CLASS|CLOSE|CLUSTER|COALESCE|COLLATE|COLLATION|COLUMN|COMMENT|COMMENTS|COMMIT|COMMITTED|CONCURRENTLY|CONFIGURATION|CONNECTION|CONSTRAINT|CONSTRAINTS|CONTENT|CONTINUE|CONVERSION|COPY|COST|CREATE|CROSS|CSV|CURRENT|CURRENT_CATALOG|CURRENT_DATE|CURRENT_ROLE|CURRENT_SCHEMA|CURRENT_TIME|CURRENT_TIMESTAMP|CURRENT_USER|CURSOR|CYCLE| DATA|DATABASE|DAY|DEALLOCATE|DEC|DECIMAL|DECLARE|DEFAULT|DEFAULTS|DEFERRABLE|DEFERRED|DEFINER|DELETE|DELIMITER|DELIMITERS|DESC|DICTIONARY|DISABLE|DISCARD|DISTINCT|DO|DOCUMENT|DOMAIN|DOUBLE|DROP| EACH|ELSE|ENABLE|ENCODING|ENCRYPTED|END|ENUM|ESCAPE|EVENT|EXCEPT|EXCLUDE|EXCLUDING|EXCLUSIVE|EXECUTE|EXISTS|EXPLAIN|EXTENSION|EXTERNAL|EXTRACT| FALSE|FAMILY|FETCH|FIRST|FLOAT|FOLLOWING|FOR|FORCE|FOREIGN|FORWARD|FREEZE|FROM|FULL|FUNCTION|FUNCTIONS| GLOBAL|GRANT|GRANTED|GREATEST|GROUP| HANDLER|HAVING|HEADER|HOLD|HOUR| IDENTITY|IF|ILIKE|IMMEDIATE|IMMUTABLE|IMPLICIT|IN|INCLUDING|INCREMENT|INDEX|INDEXES|INHERIT|INHERITS|INITIALLY|INLINE|INNER|INOUT|INPUT|INSENSITIVE|INSERT|INSTEAD|INT|INTEGER|INTERSECT|INTERVAL|INTO|INVOKER|IS|ISNULL|ISOLATION| JOIN| KEY| LABEL|LANGUAGE|LARGE|LAST|LATERAL|LC_COLLATE|LC_CTYPE|LEADING|LEAKPROOF|LEAST|LEFT|LEVEL|LIKE|LIMIT|LISTEN|LOAD|LOCAL|LOCALTIME|LOCALTIMESTAMP|LOCATION|LOCK| MAPPING|MATCH|MAXVALUE|MINUTE|MINVALUE|MODE|MONTH|MOVE| NAME|NAMES|NATIONAL|NATURAL|NCHAR|NEXT|NO|NONE|NOT|NOTHING|NOTIFY|NOTNULL|NOWAIT|NULL|NULLIF|NULLS|NUMERIC| OBJECT|OF|OFF|OFFSET|OIDS|ON|ONLY|OPERATOR|OPTION|OPTIONS|OR|ORDER|OUT|OUTER|OVER|OVERLAPS|OVERLAY|OWNED|OWNER| PARSER|PARTIAL|PARTITION|PASSING|PASSWORD|PLACING|PLANS|POSITION|PRECEDING|PRECISION|PREPARE|PREPARED|PRESERVE|PRIMARY|PRIOR|PRIVILEGES|PROCEDURAL|PROCEDURE| QUOTE| RANGE|READ|REAL|REASSIGN|RECHECK|RECURSIVE|REF|REFERENCES|REINDEX|RELATIVE|RELEASE|RENAME|REPEATABLE|REPLACE|REPLICA|RESET|RESTART|RESTRICT|RETURNING|RETURNS|REVOKE|RIGHT|ROLE|ROLLBACK|ROW|ROWS|RULE| SAVEPOINT|SCHEMA|SCROLL|SEARCH|SECOND|SECURITY|SELECT|SEQUENCE|SEQUENCES|SERIALIZABLE|SERVER|SESSION|SESSION_USER|SET|SETOF|SHARE|SHOW|SIMILAR|SIMPLE|SMALLINT|SNAPSHOT|SOME|STABLE|STANDALONE|START|STATEMENT|STATISTICS|STDIN|STDOUT|STORAGE|STRICT|STRIP|SUBSTRING|SYMMETRIC|SYSID|SYSTEM| 
TABLE|TABLES|TABLESPACE|TEMP|TEMPLATE|TEMPORARY|TEXT|THEN|TIME|TIMESTAMP|TO|TRAILING|TRANSACTION|TREAT|TRIGGER|TRIM|TRUE|TRUNCATE|TRUSTED|TYPE|TYPES| UNBOUNDED|UNCOMMITTED|UNENCRYPTED|UNION|UNIQUE|UNKNOWN|UNLISTEN|UNLOGGED|UNTIL|UPDATE|USER|USING| VACUUM|VALID|VALIDATE|VALIDATOR|VALUE|VALUES|VARCHAR|VARIADIC|VARYING|VERBOSE|VERSION|VIEW|VOLATILE| WHEN|WHERE|WHITESPACE|WINDOW|WITH|WITHOUT|WORK|WRAPPER|WRITE| XML|XMLATTRIBUTES|XMLCONCAT|XMLELEMENT|XMLEXISTS|XMLFOREST|XMLPARSE|XMLPI|XMLROOT|XMLSERIALIZE| YEAR|YES| ZONE ) (! ) / ABORT : / (i: ABORT ) (! ) ~ / ABSOLUTE : / (i: ABSOLUTE ) (! ) ~ / ACCESS : / (i: ACCESS ) (! ) ~ / ACTION : / (i: ACTION ) (! ) ~ / ADD : / (i: ADD ) (! ) ~ / ADMIN : / (i: ADMIN ) (! ) ~ / AFTER : / (i: AFTER ) (! ) ~ / AGGREGATE : / (i: AGGREGATE ) (! ) ~ / ALL : / (i: ALL ) (! ) ~ / ALSO : / (i: ALSO ) (! ) ~ / ALTER : / (i: ALTER ) (! ) ~ / ALWAYS : / (i: ALWAYS ) (! ) ~ / ANALYZE : / (i: ANALYZE|ANALYSE ) (! ) ~ / # just combine these for the "tokens" AND : / (i: AND ) (! ) ~ / ANY : / (i: ANY ) (! ) ~ / ARRAY : / (i: ARRAY ) (! ) ~ / AS : / (i: AS ) (! ) ~ / ASC : / (i: ASC ) (! ) ~ / ASSERTION : / (i: ASSERTION ) (! ) ~ / ASSIGNMENT : / (i: ASSIGNMENT ) (! ) ~ / ASYMMETRIC : / (i: ASYMMETRIC ) (! ) ~ / AT : / (i: AT ) (! ) ~ / ATTRIBUTE : / (i: ATTRIBUTE ) (! ) ~ / AUTHORIZATION : / (i: AUTHORIZATION ) (! ) ~ / BACKWARD : / (i: BACKWARD ) (! ) ~ / BEFORE : / (i: BEFORE ) (! ) ~ / BEGIN : / (i: BEGIN ) (! ) ~ / BETWEEN : / (i: BETWEEN ) (! ) ~ / BIGINT : / (i: BIGINT ) (! ) ~ / BINARY : / (i: BINARY ) (! ) ~ / BIT : / (i: BIT ) (! ) ~ / BOOLEAN : / (i: BOOLEAN ) (! ) ~ / BOTH : / (i: BOTH ) (! ) ~ / BY : / (i: BY ) (! ) ~ / CACHE : / (i: CACHE ) (! ) ~ / CALLED : / (i: CALLED ) (! ) ~ / CASCADE : / (i: CASCADE ) (! ) ~ / CASCADED : / (i: CASCADED ) (! ) ~ / CASE : / (i: CASE ) (! ) ~ / CAST : / (i: CAST ) (! ) ~ / CATALOG : / (i: CATALOG ) (! ) ~ / CHAIN : / (i: CHAIN ) (! ) ~ / CHAR : / (i: CHAR ) (! ) ~ / CHARACTER : / (i: CHARACTER ) (! ) ~ / CHARACTERISTICS : / (i: CHARACTERISTICS ) (! ) ~ / CHECK : / (i: CHECK ) (! ) ~ / CHECKPOINT : / (i: CHECKPOINT ) (! ) ~ / CLASS : / (i: CLASS ) (! ) ~ / CLOSE : / (i: CLOSE ) (! ) ~ / CLUSTER : / (i: CLUSTER ) (! ) ~ / COALESCE : / (i: COALESCE ) (! ) ~ / COLLATE : / (i: COLLATE ) (! ) ~ / COLLATION : / (i: COLLATION ) (! ) ~ / COLUMN : / (i: COLUMN ) (! ) ~ / COMMENT : / (i: COMMENT ) (! ) ~ / COMMENTS : / (i: COMMENTS ) (! ) ~ / COMMIT : / (i: COMMIT ) (! ) ~ / COMMITTED : / (i: COMMITTED ) (! ) ~ / CONCURRENTLY : / (i: CONCURRENTLY ) (! ) ~ / CONFIGURATION : / (i: CONFIGURATION ) (! ) ~ / CONNECTION : / (i: CONNECTION ) (! ) ~ / CONSTRAINT : / (i: CONSTRAINT ) (! ) ~ / CONSTRAINTS : / (i: CONSTRAINTS ) (! ) ~ / CONTENT : / (i: CONTENT ) (! ) ~ / CONTINUE : / (i: CONTINUE ) (! ) ~ / CONVERSION : / (i: CONVERSION ) (! ) ~ / COPY : / (i: COPY ) (! ) ~ / COST : / (i: COST ) (! ) ~ / CREATE : / (i: CREATE ) (! ) ~ / CROSS : / (i: CROSS ) (! ) ~ / CSV : / (i: CSV ) (! ) ~ / CURRENT : / (i: CURRENT ) (! ) ~ / CURRENT_CATALOG : / (i: CURRENT_CATALOG ) (! ) ~ / CURRENT_DATE : / (i: CURRENT_DATE ) (! ) ~ / CURRENT_ROLE : / (i: CURRENT_ROLE ) (! ) ~ / CURRENT_SCHEMA : / (i: CURRENT_SCHEMA ) (! ) ~ / CURRENT_TIME : / (i: CURRENT_TIME ) (! ) ~ / CURRENT_TIMESTAMP : / (i: CURRENT_TIMESTAMP ) (! ) ~ / CURRENT_USER : / (i: CURRENT_USER ) (! ) ~ / CURSOR : / (i: CURSOR ) (! ) ~ / CYCLE : / (i: CYCLE ) (! ) ~ / DATA : / (i: DATA ) (! ) ~ / DATABASE : / (i: DATABASE ) (! ) ~ / DAY : / (i: DAY ) (! 
) ~ / DEALLOCATE : / (i: DEALLOCATE ) (! ) ~ / DEC : / (i: DEC ) (! ) ~ / DECIMAL : / (i: DECIMAL ) (! ) ~ / DECLARE : / (i: DECLARE ) (! ) ~ / DEFAULT : / (i: DEFAULT ) (! ) ~ / DEFAULTS : / (i: DEFAULTS ) (! ) ~ / DEFERRABLE : / (i: DEFERRABLE ) (! ) ~ / DEFERRED : / (i: DEFERRED ) (! ) ~ / DEFINER : / (i: DEFINER ) (! ) ~ / DELETE : / (i: DELETE ) (! ) ~ / DELIMITER : / (i: DELIMITER ) (! ) ~ / DELIMITERS : / (i: DELIMITERS ) (! ) ~ / DESC : / (i: DESC ) (! ) ~ / DICTIONARY : / (i: DICTIONARY ) (! ) ~ / DISABLE : / (i: DISABLE ) (! ) ~ / DISCARD : / (i: DISCARD ) (! ) ~ / DISTINCT : / (i: DISTINCT ) (! ) ~ / DO : / (i: DO ) (! ) ~ / DOCUMENT : / (i: DOCUMENT ) (! ) ~ / DOMAIN : / (i: DOMAIN ) (! ) ~ / DOUBLE : / (i: DOUBLE ) (! ) ~ / DROP : / (i: DROP ) (! ) ~ / EACH : / (i: EACH ) (! ) ~ / ELSE : / (i: ELSE ) (! ) ~ / ENABLE : / (i: ENABLE ) (! ) ~ / ENCODING : / (i: ENCODING ) (! ) ~ / ENCRYPTED : / (i: ENCRYPTED ) (! ) ~ / END : / (i: END ) (! ) ~ / ENUM : / (i: ENUM ) (! ) ~ / ESCAPE : / (i: ESCAPE ) (! ) ~ / EVENT : / (i: EVENT ) (! ) ~ / EXCEPT : / (i: EXCEPT ) (! ) ~ / EXCLUDE : / (i: EXCLUDE ) (! ) ~ / EXCLUDING : / (i: EXCLUDING ) (! ) ~ / EXCLUSIVE : / (i: EXCLUSIVE ) (! ) ~ / EXECUTE : / (i: EXECUTE ) (! ) ~ / EXISTS : / (i: EXISTS ) (! ) ~ / EXPLAIN : / (i: EXPLAIN ) (! ) ~ / EXTENSION : / (i: EXTENSION ) (! ) ~ / EXTERNAL : / (i: EXTERNAL ) (! ) ~ / EXTRACT : / (i: EXTRACT ) (! ) ~ / FALSE : / (i: FALSE ) (! ) ~ / FAMILY : / (i: FAMILY ) (! ) ~ / FETCH : / (i: FETCH ) (! ) ~ / FIRST : / (i: FIRST ) (! ) ~ / FLOAT : / (i: FLOAT ) (! ) ~ / FOLLOWING : / (i: FOLLOWING ) (! ) ~ / FOR : / (i: FOR ) (! ) ~ / FORCE : / (i: FORCE ) (! ) ~ / FOREIGN : / (i: FOREIGN ) (! ) ~ / FORWARD : / (i: FORWARD ) (! ) ~ / FREEZE : / (i: FREEZE ) (! ) ~ / FROM : / (i: FROM ) (! ) ~ / FULL : / (i: FULL ) (! ) ~ / FUNCTION : / (i: FUNCTION ) (! ) ~ / FUNCTIONS : / (i: FUNCTIONS ) (! ) ~ / GLOBAL : / (i: GLOBAL ) (! ) ~ / GRANT : / (i: GRANT ) (! ) ~ / GRANTED : / (i: GRANTED ) (! ) ~ / GREATEST : / (i: GREATEST ) (! ) ~ / GROUP : / (i: GROUP ) (! ) ~ / HANDLER : / (i: HANDLER ) (! ) ~ / HAVING : / (i: HAVING ) (! ) ~ / HEADER : / (i: HEADER ) (! ) ~ / HOLD : / (i: HOLD ) (! ) ~ / HOUR : / (i: HOUR ) (! ) ~ / IDENTITY : / (i: IDENTITY ) (! ) ~ / IF : / (i: IF ) (! ) ~ / ILIKE : / (i: ILIKE ) (! ) ~ / IMMEDIATE : / (i: IMMEDIATE ) (! ) ~ / IMMUTABLE : / (i: IMMUTABLE ) (! ) ~ / IMPLICIT : / (i: IMPLICIT ) (! ) ~ / IN : / (i: IN ) (! ) ~ / INCLUDING : / (i: INCLUDING ) (! ) ~ / INCREMENT : / (i: INCREMENT ) (! ) ~ / INDEX : / (i: INDEX ) (! ) ~ / INDEXES : / (i: INDEXES ) (! ) ~ / INHERIT : / (i: INHERIT ) (! ) ~ / INHERITS : / (i: INHERITS ) (! ) ~ / INITIALLY : / (i: INITIALLY ) (! ) ~ / INLINE : / (i: INLINE ) (! ) ~ / INNER : / (i: INNER ) (! ) ~ / INOUT : / (i: INOUT ) (! ) ~ / INPUT : / (i: INPUT ) (! ) ~ / INSENSITIVE : / (i: INSENSITIVE ) (! ) ~ / INSERT : / (i: INSERT ) (! ) ~ / INSTEAD : / (i: INSTEAD ) (! ) ~ / INT : / (i: INT ) (! ) ~ / INTEGER : / (i: INTEGER ) (! ) ~ / INTERSECT : / (i: INTERSECT ) (! ) ~ / INTERVAL : / (i: INTERVAL ) (! ) ~ / INTO : / (i: INTO ) (! ) ~ / INVOKER : / (i: INVOKER ) (! ) ~ / IS : / (i: IS ) (! ) ~ / ISNULL : / (i: ISNULL ) (! ) ~ / ISOLATION : / (i: ISOLATION ) (! ) ~ / JOIN : / (i: JOIN ) (! ) ~ / KEY : / (i: KEY ) (! ) ~ / LABEL : / (i: LABEL ) (! ) ~ / LANGUAGE : / (i: LANGUAGE ) (! ) ~ / LARGE : / (i: LARGE ) (! ) ~ / LAST : / (i: LAST ) (! ) ~ / LATERAL : / (i: LATERAL ) (! ) ~ / LC_COLLATE : / (i: LC_COLLATE ) (! ) ~ / LC_CTYPE : / (i: LC_CTYPE ) (! 
) ~ / LEADING : / (i: LEADING ) (! ) ~ / LEAKPROOF : / (i: LEAKPROOF ) (! ) ~ / LEAST : / (i: LEAST ) (! ) ~ / LEFT : / (i: LEFT ) (! ) ~ / LEVEL : / (i: LEVEL ) (! ) ~ / LIKE : / (i: LIKE ) (! ) ~ / LIMIT : / (i: LIMIT ) (! ) ~ / LISTEN : / (i: LISTEN ) (! ) ~ / LOAD : / (i: LOAD ) (! ) ~ / LOCAL : / (i: LOCAL ) (! ) ~ / LOCALTIME : / (i: LOCALTIME ) (! ) ~ / LOCALTIMESTAMP : / (i: LOCALTIMESTAMP ) (! ) ~ / LOCATION : / (i: LOCATION ) (! ) ~ / LOCK : / (i: LOCK ) (! ) ~ / MAPPING : / (i: MAPPING ) (! ) ~ / MATCH : / (i: MATCH ) (! ) ~ / MAXVALUE : / (i: MAXVALUE ) (! ) ~ / MINUTE : / (i: MINUTE ) (! ) ~ / MINVALUE : / (i: MINVALUE ) (! ) ~ / MODE : / (i: MODE ) (! ) ~ / MONTH : / (i: MONTH ) (! ) ~ / MOVE : / (i: MOVE ) (! ) ~ / NAME : / (i: NAME ) (! ) ~ / NAMES : / (i: NAMES ) (! ) ~ / NATIONAL : / (i: NATIONAL ) (! ) ~ / NATURAL : / (i: NATURAL ) (! ) ~ / NCHAR : / | (i: NCHAR) (! ) ~ / NEXT : / (i: NEXT ) (! ) ~ / NO : / (i: NO ) (! ) ~ / NONE : / (i: NONE ) (! ) ~ / NOT : / (i: NOT ) (! ) ~ / NOTHING : / (i: NOTHING ) (! ) ~ / NOTIFY : / (i: NOTIFY ) (! ) ~ / NOTNULL : / (i: NOTNULL ) (! ) ~ / NOWAIT : / (i: NOWAIT ) (! ) ~ / NULL : / (i: NULL ) (! ) ~ / NULLIF : / (i: NULLIF ) (! ) ~ / NULLS : / (i: NULLS ) (! ) ~ / NUMERIC : / (i: NUMERIC ) (! ) ~ / OBJECT : / (i: OBJECT ) (! ) ~ / OF : / (i: OF ) (! ) ~ / OFF : / (i: OFF ) (! ) ~ / OFFSET : / (i: OFFSET ) (! ) ~ / OIDS : / (i: OIDS ) (! ) ~ / ON : / (i: ON ) (! ) ~ / ONLY : / (i: ONLY ) (! ) ~ / OPERATOR : / (i: OPERATOR ) (! ) ~ / OPTION : / (i: OPTION ) (! ) ~ / OPTIONS : / (i: OPTIONS ) (! ) ~ / OR : / (i: OR ) (! ) ~ / ORDER : / (i: ORDER ) (! ) ~ / OUT : / (i: OUT ) (! ) ~ / OUTER : / (i: OUTER ) (! ) ~ / OVER : / (i: OVER ) (! ) ~ / OVERLAPS : / (i: OVERLAPS ) (! ) ~ / OVERLAY : / (i: OVERLAY ) (! ) ~ / OWNED : / (i: OWNED ) (! ) ~ / OWNER : / (i: OWNER ) (! ) ~ / PARSER : / (i: PARSER ) (! ) ~ / PARTIAL : / (i: PARTIAL ) (! ) ~ / PARTITION : / (i: PARTITION ) (! ) ~ / PASSING : / (i: PASSING ) (! ) ~ / PASSWORD : / (i: PASSWORD ) (! ) ~ / PLACING : / (i: PLACING ) (! ) ~ / PLANS : / (i: PLANS ) (! ) ~ / POSITION : / (i: POSITION ) (! ) ~ / PRECEDING : / (i: PRECEDING ) (! ) ~ / PRECISION : / (i: PRECISION ) (! ) ~ / PREPARE : / (i: PREPARE ) (! ) ~ / PREPARED : / (i: PREPARED ) (! ) ~ / PRESERVE : / (i: PRESERVE ) (! ) ~ / PRIMARY : / (i: PRIMARY ) (! ) ~ / PRIOR : / (i: PRIOR ) (! ) ~ / PRIVILEGES : / (i: PRIVILEGES ) (! ) ~ / PROCEDURAL : / (i: PROCEDURAL ) (! ) ~ / PROCEDURE : / (i: PROCEDURE ) (! ) ~ / QUOTE : / (i: QUOTE ) (! ) ~ / RANGE : / (i: RANGE ) (! ) ~ / READ : / (i: READ ) (! ) ~ / REAL : / (i: REAL ) (! ) ~ / REASSIGN : / (i: REASSIGN ) (! ) ~ / RECHECK : / (i: RECHECK ) (! ) ~ / RECURSIVE : / (i: RECURSIVE ) (! ) ~ / REF : / (i: REF ) (! ) ~ / REFERENCES : / (i: REFERENCES ) (! ) ~ / REINDEX : / (i: REINDEX ) (! ) ~ / RELATIVE : / (i: RELATIVE ) (! ) ~ / RELEASE : / (i: RELEASE ) (! ) ~ / RENAME : / (i: RENAME ) (! ) ~ / REPEATABLE : / (i: REPEATABLE ) (! ) ~ / REPLACE : / (i: REPLACE ) (! ) ~ / REPLICA : / (i: REPLICA ) (! ) ~ / RESET : / (i: RESET ) (! ) ~ / RESTART : / (i: RESTART ) (! ) ~ / RESTRICT : / (i: RESTRICT ) (! ) ~ / RETURNING : / (i: RETURNING ) (! ) ~ / RETURNS : / (i: RETURNS ) (! ) ~ / REVOKE : / (i: REVOKE ) (! ) ~ / RIGHT : / (i: RIGHT ) (! ) ~ / ROLE : / (i: ROLE ) (! ) ~ / ROLLBACK : / (i: ROLLBACK ) (! ) ~ / ROW : / (i: ROW ) (! ) ~ / ROWS : / (i: ROWS ) (! ) ~ / RULE : / (i: RULE ) (! ) ~ / SAVEPOINT : / (i: SAVEPOINT ) (! ) ~ / SCHEMA : / (i: SCHEMA ) (! 
) ~ / SCROLL : / (i: SCROLL ) (! ) ~ / SEARCH : / (i: SEARCH ) (! ) ~ / SECOND : / (i: SECOND ) (! ) ~ / SECURITY : / (i: SECURITY ) (! ) ~ / SELECT : / (i: SELECT ) (! ) ~ / SEQUENCE : / (i: SEQUENCE ) (! ) ~ / SEQUENCES : / (i: SEQUENCES ) (! ) ~ / SERIALIZABLE : / (i: SERIALIZABLE ) (! ) ~ / SERVER : / (i: SERVER ) (! ) ~ / SESSION : / (i: SESSION ) (! ) ~ / SESSION_USER : / (i: SESSION_USER ) (! ) ~ / SET : / (i: SET ) (! ) ~ / SETOF : / (i: SETOF ) (! ) ~ / SHARE : / (i: SHARE ) (! ) ~ / SHOW : / (i: SHOW ) (! ) ~ / SIMILAR : / (i: SIMILAR ) (! ) ~ / SIMPLE : / (i: SIMPLE ) (! ) ~ / SMALLINT : / (i: SMALLINT ) (! ) ~ / SNAPSHOT : / (i: SNAPSHOT ) (! ) ~ / SOME : / (i: SOME ) (! ) ~ / STABLE : / (i: STABLE ) (! ) ~ / STANDALONE : / (i: STANDALONE ) (! ) ~ / START : / (i: START ) (! ) ~ / STATEMENT : / (i: STATEMENT ) (! ) ~ / STATISTICS : / (i: STATISTICS ) (! ) ~ / STDIN : / (i: STDIN ) (! ) ~ / STDOUT : / (i: STDOUT ) (! ) ~ / STORAGE : / (i: STORAGE ) (! ) ~ / STRICT : / (i: STRICT ) (! ) ~ / STRIP : / (i: STRIP ) (! ) ~ / SUBSTRING : / (i: SUBSTRING ) (! ) ~ / SYMMETRIC : / (i: SYMMETRIC ) (! ) ~ / SYSID : / (i: SYSID ) (! ) ~ / SYSTEM : / (i: SYSTEM ) (! ) ~ / TABLE : / (i: TABLE ) (! ) ~ / TABLES : / (i: TABLES ) (! ) ~ / TABLESPACE : / (i: TABLESPACE ) (! ) ~ / TEMP : / (i: TEMP ) (! ) ~ / TEMPLATE : / (i: TEMPLATE ) (! ) ~ / TEMPORARY : / (i: TEMPORARY ) (! ) ~ / TEXT : / (i: TEXT ) (! ) ~ / THEN : / (i: THEN ) (! ) ~ / TIME : / (i: TIME ) (! ) ~ / TIMESTAMP : / (i: TIMESTAMP ) (! ) ~ / TO : / (i: TO ) (! ) ~ / TRAILING : / (i: TRAILING ) (! ) ~ / TRANSACTION : / (i: TRANSACTION ) (! ) ~ / TREAT : / (i: TREAT ) (! ) ~ / TRIGGER : / (i: TRIGGER ) (! ) ~ / TRIM : / (i: TRIM ) (! ) ~ / TRUE : / (i: TRUE ) (! ) ~ / TRUNCATE : / (i: TRUNCATE ) (! ) ~ / TRUSTED : / (i: TRUSTED ) (! ) ~ / TYPE : / (i: TYPE ) (! ) ~ / TYPES : / (i: TYPES ) (! ) ~ / UNBOUNDED : / (i: UNBOUNDED ) (! ) ~ / UNCOMMITTED : / (i: UNCOMMITTED ) (! ) ~ / UNENCRYPTED : / (i: UNENCRYPTED ) (! ) ~ / UNION : / (i: UNION ) (! ) ~ / UNIQUE : / (i: UNIQUE ) (! ) ~ / UNKNOWN : / (i: UNKNOWN ) (! ) ~ / UNLISTEN : / (i: UNLISTEN ) (! ) ~ / UNLOGGED : / (i: UNLOGGED ) (! ) ~ / UNTIL : / (i: UNTIL ) (! ) ~ / UPDATE : / (i: UPDATE ) (! ) ~ / USER : / (i: USER ) (! ) ~ / USING : / (i: USING ) (! ) ~ / VACUUM : / (i: VACUUM ) (! ) ~ / VALID : / (i: VALID ) (! ) ~ / VALIDATE : / (i: VALIDATE ) (! ) ~ / VALIDATOR : / (i: VALIDATOR ) (! ) ~ / VALUE : / (i: VALUE ) (! ) ~ / VALUES : / (i: VALUES ) (! ) ~ / VARCHAR : / (i: VARCHAR ) (! ) ~ / VARIADIC : / (i: VARIADIC ) (! ) ~ / VARYING : / (i: VARYING ) (! ) ~ / VERBOSE : / (i: VERBOSE ) (! ) ~ / VERSION : / (i: VERSION ) (! ) ~ / VIEW : / (i: VIEW ) (! ) ~ / VOLATILE : / (i: VOLATILE ) (! ) ~ / WHEN : / (i: WHEN ) (! ) ~ / WHERE : / (i: WHERE ) (! ) ~ / WHITESPACE : / (i: WHITESPACE ) (! ) ~ / WINDOW : / (i: WINDOW ) (! ) ~ / WITH : / (i: WITH ) (! ) ~ / WITHOUT : / (i: WITHOUT ) (! ) ~ / WORK : / (i: WORK ) (! ) ~ / WRAPPER : / (i: WRAPPER ) (! ) ~ / WRITE : / (i: WRITE ) (! ) ~ / XML : / (i: XML ) (! ) ~ / XMLATTRIBUTES : / (i: XMLATTRIBUTES ) (! ) ~ / XMLCONCAT : / (i: XMLCONCAT ) (! ) ~ / XMLELEMENT : / (i: XMLELEMENT ) (! ) ~ / XMLEXISTS : / (i: XMLEXISTS ) (! ) ~ / XMLFOREST : / (i: XMLFOREST ) (! ) ~ / XMLPARSE : / (i: XMLPARSE ) (! ) ~ / XMLPI : / (i: XMLPI ) (! ) ~ / XMLROOT : / (i: XMLROOT ) (! ) ~ / XMLSERIALIZE : / (i: XMLSERIALIZE ) (! ) ~ / YEAR : / (i: YEAR ) (! ) ~ / YES : / (i: YES ) (! ) ~ / ZONE : / (i: ZONE ) (! 
) ~ / Pegex-0.60/xt/grammars/drinkup.pgx0000644000175000017500000000255312462227077015663 0ustar ingyingy%grammar drinkup %version 0.0.1 recipe: cocktail description ingredients instructions? footers cocktail: / (: ~ )? # Leading whitespace? ( *?) # Cocktail name. Capture it. ~ # Trailing whitespace / description: / (*?) # Description text + (= * ) # Not the ingredients / # Ingredient parts ingredients: ingredient+ ingredient: / * * / number / * / unit / * / name / * / note? # probably need to attempt to parse 1 1/2 or 1 + 1/2 # cribbed from json-pgx, minus the exponent version because if you use that # in a recipe you should change your units! number: /( (: 0 | [1-9] * )? (: + )? (: ~ ? ~ + ~ ~ + )? )/ ounce: /(?i:ounces? | oz)/ tablespoon: /(?i:tbsp|tablespoons?|(?-i:T\s))/ teaspoon: /(?i:tsp|teaspoons?|(?-i:t\s))/ dash: /(?i:dash(:es)?)/ unit: (|||)/(?i: ~ of)?/ name: / (+?) ~ (= ) / note: / * (+?) * / instructions: / (*?) # instruction text (= ) # Not the footers ~ / footers: metadata+ metadata: / () * (+?) ~ / footer_name: / [-]+ / Pegex-0.60/xt/grammars/vic.pgx0000644000175000017500000001126212462227077014765 0ustar ingyingy%grammar vic %version 0.2.1 #COPYRIGHT: 2014 Vikas N Kumar . All Rights Reserved # mcu-select is necessary. program: mcu-select header* statement* EOS header: pragmas | comment mcu-select: /'PIC' BLANK+ ( mcu-types ) line-ending/ mcu-types: / ALPHA DIGIT+ ALPHA DIGIT+ / # the uc names are like P16F690 or P16F84 etc. pragmas: 'pragma' + (variable | name) (pragma-expression | name)? line-ending pragma-expression: name / EQUAL -/ (number-units | number | string) - comment: /- HASH ANY* EOL/ | blank-line blank-line: /- EOL/ ## overriding what pegex is doing here _: / BLANK* EOL?/ __: / BLANK+ EOL?/ line-ending: /- SEMI - EOL?/ statement: | comment | instruction | expression | conditional-statement | assert-statement | block block: named-block named-block: name anonymous-block anonymous-block: start-block statement* end-block start-block: /- LCURLY - EOL?/ end-block: /- RCURLY - EOL?/ instruction: name values line-ending name: - identifier-without-keyword - values: value* % list-separator value: - (string | number-units | number | array-element | variable | named-block | modifier-constant | modifier-variable | validated-variable) - expression: (assign-expr | unary-expr | declaration) line-ending unary-lhs: - unary-operator - variable - unary-rhs: - variable - unary-operator - unary-expr: unary-lhs | unary-rhs complement: - complement-operator - rhs-expr - nested-expr-value: start-nested-expr rhs-expr end-nested-expr expr-value: - (number | array-element | variable | number-units | complement | modifier-variable | nested-expr-value) - rhs-expr: expr-value+ % rhs-operator assign-expr: - variable - assign-operator - rhs-expr declaration: - variable - / EQUAL / - (constant | modifier-constant) array-element: variable start-array rhs-expr end-array comparison: expr-value compare-operator expr-value single-conditional: comparison | expr-value nested-conditional: start-nested-expr single-conditional end-nested-expr any-conditional: single-conditional | nested-conditional conditional-predicate: - (anonymous-block) - ('else' - (anonymous-block | conditional-statement)*)? 
- single-conditional-subject: any-conditional+ % logic-operator nested-conditional-subject: start-nested-expr single-conditional-subject end-nested-expr conditional-subject: single-conditional-subject | nested-conditional-subject conditional-statement: - /('if' | 'while')/ - conditional-subject - conditional-predicate line-ending? # special case assert-value: - (validated-variable | variable | number) - assert-comparison: assert-value compare-operator assert-value assert-condition: assert-comparison assert-message: list-separator - string assert-statement: name assert-condition - assert-message? - line-ending complement-operator: /( TILDE | BANG )/ unary-operator: /( PLUS PLUS | DASH DASH)/ assign-operator: /([ PLUS DASH PERCENT CARET STAR PIPE AMP SLASH]? EQUAL)/ | shift-assign-operator compare-operator: /([ BANG EQUAL LANGLE RANGLE] EQUAL | (: LANGLE | RANGLE ))/ logic-operator: /([ AMP PIPE]{2})/ math-operator: /([ PLUS DASH STAR SLASH PERCENT])/ shift-operator: /( LANGLE LANGLE | RANGLE RANGLE)/ shift-assign-operator: /( LANGLE LANGLE EQUAL | RANGLE RANGLE EQUAL)/ bit-operator: /([ PIPE CARET AMP])/ rhs-operator: math-operator | bit-operator | shift-operator # parentheses start-nested-expr: /- LPAREN -/ end-nested-expr: /- RPAREN -/ start-array: /- LSQUARE -/ end-array: /- RSQUARE -/ string: single-quoted-string | double-quoted-string # most microcontrollers cannot do floating point math so ignore real numbers number-units: number - units # number handles both hex and non-hex values for ease of use number: /('0'[xX] HEX+ | '-'? DIGIT+)/ | boolean units: /([mu]?'s'|[kM]?'Hz' | '%')/ boolean: /('TRUE'|'FALSE'|'true'|'false'|'0'|'1')/ validated-variable: identifier-without-keyword modifier-variable: identifier-without-keyword - variable modifier-constant: identifier-without-keyword - constant variable: DOLLAR identifier constant: number-units | number | string | array array: start-array - array-element-type* % list-separator - end-array array-element-type: - (number-units | number | string) - identifier-without-keyword: /(! keyword)( ALPHA[ WORDS]*)/ identifier: /( ALPHA [ WORDS ]*)/ keyword: /'if'|'else'|'while'|'true'|'false'|'TRUE'|'FALSE'/ list-separator: - COMMA - comment? single_quoted_string: /(: SINGLE ((: [^ BREAK BACK SINGLE] | BACK SINGLE | BACK BACK )*?) SINGLE )/ double_quoted_string: /(: DOUBLE ((: [^ BREAK BACK DOUBLE] | BACK DOUBLE | BACK BACK | BACK escape )*?) DOUBLE )/ escape: / [0nt] / Pegex-0.60/xt/grammars/Pg.pgx0000644000175000017500000047765012462227077014573 0ustar ingyingy%grammar SQL::Transform::Parser::Pg # The target production for the whole parse. stmtblock: stmtmulti # the thrashing around here is to discard "empty" statements... 
stmtmulti: stmt+ % / ~ ~ / stmt: ( AlterEventTrigStmt | AlterDatabaseStmt | AlterDatabaseSetStmt | AlterDefaultPrivilegesStmt | AlterDomainStmt | AlterEnumStmt | AlterExtensionStmt | AlterExtensionContentsStmt | AlterFdwStmt | AlterForeignServerStmt | AlterForeignTableStmt | AlterFunctionStmt | AlterGroupStmt | AlterObjectSchemaStmt | AlterOwnerStmt | AlterSeqStmt | AlterTableStmt | AlterCompositeTypeStmt | AlterRoleSetStmt | AlterRoleStmt | AlterTSConfigurationStmt | AlterTSDictionaryStmt | AlterUserMappingStmt | AlterUserSetStmt | AlterUserStmt | AnalyzeStmt | CheckPointStmt | ClosePortalStmt | ClusterStmt | CommentStmt | ConstraintsSetStmt | CopyStmt | CreateAsStmt | CreateAssertStmt | CreateCastStmt | CreateConversionStmt | CreateDomainStmt | CreateExtensionStmt | CreateFdwStmt | CreateForeignServerStmt | CreateForeignTableStmt | CreateFunctionStmt | CreateGroupStmt | CreateOpClassStmt | CreateOpFamilyStmt | AlterOpFamilyStmt | CreatePLangStmt | CreateSchemaStmt | CreateSeqStmt | CreateStmt | CreateTableSpaceStmt | CreateTrigStmt | CreateEventTrigStmt | CreateRoleStmt | CreateUserStmt | CreateUserMappingStmt | CreatedbStmt | DeallocateStmt | DeclareCursorStmt | DefineStmt | DeleteStmt | DiscardStmt | DoStmt | DropAssertStmt | DropCastStmt | DropFdwStmt | DropForeignServerStmt | DropGroupStmt | DropOpClassStmt | DropOpFamilyStmt | DropOwnedStmt | DropPLangStmt | DropRuleStmt | DropStmt | DropTableSpaceStmt | DropTrigStmt | DropRoleStmt | DropUserStmt | DropUserMappingStmt | DropdbStmt | ExecuteStmt | ExplainStmt | FetchStmt | GrantStmt | GrantRoleStmt | IndexStmt | InsertStmt | ListenStmt | LoadStmt | LockStmt | NotifyStmt | PrepareStmt | ReassignOwnedStmt | ReindexStmt | RemoveAggrStmt | RemoveFuncStmt | RemoveOperStmt | RenameStmt | RevokeStmt | RevokeRoleStmt | RuleStmt | SecLabelStmt | SelectStmt | TransactionStmt | TruncateStmt | UnlistenStmt | UpdateStmt | VacuumStmt | VariableResetStmt | VariableSetStmt | VariableShowStmt | ViewStmt )? ############################################################################# # # Create a new Postgres DBMS role # ############################################################################# CreateRoleStmt: CREATE ROLE RoleId opt_with OptRoleList opt_with: WITH? # Options for CREATE ROLE and ALTER ROLE (also used by CREATE/ALTER USER # for backwards compatibility). Note: the only option required by SQL99 # is "WITH ADMIN name". 
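# Usage sketch (comments only, so the grammar source stays valid): this is
# not part of the distribution, just an illustration of how a parser built
# from this grammar might be driven, reusing the Pegex::Parser / receiver
# wiring from xt/TestDevelPegex.pm and the $parser->parse($input) call from
# xt/speed.t.  The Pegex::Grammar->new(text => ...) constructor form, the
# file path, and the sample statement are assumptions for the example, not
# guarantees made by this file.
#
#   use Pegex::Parser;
#   use Pegex::Grammar;
#   use Pegex::Tree::Wrap;
#   use IO::All;
#
#   # Build a grammar object straight from this .pgx source:
#   my $grammar = Pegex::Grammar->new(text => scalar io->file('Pg.pgx')->all);
#
#   # Same parser/receiver pattern as the xt/ test helpers:
#   my $parser = Pegex::Parser->new(
#       grammar  => $grammar,
#       receiver => Pegex::Tree::Wrap->new,
#   );
#
#   # CreateRoleStmt: CREATE ROLE RoleId opt_with OptRoleList
#   my $tree = $parser->parse('CREATE ROLE admin');
#
# The role options accepted by CREATE ROLE / ALTER ROLE follow next.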
OptRoleList: CreateOptRoleElem* % ~ AlterOptRoleList: AlterOptRoleElem* % ~ AlterOptRoleElem : AlterOptRoleElem_1 | AlterOptRoleElem_2 | AlterOptRoleElem_3 | AlterOptRoleElem_4 | AlterOptRoleElem_5 | AlterOptRoleElem_6 | AlterOptRoleElem_7 | AlterOptRoleElem_8 | AlterOptRoleElem_9 AlterOptRoleElem_1: PASSWORD Sconst AlterOptRoleElem_2: PASSWORD NULL AlterOptRoleElem_3: ENCRYPTED PASSWORD Sconst AlterOptRoleElem_4: UNENCRYPTED PASSWORD Sconst AlterOptRoleElem_5: INHERIT AlterOptRoleElem_6: CONNECTION LIMIT SignedIconst AlterOptRoleElem_7: VALID UNTIL Sconst AlterOptRoleElem_8: USER name_list AlterOptRoleElem_9: IDENT CreateOptRoleElem : CreateOptRoleElem_1 | CreateOptRoleElem_2 | CreateOptRoleElem_3 | CreateOptRoleElem_4 | CreateOptRoleElem_5 | CreateOptRoleElem_6 CreateOptRoleElem_1: AlterOptRoleElem CreateOptRoleElem_2: SYSID Iconst CreateOptRoleElem_3: ADMIN name_list CreateOptRoleElem_4: ROLE name_list CreateOptRoleElem_5: IN ROLE name_list CreateOptRoleElem_6: IN GROUP name_list ############################################################################# # # Create a new Postgres DBMS user (role with implied login ability) # ############################################################################# CreateUserStmt: CREATE USER RoleId opt_with OptRoleList ############################################################################# # # Alter a postgresql DBMS role # ############################################################################# AlterRoleStmt: ALTER ROLE RoleId opt_with AlterOptRoleList opt_in_database: ( IN DATABASE database_name )? AlterRoleSetStmt: ALTER ROLE RoleId opt_in_database SetResetClause ############################################################################# # # Alter a postgresql DBMS user # ############################################################################# AlterUserStmt: ALTER USER RoleId opt_with AlterOptRoleList AlterUserSetStmt: ALTER USER RoleId SetResetClause ############################################################################# # # Drop a postgresql DBMS role # # XXX Ideally this would have CASCADE/RESTRICT options, but since a role # might own objects in multiple databases, there is presently no way to # implement either cascading or restricting. Caveat DBA. ############################################################################# DropRoleStmt : DropRoleStmt_1 | DropRoleStmt_2 DropRoleStmt_1: DROP ROLE name_list DropRoleStmt_2: DROP ROLE IF EXISTS name_list ############################################################################# # # Drop a postgresql DBMS user # # XXX Ideally this would have CASCADE/RESTRICT options, but since a user # might own objects in multiple databases, there is presently no way to # implement either cascading or restricting. Caveat DBA. 
############################################################################# DropUserStmt : DropUserStmt_1 | DropUserStmt_2 DropUserStmt_1: DROP USER name_list DropUserStmt_2: DROP USER IF EXISTS name_list ############################################################################# # # Create a postgresql group (role without login ability) # ############################################################################# CreateGroupStmt: CREATE GROUP RoleId opt_with OptRoleList ############################################################################# # # Alter a postgresql group # ############################################################################# AlterGroupStmt: ALTER GROUP RoleId add_drop USER name_list add_drop : add_drop_1 | add_drop_2 add_drop_1: ADD add_drop_2: DROP ############################################################################# # # Drop a postgresql group # # XXX see above notes about cascading DROP USER; groups have same problem. ############################################################################# DropGroupStmt : DropGroupStmt_1 | DropGroupStmt_2 DropGroupStmt_1: DROP GROUP name_list DropGroupStmt_2: DROP GROUP IF EXISTS name_list ############################################################################# # # Manipulate a schema # ############################################################################# CreateSchemaStmt : CreateSchemaStmt_1 | CreateSchemaStmt_2 CreateSchemaStmt_1: CREATE SCHEMA OptSchemaName AUTHORIZATION RoleId OptSchemaEltList CreateSchemaStmt_2: CREATE SCHEMA ColId OptSchemaEltList OptSchemaName: ColId? OptSchemaEltList: schema_stmt* % ~ # schema_stmt are the ones that can show up inside a CREATE SCHEMA # statement (in addition to by themselves). schema_stmt: CreateStmt | IndexStmt | CreateSeqStmt | CreateTrigStmt | GrantStmt | ViewStmt ############################################################################# # # Set PG internal variable # SET name TO 'var_value' # Include SQL92 syntax (thomas 1997-10-22): # SET TIME ZONE 'var_value' # ############################################################################# VariableSetStmt : VariableSetStmt_1 | VariableSetStmt_2 | VariableSetStmt_3 VariableSetStmt_1: SET set_rest VariableSetStmt_2: SET LOCAL set_rest VariableSetStmt_3: SET SESSION set_rest set_rest : set_rest_1 | set_rest_2 | set_rest_more set_rest_1: TRANSACTION transaction_mode_list set_rest_2: SESSION CHARACTERISTICS AS TRANSACTION transaction_mode_list set_rest_more : set_rest_more_1 | set_rest_more_2 | set_rest_more_3 | set_rest_more_4 | set_rest_more_5 | set_rest_more_6 | set_rest_more_7 | set_rest_more_8 | set_rest_more_9 | set_rest_more_10 | set_rest_more_11 | set_rest_more_12 | set_rest_more_13 | set_rest_more_14 set_rest_more_1 : # Generic SET syntaxes: var_name TO var_list set_rest_more_2 : var_name var_list set_rest_more_3 : var_name TO DEFAULT set_rest_more_4 : var_name DEFAULT set_rest_more_5 : var_name FROM CURRENT set_rest_more_6 : TIME ZONE zone_value set_rest_more_7 : CATALOG Sconst set_rest_more_8 : SCHEMA Sconst set_rest_more_9 : NAMES opt_encoding set_rest_more_10: ROLE ColId_or_Sconst set_rest_more_11: SESSION AUTHORIZATION ColId_or_Sconst set_rest_more_12: SESSION AUTHORIZATION DEFAULT set_rest_more_13: XML OPTION document_or_content set_rest_more_14: TRANSACTION SNAPSHOT Sconst var_name: ColId+ % / ~ ~ / var_list: var_value+ % / ~ ~ / var_value : var_value_1 | var_value_2 var_value_1: opt_boolean_or_string var_value_2: NumericOnly iso_level : iso_level_1 | iso_level_2 | iso_level_3 | 
iso_level_4 iso_level_1: READ UNCOMMITTED iso_level_2: READ COMMITTED iso_level_3: REPEATABLE READ iso_level_4: SERIALIZABLE opt_boolean_or_string : opt_boolean_or_string_1 | opt_boolean_or_string_2 | opt_boolean_or_string_3 | opt_boolean_or_string_4 opt_boolean_or_string_1: TRUE opt_boolean_or_string_2: FALSE opt_boolean_or_string_3: ON opt_boolean_or_string_4: ColId_or_Sconst # Timezone values can be: # - a string such as 'pst8pdt' # - an identifier such as "pst8pdt" # - an integer or floating point number # - a time interval per SQL99 # ColId gives reduce/reduce errors against ConstInterval and LOCAL, # so use IDENT (meaning we reject anything that is a key word). zone_value : zone_value_1 | zone_value_2 | zone_value_3 | zone_value_4 | zone_value_5 | zone_value_6 | zone_value_7 zone_value_1: Sconst zone_value_2: IDENT zone_value_3: ConstInterval Sconst opt_interval zone_value_4: ConstInterval Iconst Sconst opt_interval zone_value_5: NumericOnly zone_value_6: DEFAULT zone_value_7: LOCAL opt_encoding : ( opt_encoding_1 | opt_encoding_2 )? opt_encoding_1: Sconst opt_encoding_2: DEFAULT ColId_or_Sconst : ColId_or_Sconst_1 | ColId_or_Sconst_2 ColId_or_Sconst_1: ColId ColId_or_Sconst_2: Sconst VariableResetStmt : VariableResetStmt_1 | VariableResetStmt_2 | VariableResetStmt_3 | VariableResetStmt_4 | VariableResetStmt_5 VariableResetStmt_1: RESET var_name VariableResetStmt_2: RESET TIME ZONE VariableResetStmt_3: RESET TRANSACTION ISOLATION LEVEL VariableResetStmt_4: RESET SESSION AUTHORIZATION VariableResetStmt_5: RESET ALL # SetResetClause allows SET or RESET without LOCAL SetResetClause : SetResetClause_1 | SetResetClause_2 SetResetClause_1: SET set_rest SetResetClause_2: VariableResetStmt # SetResetClause allows SET or RESET without LOCAL FunctionSetResetClause : FunctionSetResetClause_1 | FunctionSetResetClause_2 FunctionSetResetClause_1: SET set_rest_more FunctionSetResetClause_2: VariableResetStmt VariableShowStmt : VariableShowStmt_1 | VariableShowStmt_2 | VariableShowStmt_3 | VariableShowStmt_4 | VariableShowStmt_5 VariableShowStmt_1: SHOW var_name VariableShowStmt_2: SHOW TIME ZONE VariableShowStmt_3: SHOW TRANSACTION ISOLATION LEVEL VariableShowStmt_4: SHOW SESSION AUTHORIZATION VariableShowStmt_5: SHOW ALL ConstraintsSetStmt: SET CONSTRAINTS constraints_set_list constraints_set_mode constraints_set_list : constraints_set_list_1 | constraints_set_list_2 constraints_set_list_1: ALL constraints_set_list_2: qualified_name_list constraints_set_mode : constraints_set_mode_1 | constraints_set_mode_2 constraints_set_mode_1: DEFERRED constraints_set_mode_2: IMMEDIATE # Checkpoint statement CheckPointStmt: CHECKPOINT ############################################################################# # # DISCARD { ALL | TEMP | PLANS } # ############################################################################# DiscardStmt : DiscardStmt_1 | DiscardStmt_2 | DiscardStmt_3 | DiscardStmt_4 DiscardStmt_1: DISCARD ALL DiscardStmt_2: DISCARD TEMP DiscardStmt_3: DISCARD TEMPORARY DiscardStmt_4: DISCARD PLANS ############################################################################# # # ALTER [ TABLE | INDEX | SEQUENCE | VIEW ] variations # # Note: we accept all subcommands for each of the four variants, and sort # out what's really legal at execution time. 
############################################################################# AlterTableStmt : AlterTableStmt_1 | AlterTableStmt_2 | AlterTableStmt_3 | AlterTableStmt_4 | AlterTableStmt_5 | AlterTableStmt_6 | AlterTableStmt_7 | AlterTableStmt_8 AlterTableStmt_1: ALTER TABLE relation_expr alter_table_cmds AlterTableStmt_2: ALTER TABLE IF EXISTS relation_expr alter_table_cmds AlterTableStmt_3: ALTER INDEX qualified_name alter_table_cmds AlterTableStmt_4: ALTER INDEX IF EXISTS qualified_name alter_table_cmds AlterTableStmt_5: ALTER SEQUENCE qualified_name alter_table_cmds AlterTableStmt_6: ALTER SEQUENCE IF EXISTS qualified_name alter_table_cmds AlterTableStmt_7: ALTER VIEW qualified_name alter_table_cmds AlterTableStmt_8: ALTER VIEW IF EXISTS qualified_name alter_table_cmds alter_table_cmds: alter_table_cmd+ % / ~ ~ / alter_table_cmd : alter_table_cmd_1 | alter_table_cmd_2 | alter_table_cmd_3 | alter_table_cmd_4 | alter_table_cmd_5 | alter_table_cmd_6 | alter_table_cmd_7 | alter_table_cmd_8 | alter_table_cmd_9 | alter_table_cmd_10 | alter_table_cmd_11 | alter_table_cmd_12 | alter_table_cmd_13 | alter_table_cmd_14 | alter_table_cmd_15 | alter_table_cmd_16 | alter_table_cmd_17 | alter_table_cmd_18 | alter_table_cmd_19 | alter_table_cmd_20 | alter_table_cmd_21 | alter_table_cmd_22 | alter_table_cmd_23 | alter_table_cmd_24 | alter_table_cmd_25 | alter_table_cmd_26 | alter_table_cmd_27 | alter_table_cmd_28 | alter_table_cmd_29 | alter_table_cmd_30 | alter_table_cmd_31 | alter_table_cmd_32 | alter_table_cmd_33 | alter_table_cmd_34 | alter_table_cmd_35 | alter_table_cmd_36 | alter_table_cmd_37 | alter_table_cmd_38 | alter_table_cmd_39 | alter_table_cmd_40 | alter_table_cmd_41 | alter_table_cmd_42 alter_table_cmd_1 : # ALTER TABLE ADD ADD columnDef alter_table_cmd_2 : ADD COLUMN columnDef alter_table_cmd_3 : ALTER opt_column ColId alter_column_default alter_table_cmd_4 : ALTER opt_column ColId DROP NOT NULL alter_table_cmd_5 : ALTER opt_column ColId SET NOT NULL alter_table_cmd_6 : ALTER opt_column ColId SET STATISTICS SignedIconst alter_table_cmd_7 : ALTER opt_column ColId SET reloptions alter_table_cmd_8 : ALTER opt_column ColId RESET reloptions alter_table_cmd_9 : ALTER opt_column ColId SET STORAGE ColId alter_table_cmd_10: DROP opt_column IF EXISTS ColId opt_drop_behavior alter_table_cmd_11: DROP opt_column ColId opt_drop_behavior alter_table_cmd_12: ALTER opt_column ColId opt_set_data TYPE Typename opt_collate_clause alter_using alter_table_cmd_13: ALTER opt_column ColId alter_generic_options alter_table_cmd_14: ADD TableConstraint alter_table_cmd_15: VALIDATE CONSTRAINT name alter_table_cmd_16: DROP CONSTRAINT IF EXISTS name opt_drop_behavior alter_table_cmd_17: DROP CONSTRAINT name opt_drop_behavior alter_table_cmd_18: SET WITH OIDS alter_table_cmd_19: SET WITHOUT OIDS alter_table_cmd_20: CLUSTER ON name alter_table_cmd_21: SET WITHOUT CLUSTER alter_table_cmd_22: ENABLE TRIGGER name alter_table_cmd_23: ENABLE ALWAYS TRIGGER name alter_table_cmd_24: ENABLE REPLICA TRIGGER name alter_table_cmd_25: ENABLE TRIGGER ALL alter_table_cmd_26: ENABLE TRIGGER USER alter_table_cmd_27: DISABLE TRIGGER name alter_table_cmd_28: DISABLE TRIGGER ALL alter_table_cmd_29: DISABLE TRIGGER USER alter_table_cmd_30: ENABLE RULE name alter_table_cmd_31: ENABLE ALWAYS RULE name alter_table_cmd_32: ENABLE REPLICA RULE name alter_table_cmd_33: DISABLE RULE name alter_table_cmd_34: INHERIT qualified_name alter_table_cmd_35: NO INHERIT qualified_name alter_table_cmd_36: OF any_name alter_table_cmd_37: NOT OF 
alter_table_cmd_38: OWNER TO RoleId alter_table_cmd_39: SET TABLESPACE name alter_table_cmd_40: SET reloptions alter_table_cmd_41: RESET reloptions alter_table_cmd_42: alter_generic_options alter_column_default : alter_column_default_1 | alter_column_default_2 alter_column_default_1: SET DEFAULT a_expr alter_column_default_2: DROP DEFAULT opt_drop_behavior : ( opt_drop_behavior_1 | opt_drop_behavior_2 )? opt_drop_behavior_1: CASCADE opt_drop_behavior_2: RESTRICT opt_collate_clause: ( COLLATE any_name )? alter_using: ( USING a_expr )? reloptions: reloption_list opt_reloptions: ( WITH reloptions )? reloption_list: reloption_elem+ % / ~ ~ / # This should match def_elem and also allow qualified names reloption_elem : reloption_elem_1 | reloption_elem_2 | reloption_elem_3 | reloption_elem_4 reloption_elem_1: ColLabel def_arg reloption_elem_2: ColLabel reloption_elem_3: ColLabel ColLabel def_arg reloption_elem_4: ColLabel ColLabel ############################################################################# # # ALTER TYPE # # really variants of the ALTER TABLE subcommands with different spellings ############################################################################# AlterCompositeTypeStmt: ALTER TYPE any_name alter_type_cmds alter_type_cmds: alter_type_cmd+ % / ~ ~ / alter_type_cmd : alter_type_cmd_1 | alter_type_cmd_2 | alter_type_cmd_3 | alter_type_cmd_4 alter_type_cmd_1: # ALTER TYPE ADD ATTRIBUTE [RESTRICT|CASCADE] ADD ATTRIBUTE TableFuncElement opt_drop_behavior alter_type_cmd_2: DROP ATTRIBUTE IF EXISTS ColId opt_drop_behavior alter_type_cmd_3: DROP ATTRIBUTE ColId opt_drop_behavior alter_type_cmd_4: ALTER ATTRIBUTE ColId opt_set_data TYPE Typename opt_collate_clause opt_drop_behavior ############################################################################# # # QUERY : # close # ############################################################################# ClosePortalStmt : ClosePortalStmt_1 | ClosePortalStmt_2 ClosePortalStmt_1: CLOSE cursor_name ClosePortalStmt_2: CLOSE ALL ############################################################################# # # QUERY : # COPY relname [(columnList)] FROM/TO file [WITH] [(options)] # COPY ( SELECT ... ) TO file [WITH] [(options)] # # In the preferred syntax the options are comma-separated # and use generic identifiers instead of keywords. The pre-9.0 # syntax had a hard-wired, space-separated set of options. # # Really old syntax, from versions 7.2 and prior: # COPY [ BINARY ] table [ WITH OIDS ] FROM/TO file # [ [ USING ] DELIMITERS 'delimiter' ] ] # [ WITH NULL AS 'null string' ] # This option placement is not supported with COPY (SELECT...). # ############################################################################# CopyStmt : CopyStmt_1 | CopyStmt_2 CopyStmt_1: COPY opt_binary qualified_name opt_column_list opt_oids copy_from copy_file_name copy_delimiter opt_with copy_options CopyStmt_2: COPY select_with_parens TO copy_file_name opt_with copy_options copy_from : copy_from_1 | copy_from_2 copy_from_1: FROM copy_from_2: TO # copy_file_name NULL indicates stdio is used. Whether stdin or stdout is # used depends on the direction. (It really doesn't make sense to copy from # stdout. We silently correct the "typo".) 
- AY 9/94 copy_file_name : copy_file_name_1 | copy_file_name_2 | copy_file_name_3 copy_file_name_1: Sconst copy_file_name_2: STDIN copy_file_name_3: STDOUT copy_options : copy_options_1 | copy_options_2 copy_options_1: copy_opt_list copy_options_2: copy_generic_opt_list # old COPY option syntax copy_opt_list: copy_opt_item* % ~ copy_opt_item : copy_opt_item_1 | copy_opt_item_2 | copy_opt_item_3 | copy_opt_item_4 | copy_opt_item_5 | copy_opt_item_6 | copy_opt_item_7 | copy_opt_item_8 | copy_opt_item_9 | copy_opt_item_10 | copy_opt_item_11 | copy_opt_item_12 copy_opt_item_1 : BINARY copy_opt_item_2 : OIDS copy_opt_item_3 : DELIMITER opt_as Sconst copy_opt_item_4 : NULL opt_as Sconst copy_opt_item_5 : CSV copy_opt_item_6 : HEADER copy_opt_item_7 : QUOTE opt_as Sconst copy_opt_item_8 : ESCAPE opt_as Sconst copy_opt_item_9 : FORCE QUOTE columnList copy_opt_item_10: FORCE QUOTE copy_opt_item_11: FORCE NOT NULL columnList copy_opt_item_12: ENCODING Sconst # The following exist for backward compatibility with very old versions opt_binary: BINARY? opt_oids: ( WITH OIDS )? copy_delimiter: ( opt_using DELIMITERS Sconst )? opt_using: USING? # new COPY option syntax copy_generic_opt_list: copy_generic_opt_elem+ % / ~ ~ / copy_generic_opt_elem: ColLabel copy_generic_opt_arg copy_generic_opt_arg : ( copy_generic_opt_arg_1 | copy_generic_opt_arg_2 | copy_generic_opt_arg_3 | copy_generic_opt_arg_4 )? copy_generic_opt_arg_1: opt_boolean_or_string copy_generic_opt_arg_2: NumericOnly copy_generic_opt_arg_3: copy_generic_opt_arg_4: copy_generic_opt_arg_list copy_generic_opt_arg_list: copy_generic_opt_arg_list_item+ % / ~ ~ / # beware of emitting non-string list elements here; see commands/define.c copy_generic_opt_arg_list_item: opt_boolean_or_string ############################################################################# # # QUERY : # CREATE TABLE relname # ############################################################################# CreateStmt : CreateStmt_1 | CreateStmt_2 | CreateStmt_3 | CreateStmt_4 CreateStmt_1: CREATE OptTemp TABLE qualified_name OptTableElementList OptInherit OptWith OnCommitOption OptTableSpace CreateStmt_2: CREATE OptTemp TABLE IF NOT EXISTS qualified_name OptTableElementList OptInherit OptWith OnCommitOption OptTableSpace CreateStmt_3: CREATE OptTemp TABLE qualified_name OF any_name OptTypedTableElementList OptWith OnCommitOption OptTableSpace CreateStmt_4: CREATE OptTemp TABLE IF NOT EXISTS qualified_name OF any_name OptTypedTableElementList OptWith OnCommitOption OptTableSpace # Redundancy here is needed to avoid shift/reduce conflicts, # since TEMP is not a reserved word. See also OptTempTableName. # # NOTE: we accept both GLOBAL and LOCAL options; since we have no modules # the LOCAL keyword is really meaningless. OptTemp : ( OptTemp_1 | OptTemp_2 | OptTemp_3 | OptTemp_4 | OptTemp_5 | OptTemp_6 | OptTemp_7 )? OptTemp_1: TEMPORARY OptTemp_2: TEMP OptTemp_3: LOCAL TEMPORARY OptTemp_4: LOCAL TEMP OptTemp_5: GLOBAL TEMPORARY OptTemp_6: GLOBAL TEMP OptTemp_7: UNLOGGED OptTableElementList: TableElementList? OptTypedTableElementList: ( TypedTableElementList )? 
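# Illustrative examples (not part of the original grammar; table, column and
# type names are hypothetical) of statements the CreateStmt and OptTemp rules
# above are intended to accept:
#
#   CREATE TABLE films (code char(5), title varchar(40));
#   CREATE TEMP TABLE IF NOT EXISTS scratch (id int) ON COMMIT DROP;
#   CREATE UNLOGGED TABLE measurements OF measurement_type;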
TableElementList: TableElement+ % / ~ ~ / TypedTableElementList: TypedTableElement+ % / ~ ~ / TableElement : TableElement_1 | TableElement_2 | TableElement_3 TableElement_1: columnDef TableElement_2: TableLikeClause TableElement_3: TableConstraint TypedTableElement : TypedTableElement_1 | TypedTableElement_2 TypedTableElement_1: columnOptions TypedTableElement_2: TableConstraint columnDef: ColId Typename create_generic_options ColQualList columnOptions: ColId WITH OPTIONS ColQualList ColQualList: ColConstraint* % ~ ColConstraint : ColConstraint_1 | ColConstraint_2 | ColConstraint_3 | ColConstraint_4 ColConstraint_1: CONSTRAINT name ColConstraintElem ColConstraint_2: ColConstraintElem ColConstraint_3: ConstraintAttr ColConstraint_4: COLLATE any_name # DEFAULT NULL is already the default for Postgres. # But define it here and carry it forward into the system # to make it explicit. # - thomas 1998-09-13 # # WITH NULL and NULL are not SQL92-standard syntax elements, # so leave them out. Use DEFAULT NULL to explicitly indicate # that a column may have that value. WITH NULL leads to # shift/reduce conflicts with WITH TIME ZONE anyway. # - thomas 1999-01-08 # # DEFAULT expression must be b_expr not a_expr to prevent shift/reduce # conflict on NOT (since NOT might start a subsequent NOT NULL constraint, # or be part of a_expr NOT LIKE or similar constructs). ColConstraintElem : ColConstraintElem_1 | ColConstraintElem_2 | ColConstraintElem_3 | ColConstraintElem_4 | ColConstraintElem_5 | ColConstraintElem_6 | ColConstraintElem_7 ColConstraintElem_1: NOT NULL ColConstraintElem_2: NULL ColConstraintElem_3: UNIQUE opt_definition OptConsTableSpace ColConstraintElem_4: PRIMARY KEY opt_definition OptConsTableSpace ColConstraintElem_5: CHECK a_expr opt_no_inherit ColConstraintElem_6: DEFAULT b_expr ColConstraintElem_7: REFERENCES qualified_name opt_column_list key_match key_actions # ConstraintAttr represents constraint attributes, which we parse as if # they were independent constraint clauses, in order to avoid shift/reduce # conflicts (since NOT might start either an independent NOT NULL clause # or an attribute). parse_utilcmd.c is responsible for attaching the # attribute information to the preceding "real" constraint node, and for # complaining if attribute clauses appear in the wrong place or wrong # combinations. # # See also ConstraintAttributeSpec, which can be used in places where # there is no parsing conflict. (Note: currently, NOT VALID and NO INHERIT # are allowed clauses in ConstraintAttributeSpec, but not here. Someday we # might need to allow them here too, but for the moment it doesn't seem # useful in the statements that use ConstraintAttr.) ConstraintAttr : ConstraintAttr_1 | ConstraintAttr_2 | ConstraintAttr_3 | ConstraintAttr_4 ConstraintAttr_1: DEFERRABLE ConstraintAttr_2: NOT DEFERRABLE ConstraintAttr_3: INITIALLY DEFERRED ConstraintAttr_4: INITIALLY IMMEDIATE TableLikeClause: LIKE qualified_name TableLikeOptionList TableLikeOptionList : TableLikeOptionList_1 | TableLikeOptionList_2 TableLikeOptionList_1: TableLikeOption* % / ~ INCLUDING ~ / TableLikeOptionList_2: 2+ % / ~ EXCLUDING ~ / TableLikeOption : TableLikeOption_1 | TableLikeOption_2 | TableLikeOption_3 | TableLikeOption_4 | TableLikeOption_5 | TableLikeOption_6 TableLikeOption_1: DEFAULTS TableLikeOption_2: CONSTRAINTS TableLikeOption_3: INDEXES TableLikeOption_4: STORAGE TableLikeOption_5: COMMENTS TableLikeOption_6: ALL # ConstraintElem specifies constraint syntax which is not embedded into # a column definition. 
ColConstraintElem specifies the embedded form. # - thomas 1997-12-03 TableConstraint : TableConstraint_1 | TableConstraint_2 TableConstraint_1: CONSTRAINT name ConstraintElem TableConstraint_2: ConstraintElem ConstraintElem : ConstraintElem_1 | ConstraintElem_2 | ConstraintElem_3 | ConstraintElem_4 | ConstraintElem_5 | ConstraintElem_6 | ConstraintElem_7 ConstraintElem_1: CHECK a_expr ConstraintAttributeSpec ConstraintElem_2: UNIQUE columnList opt_definition OptConsTableSpace ConstraintAttributeSpec ConstraintElem_3: UNIQUE ExistingIndex ConstraintAttributeSpec ConstraintElem_4: PRIMARY KEY columnList opt_definition OptConsTableSpace ConstraintAttributeSpec ConstraintElem_5: PRIMARY KEY ExistingIndex ConstraintAttributeSpec ConstraintElem_6: EXCLUDE access_method_clause ExclusionConstraintList opt_definition OptConsTableSpace ExclusionWhereClause ConstraintAttributeSpec ConstraintElem_7: FOREIGN KEY columnList REFERENCES qualified_name opt_column_list key_match key_actions ConstraintAttributeSpec opt_no_inherit: ( NO INHERIT )? opt_column_list: ( columnList )? columnList: columnElem+ % / ~ ~ / columnElem: ColId key_match : ( key_match_1 | key_match_2 | key_match_3 )? key_match_1: MATCH FULL key_match_2: MATCH PARTIAL key_match_3: MATCH SIMPLE ExclusionConstraintList: ExclusionConstraintElem+ % / ~ ~ / ExclusionConstraintElem : ExclusionConstraintElem_1 | ExclusionConstraintElem_2 ExclusionConstraintElem_1: index_elem WITH any_operator ExclusionConstraintElem_2: index_elem WITH OPERATOR any_operator ExclusionWhereClause: ( WHERE a_expr )? # We combine the update and delete actions into one value temporarily # for simplicity of parsing, and then break them down again in the # calling production. update is in the left 8 bits, delete in the right. # Note that NOACTION is the default. key_actions : ( key_actions_1 | key_actions_2 | key_actions_3 | key_actions_4 )? key_actions_1: key_update key_actions_2: key_delete key_actions_3: key_update key_delete key_actions_4: key_delete key_update key_update: ON UPDATE key_action key_delete: ON DELETE key_action key_action : key_action_1 | key_action_2 | key_action_3 | key_action_4 | key_action_5 key_action_1: NO ACTION key_action_2: RESTRICT key_action_3: CASCADE key_action_4: SET NULL key_action_5: SET DEFAULT OptInherit: ( INHERITS qualified_name_list )? # WITH (options) is preferred, WITH OIDS and WITHOUT OIDS are legacy forms OptWith : ( OptWith_1 | OptWith_2 | OptWith_3 )? OptWith_1: WITH reloptions OptWith_2: WITH OIDS OptWith_3: WITHOUT OIDS OnCommitOption : ( OnCommitOption_1 | OnCommitOption_2 | OnCommitOption_3 )? OnCommitOption_1: ON COMMIT DROP OnCommitOption_2: ON COMMIT DELETE ROWS OnCommitOption_3: ON COMMIT PRESERVE ROWS OptTableSpace: ( TABLESPACE name )? OptConsTableSpace: ( USING INDEX TABLESPACE name )? ExistingIndex: USING INDEX index_name ############################################################################# # # QUERY : # CREATE TABLE relname AS SelectStmt [ WITH [NO] DATA ] # # # Note: SELECT ... INTO is a now-deprecated alternative for this. # ############################################################################# CreateAsStmt: CREATE OptTemp TABLE create_as_target AS SelectStmt opt_with_data create_as_target: qualified_name opt_column_list OptWith OnCommitOption OptTableSpace opt_with_data : ( opt_with_data_1 | opt_with_data_2 )? 
opt_with_data_1: WITH DATA opt_with_data_2: WITH NO DATA ############################################################################# # # QUERY : # CREATE SEQUENCE seqname # ALTER SEQUENCE seqname # ############################################################################# CreateSeqStmt: CREATE OptTemp SEQUENCE qualified_name OptSeqOptList AlterSeqStmt : AlterSeqStmt_1 | AlterSeqStmt_2 AlterSeqStmt_1: ALTER SEQUENCE qualified_name SeqOptList AlterSeqStmt_2: ALTER SEQUENCE IF EXISTS qualified_name SeqOptList OptSeqOptList: SeqOptList? SeqOptList: SeqOptElem+ % ~ SeqOptElem : SeqOptElem_1 | SeqOptElem_2 | SeqOptElem_3 | SeqOptElem_4 | SeqOptElem_5 | SeqOptElem_6 | SeqOptElem_7 | SeqOptElem_8 | SeqOptElem_9 | SeqOptElem_10 | SeqOptElem_11 | SeqOptElem_12 SeqOptElem_1 : CACHE NumericOnly SeqOptElem_2 : CYCLE SeqOptElem_3 : NO CYCLE SeqOptElem_4 : INCREMENT opt_by NumericOnly SeqOptElem_5 : MAXVALUE NumericOnly SeqOptElem_6 : MINVALUE NumericOnly SeqOptElem_7 : NO MAXVALUE SeqOptElem_8 : NO MINVALUE SeqOptElem_9 : OWNED BY any_name SeqOptElem_10: START opt_with NumericOnly SeqOptElem_11: RESTART SeqOptElem_12: RESTART opt_with NumericOnly opt_by: BY? NumericOnly : NumericOnly_1 | NumericOnly_2 | NumericOnly_3 NumericOnly_1: FCONST NumericOnly_2: FCONST NumericOnly_3: SignedIconst NumericOnly_list: NumericOnly+ % / ~ ~ / ############################################################################# # # QUERIES : # CREATE [OR REPLACE] [TRUSTED] [PROCEDURAL] LANGUAGE ... # DROP [PROCEDURAL] LANGUAGE ... # ############################################################################# CreatePLangStmt : CreatePLangStmt_1 | CreatePLangStmt_2 CreatePLangStmt_1: CREATE opt_or_replace opt_trusted opt_procedural LANGUAGE ColId_or_Sconst CreatePLangStmt_2: CREATE opt_or_replace opt_trusted opt_procedural LANGUAGE ColId_or_Sconst HANDLER handler_name opt_inline_handler opt_validator opt_trusted: TRUSTED? # This ought to be just func_name, but that causes reduce/reduce conflicts # (CREATE LANGUAGE is the only place where func_name isn't followed by '('). # Work around by using simple names, instead. handler_name : handler_name_1 | handler_name_2 handler_name_1: name handler_name_2: name attrs opt_inline_handler: ( INLINE handler_name )? validator_clause : validator_clause_1 | validator_clause_2 validator_clause_1: VALIDATOR handler_name validator_clause_2: NO VALIDATOR opt_validator: validator_clause? DropPLangStmt : DropPLangStmt_1 | DropPLangStmt_2 DropPLangStmt_1: DROP opt_procedural LANGUAGE ColId_or_Sconst opt_drop_behavior DropPLangStmt_2: DROP opt_procedural LANGUAGE IF EXISTS ColId_or_Sconst opt_drop_behavior opt_procedural: PROCEDURAL? ############################################################################# # # QUERY: # CREATE TABLESPACE tablespace LOCATION '/path/to/tablespace/'' # ############################################################################# CreateTableSpaceStmt: CREATE TABLESPACE name OptTableSpaceOwner LOCATION Sconst OptTableSpaceOwner: ( OWNER name )? ############################################################################# # # QUERY : # DROP TABLESPACE # # No need for drop behaviour as we cannot implement dependencies for # objects in other databases; we can only support RESTRICT. 
# ############################################################################ DropTableSpaceStmt : DropTableSpaceStmt_1 | DropTableSpaceStmt_2 DropTableSpaceStmt_1: DROP TABLESPACE name DropTableSpaceStmt_2: DROP TABLESPACE IF EXISTS name ############################################################################# # # QUERY: # CREATE EXTENSION extension # [ WITH ] [ SCHEMA schema ] [ VERSION version ] [ FROM oldversion ] # ############################################################################# CreateExtensionStmt : CreateExtensionStmt_1 | CreateExtensionStmt_2 CreateExtensionStmt_1: CREATE EXTENSION name opt_with create_extension_opt_list CreateExtensionStmt_2: CREATE EXTENSION IF NOT EXISTS name opt_with create_extension_opt_list create_extension_opt_list: create_extension_opt_item* % ~ create_extension_opt_item : create_extension_opt_item_1 | create_extension_opt_item_2 | create_extension_opt_item_3 create_extension_opt_item_1: SCHEMA name create_extension_opt_item_2: VERSION ColId_or_Sconst create_extension_opt_item_3: FROM ColId_or_Sconst ############################################################################# # # ALTER EXTENSION name UPDATE [ TO version ] # ############################################################################# AlterExtensionStmt: ALTER EXTENSION name UPDATE alter_extension_opt_list alter_extension_opt_list: alter_extension_opt_item* % ~ alter_extension_opt_item: TO ColId_or_Sconst ############################################################################# # # ALTER EXTENSION name ADD/DROP object-identifier # ############################################################################# AlterExtensionContentsStmt : AlterExtensionContentsStmt_1 | AlterExtensionContentsStmt_2 | AlterExtensionContentsStmt_3 | AlterExtensionContentsStmt_4 | AlterExtensionContentsStmt_5 | AlterExtensionContentsStmt_6 | AlterExtensionContentsStmt_7 | AlterExtensionContentsStmt_8 | AlterExtensionContentsStmt_9 | AlterExtensionContentsStmt_10 | AlterExtensionContentsStmt_11 | AlterExtensionContentsStmt_12 | AlterExtensionContentsStmt_13 | AlterExtensionContentsStmt_14 | AlterExtensionContentsStmt_15 | AlterExtensionContentsStmt_16 | AlterExtensionContentsStmt_17 | AlterExtensionContentsStmt_18 | AlterExtensionContentsStmt_19 | AlterExtensionContentsStmt_20 | AlterExtensionContentsStmt_21 | AlterExtensionContentsStmt_22 | AlterExtensionContentsStmt_23 AlterExtensionContentsStmt_1 : ALTER EXTENSION name add_drop AGGREGATE func_name aggr_args AlterExtensionContentsStmt_2 : ALTER EXTENSION name add_drop CAST Typename AS Typename AlterExtensionContentsStmt_3 : ALTER EXTENSION name add_drop COLLATION any_name AlterExtensionContentsStmt_4 : ALTER EXTENSION name add_drop CONVERSION any_name AlterExtensionContentsStmt_5 : ALTER EXTENSION name add_drop DOMAIN any_name AlterExtensionContentsStmt_6 : ALTER EXTENSION name add_drop FUNCTION function_with_argtypes AlterExtensionContentsStmt_7 : ALTER EXTENSION name add_drop opt_procedural LANGUAGE name AlterExtensionContentsStmt_8 : ALTER EXTENSION name add_drop OPERATOR any_operator oper_argtypes AlterExtensionContentsStmt_9 : ALTER EXTENSION name add_drop OPERATOR CLASS any_name USING access_method AlterExtensionContentsStmt_10: ALTER EXTENSION name add_drop OPERATOR FAMILY any_name USING access_method AlterExtensionContentsStmt_11: ALTER EXTENSION name add_drop SCHEMA name AlterExtensionContentsStmt_12: ALTER EXTENSION name add_drop TABLE any_name AlterExtensionContentsStmt_13: ALTER EXTENSION name add_drop EVENT TRIGGER name 
AlterExtensionContentsStmt_14: ALTER EXTENSION name add_drop TEXT SEARCH PARSER any_name AlterExtensionContentsStmt_15: ALTER EXTENSION name add_drop TEXT SEARCH DICTIONARY any_name AlterExtensionContentsStmt_16: ALTER EXTENSION name add_drop TEXT SEARCH TEMPLATE any_name AlterExtensionContentsStmt_17: ALTER EXTENSION name add_drop TEXT SEARCH CONFIGURATION any_name AlterExtensionContentsStmt_18: ALTER EXTENSION name add_drop SEQUENCE any_name AlterExtensionContentsStmt_19: ALTER EXTENSION name add_drop VIEW any_name AlterExtensionContentsStmt_20: ALTER EXTENSION name add_drop FOREIGN TABLE any_name AlterExtensionContentsStmt_21: ALTER EXTENSION name add_drop FOREIGN DATA WRAPPER name AlterExtensionContentsStmt_22: ALTER EXTENSION name add_drop SERVER name AlterExtensionContentsStmt_23: ALTER EXTENSION name add_drop TYPE any_name ############################################################################# # # QUERY: # CREATE FOREIGN DATA WRAPPER name options # ############################################################################# CreateFdwStmt: CREATE FOREIGN DATA WRAPPER name opt_fdw_options create_generic_options fdw_option : fdw_option_1 | fdw_option_2 | fdw_option_3 | fdw_option_4 fdw_option_1: HANDLER handler_name fdw_option_2: NO HANDLER fdw_option_3: VALIDATOR handler_name fdw_option_4: NO VALIDATOR fdw_options: fdw_option+ % ~ opt_fdw_options: fdw_options? ############################################################################# # # QUERY : # DROP FOREIGN DATA WRAPPER name # ############################################################################ DropFdwStmt : DropFdwStmt_1 | DropFdwStmt_2 DropFdwStmt_1: DROP FOREIGN DATA WRAPPER name opt_drop_behavior DropFdwStmt_2: DROP FOREIGN DATA WRAPPER IF EXISTS name opt_drop_behavior ############################################################################# # # QUERY : # ALTER FOREIGN DATA WRAPPER name options # ############################################################################ AlterFdwStmt : AlterFdwStmt_1 | AlterFdwStmt_2 AlterFdwStmt_1: ALTER FOREIGN DATA WRAPPER name opt_fdw_options alter_generic_options AlterFdwStmt_2: ALTER FOREIGN DATA WRAPPER name fdw_options # Options definition for CREATE FDW, SERVER and USER MAPPING create_generic_options: ( OPTIONS generic_option_list )? generic_option_list: generic_option_elem+ % / ~ ~ / # Options definition for ALTER FDW, SERVER and USER MAPPING alter_generic_options: OPTIONS alter_generic_option_list alter_generic_option_list: alter_generic_option_elem+ % / ~ ~ / alter_generic_option_elem : alter_generic_option_elem_1 | alter_generic_option_elem_2 | alter_generic_option_elem_3 | alter_generic_option_elem_4 alter_generic_option_elem_1: generic_option_elem alter_generic_option_elem_2: SET generic_option_elem alter_generic_option_elem_3: ADD generic_option_elem alter_generic_option_elem_4: DROP generic_option_name generic_option_elem: generic_option_name generic_option_arg generic_option_name: ColLabel # We could use def_arg here, but the spec only requires string literals generic_option_arg: Sconst ############################################################################# # # QUERY: # CREATE SERVER name [TYPE] [VERSION] [OPTIONS] # ############################################################################# CreateForeignServerStmt: CREATE SERVER name opt_type opt_foreign_server_version FOREIGN DATA WRAPPER name create_generic_options opt_type: ( TYPE Sconst )? 
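# Illustrative example (not part of the original grammar; the server, wrapper
# and option values are hypothetical) of a statement CreateForeignServerStmt
# above is intended to accept:
#
#   CREATE SERVER film_server FOREIGN DATA WRAPPER postgres_fdw
#       OPTIONS (host 'example.org', dbname 'films');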
foreign_server_version : foreign_server_version_1 | foreign_server_version_2 foreign_server_version_1: VERSION Sconst foreign_server_version_2: VERSION NULL opt_foreign_server_version: foreign_server_version? ############################################################################# # # QUERY : # DROP SERVER name # ############################################################################ DropForeignServerStmt : DropForeignServerStmt_1 | DropForeignServerStmt_2 DropForeignServerStmt_1: DROP SERVER name opt_drop_behavior DropForeignServerStmt_2: DROP SERVER IF EXISTS name opt_drop_behavior ############################################################################# # # QUERY : # ALTER SERVER name [VERSION] [OPTIONS] # ############################################################################ AlterForeignServerStmt : AlterForeignServerStmt_1 | AlterForeignServerStmt_2 | AlterForeignServerStmt_3 AlterForeignServerStmt_1: ALTER SERVER name foreign_server_version alter_generic_options AlterForeignServerStmt_2: ALTER SERVER name foreign_server_version AlterForeignServerStmt_3: ALTER SERVER name alter_generic_options ############################################################################# # # QUERY: # CREATE FOREIGN TABLE relname (...) SERVER name (...) # ############################################################################# CreateForeignTableStmt : CreateForeignTableStmt_1 | CreateForeignTableStmt_2 CreateForeignTableStmt_1: CREATE FOREIGN TABLE qualified_name OptForeignTableElementList SERVER name create_generic_options CreateForeignTableStmt_2: CREATE FOREIGN TABLE IF NOT EXISTS qualified_name OptForeignTableElementList SERVER name create_generic_options OptForeignTableElementList : OptForeignTableElementList_1 | OptForeignTableElementList_2 OptForeignTableElementList_1: ForeignTableElementList OptForeignTableElementList_2: ForeignTableElementList: ForeignTableElement+ % / ~ ~ / ForeignTableElement: columnDef ############################################################################# # # QUERY: # ALTER FOREIGN TABLE relname [...] 
# ############################################################################# AlterForeignTableStmt : AlterForeignTableStmt_1 | AlterForeignTableStmt_2 AlterForeignTableStmt_1: ALTER FOREIGN TABLE relation_expr alter_table_cmds AlterForeignTableStmt_2: ALTER FOREIGN TABLE IF EXISTS relation_expr alter_table_cmds ############################################################################# # # QUERY: # CREATE USER MAPPING FOR auth_ident SERVER name [OPTIONS] # ############################################################################# CreateUserMappingStmt: CREATE USER MAPPING FOR auth_ident SERVER name create_generic_options # User mapping authorization identifier auth_ident : auth_ident_1 | auth_ident_2 | auth_ident_3 auth_ident_1: CURRENT_USER auth_ident_2: USER auth_ident_3: RoleId ############################################################################# # # QUERY : # DROP USER MAPPING FOR auth_ident SERVER name # ############################################################################ DropUserMappingStmt : DropUserMappingStmt_1 | DropUserMappingStmt_2 DropUserMappingStmt_1: DROP USER MAPPING FOR auth_ident SERVER name DropUserMappingStmt_2: DROP USER MAPPING IF EXISTS FOR auth_ident SERVER name ############################################################################# # # QUERY : # ALTER USER MAPPING FOR auth_ident SERVER name OPTIONS # ############################################################################ AlterUserMappingStmt: ALTER USER MAPPING FOR auth_ident SERVER name alter_generic_options ############################################################################# # # QUERIES : # CREATE TRIGGER ... # DROP TRIGGER ... # ############################################################################# CreateTrigStmt : CreateTrigStmt_1 | CreateTrigStmt_2 CreateTrigStmt_1: CREATE TRIGGER name TriggerActionTime TriggerEvents ON qualified_name TriggerForSpec TriggerWhen EXECUTE PROCEDURE func_name TriggerFuncArgs CreateTrigStmt_2: CREATE CONSTRAINT TRIGGER name AFTER TriggerEvents ON qualified_name OptConstrFromTable ConstraintAttributeSpec FOR EACH ROW TriggerWhen EXECUTE PROCEDURE func_name TriggerFuncArgs TriggerActionTime : TriggerActionTime_1 | TriggerActionTime_2 | TriggerActionTime_3 TriggerActionTime_1: BEFORE TriggerActionTime_2: AFTER TriggerActionTime_3: INSTEAD OF TriggerEvents: TriggerOneEvent+ % / ~ OR ~ / TriggerOneEvent : TriggerOneEvent_1 | TriggerOneEvent_2 | TriggerOneEvent_3 | TriggerOneEvent_4 | TriggerOneEvent_5 TriggerOneEvent_1: INSERT TriggerOneEvent_2: DELETE TriggerOneEvent_3: UPDATE TriggerOneEvent_4: UPDATE OF columnList TriggerOneEvent_5: TRUNCATE TriggerForSpec: ( FOR TriggerForOptEach TriggerForType )? TriggerForOptEach: EACH? TriggerForType : TriggerForType_1 | TriggerForType_2 TriggerForType_1: ROW TriggerForType_2: STATEMENT TriggerWhen: ( WHEN a_expr )? TriggerFuncArgs : TriggerFuncArgs_1 | TriggerFuncArgs_2 TriggerFuncArgs_1: TriggerFuncArg TriggerFuncArgs_2: TriggerFuncArg* % / ~ ~ / TriggerFuncArg : TriggerFuncArg_1 | TriggerFuncArg_2 | TriggerFuncArg_3 | TriggerFuncArg_4 TriggerFuncArg_1: Iconst TriggerFuncArg_2: FCONST TriggerFuncArg_3: Sconst TriggerFuncArg_4: ColLabel OptConstrFromTable: ( FROM qualified_name )? 
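# Illustrative example (not part of the original grammar; trigger, table and
# function names are hypothetical) of a statement CreateTrigStmt_1 above is
# intended to accept:
#
#   CREATE TRIGGER check_update BEFORE UPDATE OF balance ON accounts
#       FOR EACH ROW WHEN (OLD.balance <> NEW.balance)
#       EXECUTE PROCEDURE check_account_update();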
ConstraintAttributeSpec: ConstraintAttributeElem* % ~ ConstraintAttributeElem : ConstraintAttributeElem_1 | ConstraintAttributeElem_2 | ConstraintAttributeElem_3 | ConstraintAttributeElem_4 | ConstraintAttributeElem_5 | ConstraintAttributeElem_6 ConstraintAttributeElem_1: NOT DEFERRABLE ConstraintAttributeElem_2: DEFERRABLE ConstraintAttributeElem_3: INITIALLY IMMEDIATE ConstraintAttributeElem_4: INITIALLY DEFERRED ConstraintAttributeElem_5: NOT VALID ConstraintAttributeElem_6: NO INHERIT DropTrigStmt : DropTrigStmt_1 | DropTrigStmt_2 DropTrigStmt_1: DROP TRIGGER name ON qualified_name opt_drop_behavior DropTrigStmt_2: DROP TRIGGER IF EXISTS name ON qualified_name opt_drop_behavior ############################################################################# # # QUERIES : # CREATE EVENT TRIGGER ... # DROP EVENT TRIGGER ... # ALTER EVENT TRIGGER ... # ############################################################################# CreateEventTrigStmt : CreateEventTrigStmt_1 | CreateEventTrigStmt_2 CreateEventTrigStmt_1: CREATE EVENT TRIGGER name ON ColLabel EXECUTE PROCEDURE func_name CreateEventTrigStmt_2: CREATE EVENT TRIGGER name ON ColLabel WHEN event_trigger_when_list EXECUTE PROCEDURE func_name event_trigger_when_list: event_trigger_when_item+ % / ~ AND ~ / event_trigger_when_item: ColId IN event_trigger_value_list event_trigger_value_list: SCONST+ % / ~ ~ / AlterEventTrigStmt: ALTER EVENT TRIGGER name enable_trigger enable_trigger : enable_trigger_1 | enable_trigger_2 | enable_trigger_3 | enable_trigger_4 enable_trigger_1: ENABLE enable_trigger_2: ENABLE REPLICA enable_trigger_3: ENABLE ALWAYS enable_trigger_4: DISABLE ############################################################################# # # QUERIES : # CREATE ASSERTION ... # DROP ASSERTION ... # ############################################################################# CreateAssertStmt: CREATE ASSERTION name CHECK a_expr ConstraintAttributeSpec DropAssertStmt: DROP ASSERTION name opt_drop_behavior ############################################################################# # # QUERY : # define (aggregate,operator,type) # ############################################################################# DefineStmt : DefineStmt_1 | DefineStmt_2 | DefineStmt_3 | DefineStmt_4 | DefineStmt_5 | DefineStmt_6 | DefineStmt_7 | DefineStmt_8 | DefineStmt_9 | DefineStmt_10 | DefineStmt_11 | DefineStmt_12 | DefineStmt_13 | DefineStmt_14 DefineStmt_1 : CREATE AGGREGATE func_name aggr_args definition DefineStmt_2 : CREATE AGGREGATE func_name old_aggr_definition DefineStmt_3 : CREATE OPERATOR any_operator definition DefineStmt_4 : CREATE TYPE any_name definition DefineStmt_5 : CREATE TYPE any_name DefineStmt_6 : CREATE TYPE any_name AS OptTableFuncElementList DefineStmt_7 : CREATE TYPE any_name AS ENUM opt_enum_val_list DefineStmt_8 : CREATE TYPE any_name AS RANGE definition DefineStmt_9 : CREATE TEXT SEARCH PARSER any_name definition DefineStmt_10: CREATE TEXT SEARCH DICTIONARY any_name definition DefineStmt_11: CREATE TEXT SEARCH TEMPLATE any_name definition DefineStmt_12: CREATE TEXT SEARCH CONFIGURATION any_name definition DefineStmt_13: CREATE COLLATION any_name definition DefineStmt_14: CREATE COLLATION any_name FROM any_name definition: def_list def_list: def_elem+ % / ~ ~ / def_elem : def_elem_1 | def_elem_2 def_elem_1: ColLabel def_arg def_elem_2: ColLabel # Note: any simple identifier will be returned as a type name! 
def_arg : def_arg_1 | def_arg_2 | def_arg_3 | def_arg_4 | def_arg_5 def_arg_1: func_type def_arg_2: reserved_keyword def_arg_3: qual_all_Op def_arg_4: NumericOnly def_arg_5: Sconst aggr_args : aggr_args_1 | aggr_args_2 aggr_args_1: type_list aggr_args_2: old_aggr_definition: old_aggr_list old_aggr_list: old_aggr_elem+ % / ~ ~ / # Must use IDENT here to avoid reduce/reduce conflicts; fortunately none of # the item names needed in old aggregate definitions are likely to become # SQL keywords. old_aggr_elem: IDENT def_arg opt_enum_val_list: enum_val_list? enum_val_list: Sconst+ % / ~ ~ / ############################################################################# # # ALTER TYPE enumtype ADD ... # ############################################################################# AlterEnumStmt : AlterEnumStmt_1 | AlterEnumStmt_2 | AlterEnumStmt_3 AlterEnumStmt_1: ALTER TYPE any_name ADD VALUE Sconst AlterEnumStmt_2: ALTER TYPE any_name ADD VALUE Sconst BEFORE Sconst AlterEnumStmt_3: ALTER TYPE any_name ADD VALUE Sconst AFTER Sconst ############################################################################# # # QUERIES : # CREATE OPERATOR CLASS ... # CREATE OPERATOR FAMILY ... # ALTER OPERATOR FAMILY ... # DROP OPERATOR CLASS ... # DROP OPERATOR FAMILY ... # ############################################################################# CreateOpClassStmt: CREATE OPERATOR CLASS any_name opt_default FOR TYPE Typename USING access_method opt_opfamily AS opclass_item_list opclass_item_list: opclass_item+ % / ~ ~ / opclass_item : opclass_item_1 | opclass_item_2 | opclass_item_3 | opclass_item_4 | opclass_item_5 opclass_item_1: OPERATOR Iconst any_operator opclass_purpose opt_recheck opclass_item_2: OPERATOR Iconst any_operator oper_argtypes opclass_purpose opt_recheck opclass_item_3: FUNCTION Iconst func_name func_args opclass_item_4: FUNCTION Iconst type_list func_name func_args opclass_item_5: STORAGE Typename opt_default: DEFAULT? opt_opfamily: ( FAMILY any_name )? opclass_purpose : ( opclass_purpose_1 | opclass_purpose_2 )? opclass_purpose_1: FOR SEARCH opclass_purpose_2: FOR ORDER BY any_name # RECHECK no longer does anything in opclass definitions, # but we still accept it to ease porting of old database # dumps. opt_recheck: RECHECK? CreateOpFamilyStmt: CREATE OPERATOR FAMILY any_name USING access_method AlterOpFamilyStmt : AlterOpFamilyStmt_1 | AlterOpFamilyStmt_2 AlterOpFamilyStmt_1: ALTER OPERATOR FAMILY any_name USING access_method ADD opclass_item_list AlterOpFamilyStmt_2: ALTER OPERATOR FAMILY any_name USING access_method DROP opclass_drop_list opclass_drop_list: opclass_drop+ % / ~ ~ / opclass_drop : opclass_drop_1 | opclass_drop_2 opclass_drop_1: OPERATOR Iconst type_list opclass_drop_2: FUNCTION Iconst type_list DropOpClassStmt : DropOpClassStmt_1 | DropOpClassStmt_2 DropOpClassStmt_1: DROP OPERATOR CLASS any_name USING access_method opt_drop_behavior DropOpClassStmt_2: DROP OPERATOR CLASS IF EXISTS any_name USING access_method opt_drop_behavior DropOpFamilyStmt : DropOpFamilyStmt_1 | DropOpFamilyStmt_2 DropOpFamilyStmt_1: DROP OPERATOR FAMILY any_name USING access_method opt_drop_behavior DropOpFamilyStmt_2: DROP OPERATOR FAMILY IF EXISTS any_name USING access_method opt_drop_behavior ############################################################################# # # QUERY: # # DROP OWNED BY username [, username ...] [ RESTRICT | CASCADE ] # REASSIGN OWNED BY username [, username ...] 
TO username # ############################################################################# DropOwnedStmt: DROP OWNED BY name_list opt_drop_behavior ReassignOwnedStmt: REASSIGN OWNED BY name_list TO name ############################################################################# # # QUERY: # # DROP itemtype [ IF EXISTS ] itemname [, itemname ...] # [ RESTRICT | CASCADE ] # ############################################################################# DropStmt : DropStmt_1 | DropStmt_2 | DropStmt_3 | DropStmt_4 DropStmt_1: DROP drop_type IF EXISTS any_name_list opt_drop_behavior DropStmt_2: DROP drop_type any_name_list opt_drop_behavior DropStmt_3: DROP INDEX CONCURRENTLY any_name_list opt_drop_behavior DropStmt_4: DROP INDEX CONCURRENTLY IF EXISTS any_name_list opt_drop_behavior drop_type : drop_type_1 | drop_type_2 | drop_type_3 | drop_type_4 | drop_type_5 | drop_type_6 | drop_type_7 | drop_type_8 | drop_type_9 | drop_type_10 | drop_type_11 | drop_type_12 | drop_type_13 | drop_type_14 | drop_type_15 | drop_type_16 drop_type_1 : TABLE drop_type_2 : SEQUENCE drop_type_3 : VIEW drop_type_4 : INDEX drop_type_5 : FOREIGN TABLE drop_type_6 : EVENT_TRIGGER drop_type_7 : TYPE drop_type_8 : DOMAIN drop_type_9 : COLLATION drop_type_10: CONVERSION drop_type_11: SCHEMA drop_type_12: EXTENSION drop_type_13: TEXT SEARCH PARSER drop_type_14: TEXT SEARCH DICTIONARY drop_type_15: TEXT SEARCH TEMPLATE drop_type_16: TEXT SEARCH CONFIGURATION any_name_list: any_name+ % / ~ ~ / any_name : any_name_1 | any_name_2 any_name_1: ColId any_name_2: ColId attrs attrs : attrs_1 | attrs_2 attrs_1: attr_name attrs_2: 2+ % / ~ ~ / ############################################################################# # # QUERY: # truncate table relname1, relname2, ... # ############################################################################# TruncateStmt: TRUNCATE opt_table relation_expr_list opt_restart_seqs opt_drop_behavior opt_restart_seqs : ( opt_restart_seqs_1 | opt_restart_seqs_2 )? opt_restart_seqs_1: CONTINUE IDENTITY opt_restart_seqs_2: RESTART IDENTITY ############################################################################# # # The COMMENT ON statement can take different forms based upon the type of # the object associated with the comment. The form of the statement is: # # COMMENT ON [ [ DATABASE | DOMAIN | INDEX | SEQUENCE | TABLE | TYPE | VIEW | # COLLATION | CONVERSION | LANGUAGE | OPERATOR CLASS | # LARGE OBJECT | CAST | COLUMN | SCHEMA | TABLESPACE | # EXTENSION | ROLE | TEXT SEARCH PARSER | # TEXT SEARCH DICTIONARY | TEXT SEARCH TEMPLATE | # TEXT SEARCH CONFIGURATION | FOREIGN TABLE | # FOREIGN DATA WRAPPER | SERVER | EVENT TRIGGER ] | # AGGREGATE (arg1, ...) | # FUNCTION (arg1, arg2, ...) 
| # OPERATOR <op> (leftoperand_typ, rightoperand_typ) |
#   TRIGGER <triggername> ON <relname> |
#   CONSTRAINT <constraintname> ON <tablename> |
#   RULE <rulename> ON <relname> ]
# IS 'text'
#
#############################################################################

CommentStmt :
    CommentStmt_1 | CommentStmt_2 | CommentStmt_3 | CommentStmt_4 |
    CommentStmt_5 | CommentStmt_6 | CommentStmt_7 | CommentStmt_8 |
    CommentStmt_9 | CommentStmt_10 | CommentStmt_11 | CommentStmt_12 |
    CommentStmt_13 | CommentStmt_14 | CommentStmt_15 | CommentStmt_16 |
    CommentStmt_17

CommentStmt_1 : COMMENT ON comment_type any_name IS comment_text
CommentStmt_2 : COMMENT ON AGGREGATE func_name aggr_args IS comment_text
CommentStmt_3 : COMMENT ON FUNCTION func_name func_args IS comment_text
CommentStmt_4 : COMMENT ON OPERATOR any_operator oper_argtypes IS comment_text
CommentStmt_5 : COMMENT ON CONSTRAINT name ON any_name IS comment_text
CommentStmt_6 : COMMENT ON RULE name ON any_name IS comment_text
CommentStmt_7 : COMMENT ON RULE name IS comment_text
CommentStmt_8 : COMMENT ON TRIGGER name ON any_name IS comment_text
CommentStmt_9 : COMMENT ON OPERATOR CLASS any_name USING access_method IS comment_text
CommentStmt_10: COMMENT ON OPERATOR FAMILY any_name USING access_method IS comment_text
CommentStmt_11: COMMENT ON LARGE OBJECT NumericOnly IS comment_text
CommentStmt_12: COMMENT ON CAST Typename AS Typename IS comment_text
CommentStmt_13: COMMENT ON opt_procedural LANGUAGE any_name IS comment_text
CommentStmt_14: COMMENT ON TEXT SEARCH PARSER any_name IS comment_text
CommentStmt_15: COMMENT ON TEXT SEARCH DICTIONARY any_name IS comment_text
CommentStmt_16: COMMENT ON TEXT SEARCH TEMPLATE any_name IS comment_text
CommentStmt_17: COMMENT ON TEXT SEARCH CONFIGURATION any_name IS comment_text

comment_type :
    comment_type_1 | comment_type_2 | comment_type_3 | comment_type_4 |
    comment_type_5 | comment_type_6 | comment_type_7 | comment_type_8 |
    comment_type_9 | comment_type_10 | comment_type_11 | comment_type_12 |
    comment_type_13 | comment_type_14 | comment_type_15 | comment_type_16 |
    comment_type_17 | comment_type_18

comment_type_1 : COLUMN
comment_type_2 : DATABASE
comment_type_3 : SCHEMA
comment_type_4 : INDEX
comment_type_5 : SEQUENCE
comment_type_6 : TABLE
comment_type_7 : DOMAIN
comment_type_8 : TYPE
comment_type_9 : VIEW
comment_type_10: COLLATION
comment_type_11: CONVERSION
comment_type_12: TABLESPACE
comment_type_13: EXTENSION
comment_type_14: ROLE
comment_type_15: FOREIGN TABLE
comment_type_16: SERVER
comment_type_17: FOREIGN DATA WRAPPER
comment_type_18: EVENT TRIGGER

comment_text : comment_text_1 | comment_text_2
comment_text_1: Sconst
comment_text_2: NULL

#############################################################################
#
# SECURITY LABEL [FOR <provider>] ON <object> IS