relation-algebra-v.1.7.9/.github/workflows/build-relation-algebra.yml
name: CI
on:
  push:
    branches:
      - master
      - v8.16
  pull_request:
    branches:
      - '**'
jobs:
  build:
    runs-on: ubuntu-latest  # container actions require GNU/Linux
    strategy:
      matrix:
        image:
          - 'coqorg/coq:dev'
          - 'coqorg/coq:8.16'
      fail-fast: false
    steps:
      - uses: actions/checkout@v2
      - uses: coq-community/docker-coq-action@v1
        with:
          opam_file: 'coq-relation-algebra.opam'
          custom_image: ${{ matrix.image }}
# See also:
# https://github.com/coq-community/docker-coq-action#readme
# https://github.com/erikmd/docker-coq-github-action-demo

relation-algebra-v.1.7.9/CHANGELOG
== RelationAlgebra 1.7.9 (2023, March 17th) ==
- compatibility with Coq 8.17

== RelationAlgebra 1.7.8 (2022, September 8th) ==
- compatibility with Coq 8.16
- minor fixes, including w.r.t. licensing issues

== RelationAlgebra 1.7.7 (2022, March 22nd) ==
- compatibility with Coq 8.15
- model of setoid-preserving heterogeneous relations

== RelationAlgebra 1.7.6 (2021, November 11th) ==
- compatibility with Coq 8.14
- support for idempotency of \cup and \cap when using coq-aac-tactics

== RelationAlgebra 1.7.5 (2020, December 29th) ==
- compatibility with Coq 8.13

== RelationAlgebra 1.7.4 (2020, October 20th) ==
- compatibility with Coq 8.11 & 8.12
- optional dependency on coq-aac-tactics (module [rewriting_aac])
- a few compilation warnings removed

== RelationAlgebra 1.7.3 (2020, February 27th) ==
- compatibility with Coq 8.11

== RelationAlgebra 1.7.2 (2020, February 27th) ==
- compatibility with Coq 8.10

== RelationAlgebra 1.7.1 (2019, February 8th) ==
- compatibility with Coq 8.9

== RelationAlgebra 1.7 (2018, December 17th) ==
- compatibility with Coq 8.8.2
  (intermediate versions for Coq 8.6 and 8.7 can be found on github)
- unicode notations (i.e., "x ≡ y", "x ≦ y", "x⋅y")
- support heterogeneous relations on large types (outside [Set])
- [fhrel]: heterogeneous relations between finite types
  (available if coq-mathcomp-ssreflect is present [optional dependency])

== RelationAlgebra 1.6 (2016, May 10th) ==
- minor fix to make it compile with Coq 8.5pl1

== RelationAlgebra 1.5 (2016, March 4th) ==
- few additions to the [relalg] and [lattice] modules

== RelationAlgebra 1.4 (2016, February 9th) ==
- distribution through opam, for Coq 8.5.0
- new module [relalg] for standard relation algebra facts and definitions

== RelationAlgebra 1.3 (2015, July 27th) ==
- distribution through opam, for Coq 8.4pl6
- fixing some notation levels in [sups], [sums]

== RelationAlgebra 1.2 (2013, February 25th) ==
- make it compile with Coq 8.4pl4

== RelationAlgebra 1.1 (2013, February 15th) ==
- [paterson]: proof of equivalence of two flowchart schemes, due to Paterson
- [compiler_opts]: added the two missing compiler optimisations
- [imp]: nicer presentation, using a partially shallow embedding
- [move]: tools to easily move subterms inside a product
- [lset]: fixed a
typo == RelationAlgebra 1.0 (2012, December 16th) == First release of the library relation-algebra-v.1.7.9/COPYING000066400000000000000000001045131440504774100162510ustar00rootroot00000000000000 GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. 
To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. 
However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. 
You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. 
If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. 
Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. 
Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". 
A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. 
Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: Copyright (C) This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box". You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see . The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read . 
relation-algebra-v.1.7.9/COPYING.LESSER000066400000000000000000000167431440504774100172540ustar00rootroot00000000000000 GNU LESSER GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. This version of the GNU Lesser General Public License incorporates the terms and conditions of version 3 of the GNU General Public License, supplemented by the additional permissions listed below. 0. Additional Definitions. As used herein, "this License" refers to version 3 of the GNU Lesser General Public License, and the "GNU GPL" refers to version 3 of the GNU General Public License. "The Library" refers to a covered work governed by this License, other than an Application or a Combined Work as defined below. An "Application" is any work that makes use of an interface provided by the Library, but which is not otherwise based on the Library. Defining a subclass of a class defined by the Library is deemed a mode of using an interface provided by the Library. A "Combined Work" is a work produced by combining or linking an Application with the Library. The particular version of the Library with which the Combined Work was made is also called the "Linked Version". The "Minimal Corresponding Source" for a Combined Work means the Corresponding Source for the Combined Work, excluding any source code for portions of the Combined Work that, considered in isolation, are based on the Application, and not on the Linked Version. The "Corresponding Application Code" for a Combined Work means the object code and/or source code for the Application, including any data and utility programs needed for reproducing the Combined Work from the Application, but excluding the System Libraries of the Combined Work. 1. Exception to Section 3 of the GNU GPL. You may convey a covered work under sections 3 and 4 of this License without being bound by section 3 of the GNU GPL. 2. Conveying Modified Versions. If you modify a copy of the Library, and, in your modifications, a facility refers to a function or data to be supplied by an Application that uses the facility (other than as an argument passed when the facility is invoked), then you may convey a copy of the modified version: a) under this License, provided that you make a good faith effort to ensure that, in the event an Application does not supply the function or data, the facility still operates, and performs whatever part of its purpose remains meaningful, or b) under the GNU GPL, with none of the additional permissions of this License applicable to that copy. 3. Object Code Incorporating Material from Library Header Files. The object code form of an Application may incorporate material from a header file that is part of the Library. You may convey such object code under terms of your choice, provided that, if the incorporated material is not limited to numerical parameters, data structure layouts and accessors, or small macros, inline functions and templates (ten or fewer lines in length), you do both of the following: a) Give prominent notice with each copy of the object code that the Library is used in it and that the Library and its use are covered by this License. b) Accompany the object code with a copy of the GNU GPL and this license document. 4. Combined Works. 
You may convey a Combined Work under terms of your choice that, taken together, effectively do not restrict modification of the portions of the Library contained in the Combined Work and reverse engineering for debugging such modifications, if you also do each of the following: a) Give prominent notice with each copy of the Combined Work that the Library is used in it and that the Library and its use are covered by this License. b) Accompany the Combined Work with a copy of the GNU GPL and this license document. c) For a Combined Work that displays copyright notices during execution, include the copyright notice for the Library among these notices, as well as a reference directing the user to the copies of the GNU GPL and this license document. d) Do one of the following: 0) Convey the Minimal Corresponding Source under the terms of this License, and the Corresponding Application Code in a form suitable for, and under terms that permit, the user to recombine or relink the Application with a modified version of the Linked Version to produce a modified Combined Work, in the manner specified by section 6 of the GNU GPL for conveying Corresponding Source. 1) Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (a) uses at run time a copy of the Library already present on the user's computer system, and (b) will operate properly with a modified version of the Library that is interface-compatible with the Linked Version. e) Provide Installation Information, but only if you would otherwise be required to provide such information under section 6 of the GNU GPL, and only to the extent that such information is necessary to install and execute a modified version of the Combined Work produced by recombining or relinking the Application with a modified version of the Linked Version. (If you use option 4d0, the Installation Information must accompany the Minimal Corresponding Source and Corresponding Application Code. If you use option 4d1, you must provide the Installation Information in the manner specified by section 6 of the GNU GPL for conveying Corresponding Source.) 5. Combined Libraries. You may place library facilities that are a work based on the Library side by side in a single library together with other library facilities that are not Applications and are not covered by this License, and convey such a combined library under terms of your choice, if you do both of the following: a) Accompany the combined library with a copy of the same work based on the Library, uncombined with any other library facilities, conveyed under the terms of this License. b) Give prominent notice with the combined library that part of it is a work based on the Library, and explaining where to find the accompanying uncombined form of the same work. 6. Revised Versions of the GNU Lesser General Public License. The Free Software Foundation may publish revised and/or new versions of the GNU Lesser General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Library as you received it specifies that a certain numbered version of the GNU Lesser General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that published version or of any later version published by the Free Software Foundation. 
If the Library as you received it does not specify a version number of the GNU Lesser General Public License, you may choose any version of the GNU Lesser General Public License ever published by the Free Software Foundation. If the Library as you received it specifies that a proxy can decide whether future versions of the GNU Lesser General Public License shall apply, that proxy's public statement of acceptance of any version is permanent authorization for you to choose that version for the Library. relation-algebra-v.1.7.9/Makefile000066400000000000000000000027771440504774100166670ustar00rootroot00000000000000-include Makefile.coq Makefile.coq: $(COQBIN)coq_makefile -f _CoqProject -o Makefile.coq cleanall:: clean rm -f Makefile.coq* depgraph.* */*.d depgraph.dot:: @echo building dependency graph @echo "digraph {" > $@ @ls -1 theories/*.v | grep -v theories/all |sed 's#theories/\(.*\)\.v#\1 [URL=".\/html\/RelationAlgebra.\1.html"];#g' >> $@ @coqdep -f _CoqProject -dyndep no -m src/META.coq-relation-algebra \ | grep vio \ | sed 's#: [^ ]*\.v #->{#g' \ | sed 's#src/META.coq-relation-algebra[ ]*##g' \ | sed 's/\.vio//g' \ | sed 's/[ ]*$$/};/g' \ | sed 's/ /;/g' \ | sed 's#theories/##g' \ | sed 's#examples/.*##g' \ | sed 's#all.*##g' \ >> $@; @echo "}" >> $@ %.svg: %.dot tred $< | dot -Tsvg -o $@ ## used to use [coqdep -dumpgraph] as follows # coqdep theories/*.v -dumpgraph depgraph.dot 1>/dev/null 2>/dev/null # sed -i 's/\[label=\"\([^"]*\)\"]/[label="\1";URL=".\/html\/RelationAlgebra.\1.html"]/g' depgraph.dot # dot depgraph.dot -Tsvg -o depgraph.svg enable-ssr:: sed -i '/theories\/fhrel\.v/d' _CoqProject echo "theories/fhrel.v" >>_CoqProject $(COQBIN)coq_makefile -f _CoqProject -o Makefile.coq disable-ssr:: sed -i '/theories\/fhrel\.v/d' _CoqProject $(COQBIN)coq_makefile -f _CoqProject -o Makefile.coq enable-aac:: sed -i '/theories\/rewriting_aac\.v/d' _CoqProject echo "theories/rewriting_aac.v" >>_CoqProject $(COQBIN)coq_makefile -f _CoqProject -o Makefile.coq disable-aac:: sed -i '/theories\/rewriting_aac\.v/d' _CoqProject $(COQBIN)coq_makefile -f _CoqProject -o Makefile.coq relation-algebra-v.1.7.9/README.md000066400000000000000000000103761440504774100165000ustar00rootroot00000000000000 # Relation Algebra for Coq Webpage of the project: http://perso.ens-lyon.fr/damien.pous/ra ## DESCRIPTION This Coq development is a modular library about relation algebra: those algebras admitting heterogeneous binary relations as a model, ranging from partially ordered monoid to residuated Kleene allegories and Kleene algebra with tests (KAT). This library presents this large family of algebraic theories in a modular way; it includes several high-level reflexive tactics: - [kat], which decides the (in)equational theory of KAT; - [hkat], which decides the Hoare (in)equational theory of KAT (i.e., KAT with Hoare hypotheses); - [ka], which decides the (in)equational theory of KA; - [ra], a normalisation based partial decision procedure for Relation algebra; - [ra_normalise], the underlying normalisation tactic. The tactic for Kleene algebra with tests is obtained by reflection, using a simple bisimulation-based algorithm working on the appropriate automaton of partial derivatives, for the generalised regular expressions corresponding to KAT. Combined with a formalisation of KA completeness, and then of KAT completeness on top of it, this provides entirely axiom-free decision procedures for all model of these theories (including relations, languages, traces, min-max and max-plus algebras, etc...). 
Algebraic structures are generalised in a categorical way: composition is
typed like in categories, allowing us to reach "heterogeneous" models like
rectangular matrices or heterogeneous binary relations, where most operations
are partial. We exploit untyping theorems to avoid polluting decision
algorithms with these additional typing constraints.

## APPLICATIONS

We give a few examples of applications of this library to program verification:

- a formalisation of a paper by Dexter Kozen and Maria-Cristina Patron,
  showing how to certify compiler optimisations using KAT.

- a formalisation of the IMP programming language, followed by:
  1/ some simple program equivalences that become straightforward to prove
     using our tactics;
  2/ a formalisation of Hoare logic rules for partial correctness in the above
     language: all rules except the assignment one are proved by a single call
     to the hkat tactic.

- a proof of the equivalence of two flowchart schemes, due to Paterson.
  The informal paper proof takes one page; Allegra Angus and Dexter Kozen gave
  a six-page proof using KAT; our Coq proof is about 100 lines.

## INSTALLATION

The easiest way to install this library is via OPAM. For the current stable
release of Coq, the library can be installed directly through the `released`
repository:

```
opam repo add coq-released https://coq.inria.fr/opam/released
opam install coq-relation-algebra
```

Otherwise, use the provided opam file using `opam pin add .` (from the project
directory).

To compile manually, use `./configure --enable-ssr` to enable building the
finite types model (requires `coq-mathcomp-ssreflect`). Also use `--enable-aac`
to enable building the bridge with AAC rewriting tactics (requires
`coq-aac-tactics`). Then compile using `make` and install using `make install`.

## DOCUMENTATION

Each module is documented, see index.html or
http://perso.ens-lyon.fr/damien.pous/ra for:
- a description of each module's role and dependencies
- a list of the available user-end tactics
- the coqdoc generated documentation.

## LICENSE

This library is free software: you can redistribute it and/or modify it under
the terms of the GNU Lesser General Public License as published by the Free
Software Foundation, either version 3 of the License, or (at your option) any
later version.

This library is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more
details.

You should have received a copy of the GNU Lesser General Public License along
with this library. If not, see <http://www.gnu.org/licenses/>.

## AUTHORS

* Main author
  - Damien Pous (2012-), CNRS - LIP, ENS Lyon (UMR 5668), France
* Additional authors
  - Christian Doczkal (2018-), CNRS - LIP, ENS Lyon (UMR 5668), France
  - Insa Stucke (2015-2016), Dpt of CS, University of Kiel, Germany
  - Coq development team (2013-)

relation-algebra-v.1.7.9/TODO.txt
- remove useless ra_ prefixes in src/ files
- some tutorial file
- dune support ?
- exploit universe polymorphism (see file rel.v)
- rebase lattice.ops on top of a setoid structure ?
- complete lattice operations / theory ?
- more efficient decision procedure for KAT - KAPT - tactics for Kleene algebra with top/converse - tactics for residuals relation-algebra-v.1.7.9/_CoqProject000066400000000000000000000036771440504774100173620ustar00rootroot00000000000000# Suppress harmless compiler warnings we can't do anything about. -arg -w -arg -projection-no-head-constant -arg -w -arg -notation-overridden -arg -w -arg -redundant-canonical-projection -arg -w -arg -future-coercion-class-field -R theories/ RelationAlgebra -I src/ theories/common.v theories/comparisons.v theories/positives.v theories/ordinal.v theories/denum.v theories/pair.v theories/powerfix.v theories/level.v theories/lattice.v theories/monoid.v theories/kleene.v theories/factors.v theories/kat.v theories/rewriting.v theories/move.v theories/lsyntax.v theories/syntax.v theories/normalisation.v theories/prop.v theories/boolean.v theories/rel.v theories/srel.v theories/lang.v theories/lset.v theories/sups.v theories/sums.v theories/matrix.v theories/matrix_ext.v theories/untyping.v theories/regex.v theories/rmx.v theories/bmx.v theories/dfa.v theories/nfa.v theories/ka_completeness.v theories/atoms.v theories/traces.v theories/glang.v theories/gregex.v theories/kat_completeness.v theories/ugregex.v theories/ugregex_dec.v theories/kat_untyping.v theories/kat_reification.v theories/kat_tac.v theories/relalg.v theories/all.v # examples of applications examples/compiler_opts.v examples/imp.v examples/paterson.v # plugin files ## shared utilities src/common.ml src/common.mli src/plugins.mlpack ## the various plugins are packed separately: they don't load the same Coq refs src/fold.ml src/fold.mli src/fold_g.mlg src/fold_g.mli src/packed_fold.mlpack src/mrewrite.ml src/mrewrite.mli src/mrewrite_g.mlg src/mrewrite_g.mli src/packed_mrewrite.mlpack src/reification.ml src/reification.mli src/reification_g.mlg src/reification_g.mli src/packed_reification.mlpack src/kat_dec.ml src/kat_dec.mli src/kat_reification.ml src/kat_reification.mli src/kat_reification_g.mlg src/kat_reification_g.mli src/packed_kat.mlpack src/META.coq-relation-algebra # optional theory files are set below, via [configure] script # theories/fhrel.v # theories/rewriting_aac.v relation-algebra-v.1.7.9/configure000077500000000000000000000010201440504774100171120ustar00rootroot00000000000000#!/bin/sh while [ "$1" != "" ]; do case "$1" in --enable-ssr) echo "theories/fhrel.v" >>_CoqProject ;; --disable-ssr) sed --in-place '/theories\/fhrel\.v/d' _CoqProject || echo "warning: sed failed to remove fhrel.v" ;; --enable-aac) echo "theories/rewriting_aac.v" >>_CoqProject ;; --disable-aac) sed --in-place '/theories\/rewriting_aac\.v/d' _CoqProject || echo "warning: sed failed to remove rewriting_aac.v" ;; *) echo "unknown option" ;; esac shift done relation-algebra-v.1.7.9/coq-relation-algebra.opam000066400000000000000000000025111440504774100220570ustar00rootroot00000000000000opam-version: "2.0" synopsis: "Relation Algebra and KAT in Coq" name: "coq-relation-algebra" maintainer: "Damien Pous " #TO SET BEFORE RELEASE version: "dev" homepage: "http://perso.ens-lyon.fr/damien.pous/ra/" dev-repo: "git+https://github.com/damien-pous/relation-algebra.git" bug-reports: "https://github.com/damien-pous/relation-algebra/issues" license: "LGPL-3.0-or-later" depends: [ "ocaml" #TO SET BEFORE RELEASE "coq" {>= "8.17"} # "coq" {>= "8.16" & < "8.17~"} #TO REMOVE BEFORE RELEASE (TO MAKE THEM OPTIONAL) "coq-mathcomp-ssreflect" "coq-aac-tactics" ] #TO SET BEFORE RELEASE # depopts: [ "coq-mathcomp-ssreflect" 
"coq-aac-tactics" ] build: [ ["sh" "-exc" "./configure --%{coq-mathcomp-ssreflect:enable}%-ssr --%{coq-aac-tactics:enable}%-aac"] [make "-j%{jobs}%"] ] install: [make "install"] tags: [ "keyword:relation algebra" "keyword:kleene algebra with tests" "keyword:kat" "keyword:allegories" "keyword:residuated structures" "keyword:automata" "keyword:regular expressions" "keyword:matrices" "category:Mathematics/Algebra" "logpath:RelationAlgebra" ] authors: [ "Damien Pous " "Christian Doczkal " ] ## CHECK BEFORE RELEASING: # CHANGELOG # README.md # webpage # above updates to the opam file in coq-opam-archive relation-algebra-v.1.7.9/description000066400000000000000000000011651440504774100174630ustar00rootroot00000000000000Name: RelationAlgebra Title: Relation Algebra, Kleene algegra, and KAT Url: http://perso.ens-lyon.fr/damien.pous/ra/ Description: A modular library about relation algebra, from idempotent semirings to residuated Kleene allegories, including a decision tactic for Kleene algebra with Tests (KAT). Keywords: relation algebra, Kleene algebra with tests, KAT, allegories, residuated structures, automata, regular expressions, matrices Category: Mathematics/Algebra Author: Damien Pous Email: Damien.Pous@ens-lyon.fr Homepage: http://perso.ens-lyon.fr/damien.pous/ Institution: CNRS, LIP - ENS Lyon - UMR 5668 Require: License: LGPL relation-algebra-v.1.7.9/examples/000077500000000000000000000000001440504774100170305ustar00rootroot00000000000000relation-algebra-v.1.7.9/examples/compiler_opts.v000066400000000000000000000140411440504774100220760ustar00rootroot00000000000000(** * compiler_opts: certifying compiler optimisations *) (** To illustrate some usage of the kat and hkat tactics, we formalise most of the compiler optimisations studied in the following paper: Dexter Kozen and Maria-Cristina Patron. Certification of compiler optimizations using Kleene algebra with tests. In Proc. 1st Int. Conf. Computational Logic (CL2000), Vol. 1861 of LNAI, pages 568-582, July 2000. Springer-Verlag. Most goals are solved with one single call to [kat] or [hkat]. The remaining cases correspond to situations where one has to exploit permutations of some Kleene variables (the Horn theory of KA with such commutation hypotheses is undecidable). *) From RelationAlgebra Require Import kat normalisation rewriting kat_tac. Set Implicit Arguments. (** in this module, we prefer the ";" notation for composition *) Infix " ;" := (dot _ _ _) (left associativity, at level 40): ra_terms. (** ** preliminary lemmas *) Lemma lemma_1 `{L: monoid.laws} `{Hl: BKA ≪ l} n (x y: X n n): x;y ≡ x;y;x -> x;y^* ≡ x;(y;x)^*. Proof. intro H. apply antisym. apply str_ind_r'. ka. rewrite str_dot, <-dotA. rewrite H at 2. ka. rewrite str_dot. apply str_ind_l'. ka. rewrite str_unfold_l. ra_normalise. rewrite <-H. ka. Qed. Lemma lemma_1' `{L: monoid.laws} `{Hl: BKA ≪ l} n (x y: X n n): y;x ≡ x;(y;x) -> y^*;x ≡ (x;y)^*;x. Proof. monoid.dual @lemma_1. Qed. Lemma lemma_1'' `{L: monoid.laws} `{Hl: BKA ≪ l} n (p q r: X n n): p;q ≡ q;p -> p;r ≡ r -> (p;q)^*;r ≡ q^*;r. Proof. intros Hpq Hpr. apply antisym. rewrite Hpq. apply str_ind_l'. ka. apply str_move in Hpq. mrewrite Hpq. mrewrite Hpr. ka. apply str_ind_l'. ka. rewrite <-str_snoc at 2. rewrite Hpq at 2 3. mrewrite Hpr. ka. Qed. Lemma lemma_2 `{L: laws} n b (q: X n n): [b];q ≡ q;[b] -> [b];q^* ≡ [b];(q;[b])^*. Proof. hkat. Qed. (** ** 3.1 Deadcode elimination *) Lemma opti_3_1_a `{L: laws} n (a: tst n) (p q: X n n): p ≡ p;[!a] -> p;([a];q+[!a]) ≡ p. Proof. hkat. Qed. 
Lemma opti_3_1_b `{L: laws} n (a: tst n) (p q: X n n): p ≡ p;[!a] -> p;([a];q)^*;[!a] ≡ p. Proof. hkat. Qed. (** ** 3.2 Common sub-expression elimination *) Lemma opti_3_2 `{L: laws} n (a b: tst n) (p q r w: X n n): p ≡ p;[a] -> [a];q ≡ [a];q;[b] -> [b];r ≡ [b] -> r ≡ w;r -> q;w ≡ w -> p;q ≡ p;r. Proof. intros Hpa Haq Hbr Hr Hw. rewrite Hr, <-Hw. mrewrite <-Hr. hkat. Qed. (** ** 3.3 Copy propagation *) Lemma opti_3_3 `{L: laws} n (a b: tst n) (p q r s v w: X n n): q ≡ q;[a] -> [a];r ≡ [a];r;[b] -> [b];s ≡ [b] -> s ≡ w;s -> r;w ≡ w -> s;v ≡ v;s -> q;v ≡ v -> p;q;r;v ≡ p;s;v. Proof. intros Hqa Har Hbs Hs Hw Hsv Hv. mrewrite Hsv. rewrite <-Hv at 2. mrewrite <-Hsv. rewrite Hs, <-Hw. mrewrite <-Hs. hkat. Qed. (** ** 3.4 Loop Hoisting *) Lemma opti_3_4i `{L: laws} n (a b: tst n) (p q r s u w: X n n): u;[b] ≡ u -> [b];u ≡ [b] -> [b];q ≡ q;[b] -> [b];s ≡ s;[b] -> [b];r ≡ r;[b] -> [a];w ≡ w;[a] -> u;r ≡ q -> u;w ≡ w -> q;s;w ≡ w;q;s -> p;u;([a];r;s)^*;[!a];w ≡ p;([a];q;s)^*;[!a];w. Proof. intros ? ? ? ? ? ? Hur Huw Hqsw. transitivity (p;u;[b];([a];[b];(u;r);s)^*;[!a];w). hkat. rewrite Hur. transitivity (p;u;([a];q;s)^*;w;[!a]). hkat. assert (E: w;([a];q;s)^* ≡ ([a];q;s)^*;w) by (apply str_move; mrewrite Hqsw; hkat). mrewrite <-E. mrewrite Huw. mrewrite E. hkat. Qed. Lemma opti_3_4ii `{L: laws} n (a: tst n) (p q u w: X n n): u ≡ w;u -> u;w ≡ w -> w;p;q ≡ p;q;w -> w;[a] ≡ [a];w -> ([a];u;p;q)^*;[!a];u ≡ ([a];p;q)^*;[!a];u. Proof. intros Hwu Huw Hpq Hw. rewrite Hwu at 1 2. transitivity (w;([a];u;(p;q;w))^*;[!a];u). hkat. rewrite <-Hpq. mrewrite Huw. mrewrite Hpq. rewrite <-lemma_1. rewrite (str_move (z:=[a];p;q)). rewrite Hwu at 2. hkat. mrewrite <-Hpq. hkat. mrewrite <-Hpq. rewrite <-Huw at 1. rewrite Hwu. mrewrite Huw. hkat. (* intros Hwu Huw Hpq Hw. rewrite Hwu at 1 2. transitivity (w;([a];u;(p;q;w))^*;[!a];u). hkat. *) (* rewrite <-Hpq. mrewrite Huw. mrewrite Hpq. transitivity ((w;[a];p;q)^*;[!a];(w;u)). hkat. *) (* rewrite <-3dotA, lemma_1'', <-Hwu. ra. mrewrite <-Hpq. hkat. rewrite <-Hwu at 1. hkat. *) Qed. (** ** 3.5 Induction variable elimination *) Lemma opti_3_5 `{L: laws} n (a b c: tst n) (p q r: X n n): q ≡ q;[b] -> [b] ≡ [b];q -> [c];r ≡ [c];r;[b] -> [b];p ≡ [b];p;[c] -> [c];q ≡ [c];r -> q;([a];p;q)^* ≡ q;([a];p;r)^*. Proof. intros Hq Hb Hr Hbp Hcq. assert (E: [b];p;q ≡ [b];p;r) by (rewrite Hbp; mrewrite Hcq; hkat). transitivity (q;([a];([b];p;q))^*;[b]). hkat. rewrite E. hkat. Qed. (** ** (3.6 and 3.7 are void) *) (** ** 3.8 Loop unrolling *) Lemma lemma_3 `{L: monoid.laws} `{Hl: BKA ≪ l} n (u: X n n): u^* ≡ (1+u);(u;u)^*. Proof. ka. Qed. Lemma opti_3_8 `{L: laws} n a (p: X n n): ([a];p)^*;[!a] ≡ ([a];p;([a];p+[!a]))^*;[!a]. Proof. kat. Qed. (** ** 3.9 Redundant loads and stores *) Lemma opti_3_9 `{L: laws} n a (p q: X n n): p ≡ p;[a] -> [a];q ≡ [a] -> p;q ≡ p. Proof. intros Hp Hq. hkat. Qed. (** ** 3.10 Array bounds check elimination *) Lemma opti_3_10'i `{L: laws} n (a b: tst n) (u v p q s: X n n): u;[a] ≡ u -> [a ⊓ b];p ≡ p;[a ⊓ b] -> [a];([b];p;q;v) ≡ ([b];p;q;v);[a] -> u;([b];p;([a ⊓ b];q+[!(a ⊓ b)];s);v)^*;[!b] ≡ u;([b];p;q;v)^*;[!b]. Proof. hkat. Qed. Lemma opti_3_10' `{L: laws} n (a b c: tst n) (u v p q s: X n n): a ⊓ b ≡ c -> u;[a] ≡ u -> [c];p ≡ p;[c] -> [a];([b];p;q;v) ≡ ([b];p;q;v);[a] -> u;([b];p;([c];q+[!c];s);v)^*;[!b] ≡ u;([b];p;q;v)^*;[!b]. Proof. hkat. Qed. 
(** ** 3.11 Introduction of sentinels *) Lemma opti_3_11 `{L: laws} n (a b c d: tst n) (u p q s t: X n n): u;[c] ≡ u -> [c];p ≡ p;[c] -> [c];q ≡ q;[c] -> p;[d] ≡ p -> [a];q;[d] ≡ [a];q -> c ⊓ d ⊓ b ≦ a -> u;p;([a ⊓ b];q)^*;[!(a ⊓ b)];([a];t+[!a];s) ≡ u;p;([b];q)^*;[!b];([a];t+[!a];s). Proof. hkat. Qed. (* note that it takes about 2s to solve this last one, thus illustrating the limits of our very basic algorithm *) relation-algebra-v.1.7.9/examples/imp.v000066400000000000000000000203371440504774100200110ustar00rootroot00000000000000(** * imp: a formalisation of the IMP programming language on top of KAT *) (* We formalise the IMP language (whose programs are also known as "while programs"). We give a big step semantics as an inductive predicate, and using KAT, and we show that the two versions actually coincide. We then use the [kat] tactic to prove some simple program equivalences, and to derive all rules of corresponding Hoare logic for partial correctness. *) From RelationAlgebra Require Import kat prop rel comparisons kat_tac. Section s. (** identifiers for memory locations *) Variable loc: Set. (** abstract state (or memory) *) Variable state: Set. (** updating the state *) Variable update: loc -> nat -> state -> state. (** * Definition of the languague *) (** programs *) Inductive prog := | skp | aff (x: loc) (e: state -> nat) | seq (p q: prog) | ite (t: dset state) (p q: prog) | whl (t: dset state) (p: prog). (** notations *) Declare Scope imp_scope. Bind Scope imp_scope with prog. Delimit Scope imp_scope with imp. Notation "x <- y" := (aff x y) (at level 90): imp_scope. Notation "p ;; q" := (seq p%imp q%imp) (left associativity, at level 101): imp_scope. Arguments ite _%ra _%imp _%imp. Arguments whl _%ra _%imp. (** * Big step semantics *) (** corresponding functional relation *) Notation upd x e := (frel (fun s => update x (e s) s)). (** ** using KAT expressions in the model of relations the semantics can then be given by induction on the program, using a simple fixpoint *) Fixpoint bstep (p: prog): hrel state state := match p with | skp => 1 | aff x e => upd x e | seq p q => bstep p ⋅ bstep q | ite b p q => [b] ⋅ bstep p + [!b] ⋅ bstep q | whl b p => ([b] ⋅ bstep p)^* ⋅ [!b] end. (** ** using an inductive predicate, as in standard textbooks *) Inductive bstep': prog -> hrel state state := | s_skp: forall s, bstep' skp s s | s_aff: forall x e s, bstep' (x <- e) s (update x (e s) s) | s_seq: forall p q s s' s'', bstep' p s s' -> bstep' q s' s'' -> bstep' (p ;; q) s s'' | s_ite_ff: forall (b: dset state) p q s s', b s = false -> bstep' q s s' -> bstep' (ite b p q) s s' | s_ite_tt: forall (b: dset state) p q s s', b s = true -> bstep' p s s' -> bstep' (ite b p q) s s' | s_whl_ff: forall (b: dset state) p s, b s = false -> bstep' (whl b p) s s | s_whl_tt: forall (b: dset state) p s s', b s = true -> bstep' (p ;; whl b p) s s' -> bstep' (whl b p) s s'. (** ** equivalence between the two definitions *) Lemma bstep_eq p: bstep' p ≡ bstep p. Proof. apply antisym. - intros s s'. induction 1. reflexivity. reflexivity. eexists; eassumption. right. eexists. split. reflexivity. simpl; now rewrite H. assumption. left. eexists. split. reflexivity. assumption. assumption. exists s. apply (str_refl ([b] ⋅ bstep p)). reflexivity. simpl. unfold hrel_inj. simpl. now rewrite H. destruct IHbstep' as [t ? [t' ? ?]]. exists t'. 2: assumption. apply (str_cons ([b] ⋅ bstep p)). exists t. 2: assumption. eexists; eauto. now split. - induction p; unfold bstep; fold bstep. intros ? ? <-. constructor. intros ? ? 
->. constructor. intros ? ? [? H1 H2]. econstructor. apply IHp1, H1. apply IHp2, H2. intros ? ? [[? [<- H] H']|[? [<- H] H']]. apply s_ite_tt. assumption. apply IHp1, H'. apply s_ite_ff. now apply Bool.negb_true_iff. apply IHp2, H'. apply str_ind_l'. intros ? ? [<- H]. apply s_whl_ff. now apply Bool.negb_true_iff. rewrite <-dotA. intros s s'' [? [<- H] [s' H' H'']]. apply s_whl_tt. assumption. econstructor. apply IHp, H'. assumption. Qed. (** * Some program equivalences *) (** two programs are said to be equivalent if they have the same semantics *) Notation "p ~ q" := (bstep p ≡ bstep q) (at level 80). (** ad-hoc simplification tactic *) Ltac simp := unfold bstep; fold bstep. (** ** denesting nested loops *) Lemma two_loops b p: whl b (whl b p) ~ whl b p. Proof. simp. kat. Qed. (** ** folding a loop *) Lemma fold_loop b p: whl b (p ;; ite b p skp) ~ whl b p. Proof. simp. kat. Qed. (** ** eliminating deadcode *) Lemma dead_code b p q r: (whl b p ;; ite b q r) ~ (whl b p ;; r). Proof. simp. kat. Qed. Lemma dead_code' a b p q r: (whl (a ⊔ b) p ;; ite b q r) ~ (whl (a ⊔ b) p ;; r). Proof. simp. kat. Qed. (** * Reasoning about assignations *) (** (higher-order style) substitution in formulas and expressions *) Definition subst x v (A: dset state): dset state := fun s => A (update x (v s) s). Definition esubst x v (e: state -> nat): state -> nat := fun s => e (update x (v s) s). (** is [x] fresh in the expression e *) Definition fresh x (e: state -> nat) := forall v s, e (update x v s) = e s. Hypothesis update_twice: forall x i j s, update x j (update x i s) = update x j s. Hypothesis update_comm: forall x y i j s, x<>y -> update x i (update y j s) = update y j (update x i s). (** ** stacking assignations *) Lemma aff_stack x e f: (x <- e ;; x <- f) ~ (x <- esubst x e f). Proof. simp. rewrite frel_comp. apply frel_weq; intro s. apply update_twice. Qed. (** ** removing duplicates *) Lemma aff_idem x e: fresh x e -> (x <- e ;; x <- e) ~ (x <- e). Proof. intro. rewrite aff_stack. intros s s'. cbv. rewrite (H (e s)). tauto. Qed. (** ** commuting assignations *) Lemma aff_comm x y e f: x<>y -> fresh y e -> (x <- e ;; y <- f) ~ (y <- esubst x e f ;; x <- e). Proof. intros Hx Hy. simp. rewrite 2frel_comp. apply frel_weq; intro s. rewrite update_comm by congruence. now rewrite (Hy _). Qed. (** ** delaying choices *) (** in the above example, we cannot exploit KAT since this is just about assignations. In the following example, we show how to perform a mixed proof: once we assert that the test [t] somehow commutes with the assignation [x<-e], [hkat] can make use of this assumption to close the goal *) Lemma aff_ite x e t p q: (x <- e ;; ite t p q) ~ (ite (subst x e t) (x <- e ;; p) (x <- e ;; q)). Proof. simp. assert (H: upd x e ⋅ [t] ≡ [subst x e t] ⋅ upd x e) by (cbv; firstorder; subst; eauto). hkat. Qed. (** * Embedding Hoare logic for partial correctness *) (** Hoare triples for partial correctness can be expressed really easily using KAT: *) Notation Hoare A p B := ([A] ⋅ bstep p ⋅ [!B] ≦ 0). (** ** correspondence w.r.t. the standard interpretation of Hoare triples *) Lemma Hoare_eq A p B: Hoare A p B <-> forall s s', A s -> bstep p s s' -> B s'. Proof. split. - intros H s s' HA Hp. case_eq (B s'). reflexivity. intro HB. destruct (H s s'). exists s'. exists s. now split. assumption. split. reflexivity. simpl. now rewrite HB. - intros H s s' [? [? [<- HA] Hp] [-> HB]]. simpl in HB. rewrite (H _ _ HA Hp) in HB. discriminate. Qed. 
(** ** deriving Hoare logic rules using the [hkat] tactic *) (** Hoare triples are encoded as propositions of the shape [x ≦ 0] ; therefore, they can always be eliminated by [hkat], so that all rules of Hoare logic can be proved automatically (except for the assignation rule, of course) This idea come from the following paper: Dexter Kozen. On Hoare logic and Kleene algebra with tests. Trans. Computational Logic, 1(1):60-76, July 2000. The fact that we have an automatic tactic makes it trivial to formalise it. *) Lemma weakening (A A' B B': dset state) p: A' ≦ A -> Hoare A p B -> B ≦ B' -> Hoare A' p B'. Proof. hkat. Qed. Lemma rule_skp A: Hoare A skp A. Proof. simp. kat. Qed. Lemma rule_seq A B C p q: Hoare A p B -> Hoare B q C -> Hoare A (p;;q) C. Proof. simp. hkat. Qed. Lemma rule_ite A B t p q: Hoare (A ⊓ t) p B -> Hoare (A ⊓ !t) q B -> Hoare A (ite t p q) B. Proof. simp. hkat. Qed. Lemma rule_whl A t p: Hoare (A ⊓ t) p A -> Hoare A (whl t p) (A ⊓ neg t). Proof. simp. hkat. Qed. Lemma rule_aff x v (A: dset state): Hoare (subst x v A) (x <- v) A. Proof. rewrite Hoare_eq. intros s s' HA H. now inversion_clear H. Qed. Lemma wrong_rule_whl A t p: Hoare (A ⊓ !t) p A -> Hoare A (whl t p) (A ⊓ !t). Proof. simp. Fail hkat. Abort. Lemma rule_whl' (I A: dset state) t p: Hoare (I ⊓ t) p I -> I ⊓ !t ≦ A -> Hoare I (whl t p) A. Proof. eauto 3 using weakening, rule_whl. Qed. End s. relation-algebra-v.1.7.9/examples/paterson.v000066400000000000000000000456751440504774100210730ustar00rootroot00000000000000 (** * paterson: Equivalence of two flowchart schemes, due to Paterson cf. "Mathematical theory of computation", Manna, 1974 cf. "Kleene algebra with tests and program schematology", A. Angus and D. Kozen, 2001 *) From RelationAlgebra Require Import kat normalisation rewriting move kat_tac comparisons rel. Set Implicit Arguments. (** * Memory model *) (** we need only five memory locations: the [y_i] are temporary variables and [io] is used for input and output *) Inductive loc := y1 | y2 | y3 | y4 | io. Record state := { v1: nat; v2: nat; v3: nat; v4: nat; vio: nat }. (** getting the content of a memory cell *) Definition get x := match x with y1 => v1 | y2 => v2 | y3 => v3 | y4 => v4 | io => vio end. (** setting the content of a memory cell *) Definition set x n m := match x with | y1 => {| v1:=n; v2:=v2 m; v3:=v3 m; v4:=v4 m; vio:=vio m |} | y2 => {| v1:=v1 m; v2:=n; v3:=v3 m; v4:=v4 m; vio:=vio m |} | y3 => {| v1:=v1 m; v2:=v2 m; v3:=n; v4:=v4 m; vio:=vio m |} | y4 => {| v1:=v1 m; v2:=v2 m; v3:=v3 m; v4:=n; vio:=vio m |} | io => {| v1:=v1 m; v2:=v2 m; v3:=v3 m; v4:=v4 m; vio:=n |} end. (** basic properties of [get] and [set] *) Lemma get_set x v m: get x (set x v m) = v. Proof. now destruct x. Qed. Lemma get_set' x y v m: x<>y -> get x (set y v m) = get x m. Proof. destruct y; destruct x; simpl; trivial; congruence. Qed. Lemma set_set x v v' m: set x v (set x v' m) = set x v m. Proof. now destruct x. Qed. Lemma set_set' x y v v' m: x<>y -> set x v (set y v' m) = set y v' (set x v m). Proof. destruct y; destruct x; simpl; trivial; congruence. Qed. (** comparing locations *) Definition eqb x y := match x,y with | y1,y1 | y2,y2 | y3,y3 | y4,y4 | io,io => true | _,_ => false end. Lemma eqb_spec: forall x y, reflect (x=y) (eqb x y). Proof. destruct x; destruct y; simpl; try constructor; congruence. Qed. Lemma eqb_false x y: x<>y -> eqb x y = false. Proof. case eqb_spec; congruence. Qed. Lemma neqb_spec x y: negb (eqb x y) <-> x<>y. Proof. case eqb_spec; intuition; congruence. Qed. 
(** * Arithmetic and Boolean expressions *) Section s. (** we assume arbitrary functions to interpret symbols [f], [g], and [p] *) Variable ff: nat -> nat. Variable gg: nat -> nat -> nat. Variable pp: nat -> bool. (** we use a single inductive to represent Arithmetic and Boolean expressions: this allows us to share code about evaluation, free variables and so on, through polymorphism *) Inductive expr: Set -> Set := | e_var: loc -> expr nat | O: expr nat | f': expr nat -> expr nat | g': expr nat -> expr nat -> expr nat | p': expr nat -> expr bool | e_cap: expr bool -> expr bool -> expr bool | e_cup: expr bool -> expr bool -> expr bool | e_neg: expr bool -> expr bool | e_bot: expr bool | e_top: expr bool. Coercion e_var: loc >-> expr. (** ** evaluation of expressions *) Fixpoint eval A (e: expr A) (m: state): A := match e with | e_var x => get x m | O => 0%nat | f' e => ff (eval e m) | g' e f => gg (eval e m) (eval f m) | p' e => pp (eval e m) | e_cap e f => eval e m &&& eval f m | e_cup e f => eval e m ||| eval f m | e_neg e => negb (eval e m) | e_bot => false | e_top => true end. (** ** Free variables *) Fixpoint free y A (e: expr A): bool := match e with | e_var x => negb (eqb x y) | f' e | p' e | e_neg e => free y e | g' e f | e_cap e f | e_cup e f => free y e &&& free y f | _ => true end. (** ** Substitution *) Fixpoint subst x v A (f: expr A): expr A := match f with | e_var y => if eqb x y then v else e_var y | O => O | f' e => f' (subst x v e) | p' e => p' (subst x v e) | g' e f => g' (subst x v e) (subst x v f) | e_cup e f => e_cup (subst x v e) (subst x v f) | e_cap e f => e_cap (subst x v e) (subst x v f) | e_neg e => e_neg (subst x v e) | e_bot => e_bot | e_top => e_top end. Lemma subst_free x v A (e: expr A): free x e -> subst x v e = e. Proof. induction e; simpl; trivial. rewrite neqb_spec. case eqb_spec; congruence. intro. now rewrite IHe. rewrite landb_spec. intros [? ?]. now rewrite IHe1, IHe2. intro. now rewrite IHe. rewrite landb_spec. intros [? ?]. now rewrite IHe1, IHe2. rewrite landb_spec. intros [? ?]. now rewrite IHe1, IHe2. intro. now rewrite IHe. Qed. Lemma free_subst x e A (f: expr A): free x e -> free x (subst x e f). Proof. intro. induction f; simpl; rewrite ?IHf1; auto. case eqb_spec; trivial. simpl. rewrite neqb_spec. congruence. Qed. (** * Programs *) (** We just use KAT expressions, since any gflowchat scheme can be encoded as such an expression *) Inductive prog := | p_tst(t: expr bool) | p_aff(x: loc)(e: expr nat) | p_str(p: prog) | p_dot(p q: prog) | p_pls(p q: prog). (** * Bigstep semantics *) (** updating the memory, according to the assignment [x<-e] *) Definition update x e m := set x (eval e m) m. (** relational counterpart of this function *) Notation upd x e := (frel (update x e)). (** Bigstep semantics, as a fixpoint *) Fixpoint bstep (p: prog): hrel state state := match p with | p_tst p => [eval p: dset state] | p_aff x e => upd x e | p_str p => (bstep p)^* | p_dot p q => bstep p ⋅ bstep q | p_pls p q => bstep p + bstep q end. (** auxiliary lemma relating the evaluation of expressions, the assignments to the memory, and subsitution of expressions *) Lemma eval_update x v A (e: expr A) m: eval e (update x v m) = eval (subst x v e) m. Proof. induction e; simpl; try congruence. unfold update. case eqb_spec. intros <-. apply get_set. intro. apply get_set'. congruence. now rewrite IHe1, IHe2. now rewrite IHe1, IHe2. Qed. 
(** Now we make the set of programs a Kleene algebra with tests: we declare canonical structures for Boolean expressions (tests), programs (Kleene elements), and the natural injection of the former into the latter *) Canonical Structure expr_lattice_ops: lattice.ops := {| car := expr bool; leq := fun x y => eval x ≦ eval y; weq := fun x y => eval x ≡ eval y; cup := e_cup; cap := e_cap; bot := e_bot; top := e_top; neg := e_neg |}. Canonical Structure prog_lattice_ops: lattice.ops := {| car := prog; leq := fun x y => bstep x ≦ bstep y; weq := fun x y => bstep x ≡ bstep y; cup := p_pls; cap := assert_false p_pls; bot := p_tst e_bot; top := assert_false (p_tst e_bot); neg := assert_false id |}. Canonical Structure prog_monoid_ops: monoid.ops := {| ob := unit; mor n m := prog_lattice_ops; dot n m p := p_dot; one n := p_tst e_top; itr n := (fun x => p_dot x (p_str x)); str n := p_str; cnv n m := assert_false id; ldv n m p := assert_false (fun _ => id); rdv n m p := assert_false (fun _ => id) |}. Canonical Structure prog_kat_ops: kat.ops := {| kar := prog_monoid_ops; tst n := expr_lattice_ops; inj n := p_tst |}. Notation prog' := (prog_kat_ops tt tt). Notation test := (@tst prog_kat_ops tt). (** proving that the laws of KAT are satisfied is almost trivial, since the model faithfully embeds in the relational model *) Instance expr_lattice_laws: lattice.laws BL expr_lattice_ops. Proof. apply laws_of_injective_morphism with (@eval bool: expr bool -> dset state); trivial. split; simpl; tauto. Qed. Instance prog_monoid_laws: monoid.laws BKA prog_monoid_ops. Proof. apply laws_of_faithful_functor with (fun _ => state) (fun _ _: unit => bstep); trivial. split; simpl; try discriminate; try tauto. 2: firstorder. split; simpl; try discriminate; try tauto. firstorder auto with bool. Qed. Instance prog_lattice_laws: lattice.laws BKA prog_lattice_ops := lattice_laws tt tt. Instance prog_kat_laws: kat.laws prog_kat_ops. Proof. constructor; simpl; eauto with typeclass_instances. 2: tauto. split; try discriminate; try (simpl; tauto). intros x y H. apply inj_leq. intro m. apply H. intros x y H. apply inj_weq. intro m. apply H. intros _ x y. apply (inj_cup (X:=hrel_kat_ops)). intros _ x y. apply (inj_cap (X:=hrel_kat_ops)). Qed. (** ** variables read by a program *) (** [dont_read y p] holds if [p] never reads [y] *) Fixpoint dont_read y (p: prog'): bool := match p with | p_str p => dont_read y p | p_dot p q | p_pls p q => dont_read y p &&& dont_read y q | p_aff x e => free y e | p_tst t => free y t end. (** ** Additional notation *) Infix " ;" := (dot _ _ _) (left associativity, at level 40): ra_terms. Definition aff x e: prog' := p_aff x e. Notation "x <- e" := (aff x e) (at level 30). Notation del y := (y<-O). (** * Laws of schematic KAT *) Arguments hrel_monoid_ops : simpl never. Arguments hrel_lattice_ops : simpl never. (** (the numbering corresponds to Angus and Kozen's paper) *) Lemma eq_6 (x y: loc) (s t: expr nat): negb (eqb x y) &&& free y s -> x<-s;y<-t ≡ y<-subst x s t; x<-s. Proof. rewrite landb_spec, neqb_spec. intros [D H]. cbn. rewrite 2frel_comp. apply frel_weq. intro m. unfold update at 1 2 3. rewrite set_set' by congruence. f_equal. now rewrite eval_update, subst_free. unfold update. now rewrite <-eval_update. Qed. Lemma eq_7 (x y: loc) (s t: expr nat): negb (eqb x y) &&& free x s -> x<-s;y<-t ≡ x<-s;y<-subst x s t. Proof. rewrite landb_spec, neqb_spec. intros [D H]. cbn. rewrite 2frel_comp. apply frel_weq. intro m. unfold update at 1 3. f_equal. rewrite 2eval_update. symmetry. 
rewrite subst_free. reflexivity. now apply free_subst. Qed. Lemma eq_8 (x: loc) (s t: expr nat): x<-s;x<-t ≡ x<-subst x s t. Proof. cbn. rewrite frel_comp. apply frel_weq. intro m. unfold update. rewrite set_set. f_equal. apply eval_update. Qed. Lemma eq_9 (x: loc) (t: expr nat) (phi: test): [subst x t phi: test];x<-t ≡ x<-t;[phi]. Proof. Transparent hrel_lattice_ops. intros m m'. split. Opaque hrel_lattice_ops. intros [m0 [<- H] ->]. eexists. reflexivity. split; trivial. now rewrite eval_update. intros [m0 -> [<- H]]. eexists. 2: reflexivity. split; trivial. now rewrite <-eval_update. Qed. Lemma eq_6' (x y: loc) (s t: expr nat): free x t &&& negb (eqb x y) &&& free y s -> x<-s;y<-t ≡ y<-t; x<-s. Proof. rewrite landb_spec. intros [? ?]. now rewrite eq_6, subst_free. Qed. Lemma eq_9' (x: loc) (t: expr nat) (phi: test): free x phi -> [phi];x<-t ≡ x<-t;[phi]. Proof. intro. now rewrite <-eq_9, subst_free. Qed. Transparent hrel_lattice_ops. Arguments hrel_lattice_ops : simpl never. Lemma same_value (f: state -> state) (p: prog') (a b: test): bstep p ≡ frel f -> (forall m, eval a (f m) = eval b (f m)) -> p;[a ⊓ !b ⊔ !a ⊓ b] ≦ 0. Proof. intros Hp H. cbn. rewrite Hp. intros m m' [? -> [<- E]]. exfalso. rewrite lorb_spec, 2landb_spec, 2negb_spec, H in E. intuition congruence. Qed. (** * Garbage-collecting assignments to unread variables *) (** (i.e., Lemma 4.5 in Angus and Kozen's paper) *) Fixpoint gc y (p: prog'): prog' := match p with | p_str p => (gc y p)^* | p_tst _ => p | p_aff x e => if eqb x y then 1 else x<-e | p_dot p q => gc y p ; gc y q | p_pls p q => gc y p + gc y q end. Arguments prog_monoid_ops : simpl never. Arguments prog_lattice_ops : simpl never. Arguments prog_kat_ops : simpl never. Lemma gc_correct y p: dont_read y p -> gc y p; del y ≡ p; del y. Proof. intro H. transitivity (del y; gc y p). induction p; cbn. now apply eq_9'. case eqb_spec. ra. intro. apply eq_6'. now rewrite eqb_false. symmetry. apply str_move. symmetry. now auto. apply landb_spec in H as [? ?]. mrewrite IHp2. 2: assumption. now mrewrite IHp1. apply landb_spec in H as [? ?]. ra_normalise. now apply cup_weq; auto. symmetry. induction p; cbn. now apply eq_9'. case eqb_spec. intros <-. rewrite eq_8. ra. intro D. apply eq_6'. now rewrite eqb_false. symmetry. apply str_move. symmetry. now auto. apply landb_spec in H as [? ?]. mrewrite IHp2. 2: assumption. now mrewrite IHp1. apply landb_spec in H as [? ?]. ra_normalise. now apply cup_weq; auto. Qed. Ltac solve_rmov ::= first [ eassumption | symmetry; eassumption | eapply rmov_x_dot | apply rmov_x_pls | apply rmov_x_str | apply rmov_x_itr | apply rmov_x_cap | apply rmov_x_cup | apply rmov_x_neg | apply rmov_inj | apply rmov_x_1 | apply rmov_x_0 ]; match goal with |- ?x ≡ ?y => solve_rmov end. 
(** * Paterson's equivalence *) Theorem Paterson: let a1 := p' y1: test in let a2 := p' y2: test in let a3 := p' y3: test in let a4 := p' y4: test in let clr := del y1; del y2; del y3; del y4 in let x1 := y1<-io in let s1 := y1<-f' io in let s2 := y2<-f' io in let z1 := io<-y1; clr in let z2 := io<-y2; clr in let p11 := y1<-f' y1 in let p13 := y1<-f' y3 in let p22 := y2<-f' y2 in let p41 := y4<-f' y1 in let q222 := y2<-g' y2 y2 in let q214 := y2<-g' y1 y4 in let q211 := y2<-g' y1 y1 in let q311 := y3<-g' y1 y1 in let r11 := y1<-f' (f' y1) in let r12 := y1<-f' (f' y2) in let r13 := y1<-f' (f' y3) in let r22 := y2<-f' (f' y2) in let rhs := s2;[a2];q222;([!a2];r22;[a2];q222)^*;[a2];z2 in x1;p41;p11;q214;q311;([!a1];p11;q214;q311)^*;[a1];p13; (([!a4]+[a4];([!a2];p22)^*;[a2 ⊓ !a3];p41;p11);q214;q311;([!a1];p11;q214;q311)^*;[a1];p13)^*; [a4];([!a2];p22)^*;[a2 ⊓ a3];z2 ≡ rhs. Proof. intros. (** simple commutation hypotheses, to be exploited by [hkat] *) assert (a1p22: [a1];p22 ≡ p22;[a1]) by now apply eq_9'. assert (a1q214: [a1];q214 ≡ q214;[a1]) by now apply eq_9'. assert (a1q211: [a1];q211 ≡ q211;[a1]) by now apply eq_9'. assert (a1q311: [a1];q311 ≡ q311;[a1]) by now apply eq_9'. assert (a2p13: [a2];p13 ≡ p13;[a2]) by now apply eq_9'. assert (a2r12: [a2];r12 ≡ r12;[a2]) by now apply eq_9'. assert (a2r13: [a2];r13 ≡ r13;[a2]) by now apply eq_9'. assert (a3p13: [a3];p13 ≡ p13;[a3]) by now apply eq_9'. assert (a3p22: [a3];p22 ≡ p22;[a3]) by now apply eq_9'. assert (a3r12: [a3];r12 ≡ r12;[a3]) by now apply eq_9'. assert (a3r13: [a3];r13 ≡ r13;[a3]) by now apply eq_9'. assert (a4p13: [a4];p13 ≡ p13;[a4]) by now apply eq_9'. assert (a4p11: [a4];p11 ≡ p11;[a4]) by now apply eq_9'. assert (a4p22: [a4];p22 ≡ p22;[a4]) by now apply eq_9'. assert (a4q214: [a4];q214 ≡ q214;[a4]) by now apply eq_9'. assert (a4q211: [a4];q211 ≡ q211;[a4]) by now apply eq_9'. assert (a4q311: [a4];q311 ≡ q311;[a4]) by now apply eq_9'. assert (p41p11: p41;p11;[a1 ⊓ !a4 ⊔ !a1 ⊓ a4] ≦ 0). eapply same_value. apply frel_comp. reflexivity. assert (q211q311: q211;q311;[a2 ⊓ !a3 ⊔ !a2 ⊓ a3] ≦ 0). eapply same_value. apply frel_comp. reflexivity. assert (r12p22: r12;p22;p22;[a1 ⊓ !a2 ⊔ !a1 ⊓ a2] ≦ 0). eapply same_value. simpl bstep. rewrite 2frel_comp. reflexivity. reflexivity. (** this one cannot be used by [hkat], it's used by [rmov1] *) assert (p13p22: p13;p22 ≡ p22;p13) by now apply eq_6'. (** here starts the proof; the numbers in the comments refer to the equation numbers in Angus and Kozen's paper proof *) (** (19) *) transitivity ( x1;p41;p11;q214;q311; ([!a1 ⊓ !a4];p11;q214;q311 + [!a1 ⊓ a4];p11;q214;q311 + [ a1 ⊓ !a4];p13;[!a4];q214;q311 + [a1 ⊓ a4];p13;([!a2];p22)^*;[a2 ⊓ !a3];p41;p11;q214;q311)^*; [a1];p13;([!a2];p22)^*;[a2 ⊓ a3 ⊓ a4];z2). clear -a4p13 a4p22. hkat. do 2 rmov1 p13. (** (23+) *) transitivity ( x1;p41;p11;q214;q311; ([!a1 ⊓ !a4];p11;q214;q311 + [!a1 ⊓ a4];p11;q214;q311 + [ a1 ⊓ !a4];p13;[!a4];q214;q311 + [a1 ⊓ a4];p13;([!a2];p22)^*;[a2 ⊓ !a3];p41;p11;q214;q311)^*; ([!a2];p22)^*;[a1 ⊓ a2 ⊓ a3 ⊓ a4];(p13;z2)). clear -a1p22; hkat. setoid_replace (p13;z2) with z2 by (unfold z2, clr; mrewrite <-(gc_correct y1); [ simpl gc; kat | reflexivity ]). (** (24) *) transitivity (x1;p41;p11;q214;q311; ([a1 ⊓ a4];p13;([!a2];p22)^*;[a2 ⊓ !a3];p41;p11;q214;q311)^*; ([!a2];p22)^*;[a1 ⊓ a2 ⊓ a3 ⊓ a4];z2). clear -p41p11 a1p22 a1q214 a1q311 a4p11 a4p13 a4p22 a4q214 a4q311; hkat. (** big simplification w.r.t the paper proof here... *) (** (27) *) assert (p41p11q214: p41;p11;q214 ≡ p41;p11;q211). 
change (upd y4 (f' y1) ; upd y1 (f' y1) ; upd y2 (g' y1 y4) ≡ upd y4 (f' y1) ; upd y1 (f' y1) ; upd y2 (g' y1 y1)). now rewrite 3frel_comp. do 2 mrewrite p41p11q214. clear p41p11q214. (** (29) *) transitivity (x1;(p41;(p11;q211;q311;[a1];p13;([!a2];p22)^*;[a2 ⊓ !a3]))^*; p41;p11;q211;q311;([!a2];p22)^*;[a1 ⊓ a2 ⊓ a3];z2). clear -p41p11 a1p22 a1q211 a1q311 a4p22 a4q211 a4q311; hkat. (** (31) *) transitivity (x1;(p11;q211;q311;[a1];p13;([!a2];p22)^*;[a2 ⊓ !a3])^*; p11;q211;q311;([!a2];p22)^*;[a1 ⊓ a2 ⊓ a3];z2). unfold z2, clr. mrewrite <-(gc_correct y4). 2: reflexivity. simpl gc. kat. (** (32) *) rmov1 p13. transitivity ((x1;p11);(q211;q311;([!a2];p22)^*;[a1 ⊓ a2 ⊓ !a3];(p13;p11))^*; q211;q311;([!a2];p22)^*;[a1 ⊓ a2 ⊓ a3];z2). clear -a1p22 a2p13 a3p13; hkat. (** big simplification w.r.t the paper proof here... *) (** (33) *) setoid_replace (x1;p11) with s1 by apply eq_8. setoid_replace (p13;p11) with r13 by apply eq_8. (** (34) *) transitivity (s1;(q211;q311;(([!a2];p22)^*;([a1];r13));[a2 ⊓ !a3])^*; q211;q311;([!a2];p22)^*;[a1 ⊓ a2 ⊓ a3];z2). clear -a2r13 a3r13; hkat. setoid_replace (([!a2];p22)^*;([a1];r13)) with ([a1];r13;([!a2];p22)^*) by (assert (r13;p22 ≡ p22;r13) by (now apply eq_6'); rmov1 r13; clear -a1p22; hkat). transitivity (s1;([a1];(q211;q311;r13);([!a2];p22)^*;[a2 ⊓ !a3])^*; q211;q311;([!a2];p22)^*;[a1 ⊓ a2 ⊓ a3];z2). clear -a1q311 a1q211; hkat. (** (35) *) setoid_replace (q211;q311;r13) with (q211;q311;r12). 2: change (upd y2 (g' y1 y1) ; upd y3 (g' y1 y1) ; upd y1 (f' (f' y3)) ≡ upd y2 (g' y1 y1) ; upd y3 (g' y1 y1) ; upd y1 (f' (f' y2))); now rewrite 3frel_comp. (** (36) *) transitivity (s1;([a1];(q211;q311);[!a2];r12;([!a2];p22)^*;[a2])^*; (q211;q311);[a2];([!a2];p22)^*;[a1 ⊓ a2];z2). clear -a3p22 a3r12 q211q311. hkat. (** (37) *) transitivity (s1;([a1];q211;[!a2];r12;([!a2];p22)^*;[a2])^*; q211;[a2];([!a2];p22)^*;[a1 ⊓ a2];z2). unfold z2, clr. mrewrite <-(gc_correct y3). 2: reflexivity. simpl gc. kat. (** (38) *) transitivity (s1;[a1];q211;([!a2];r12;[a1];p22;[a2];q211 + [!a2];r12;[a1];p22;[!a2];(p22;q211))^*;[a2];z2). clear -a1p22 a1q211 a2r12 r12p22 a1p22. hkat. (** big simplification w.r.t the paper proof here... *) (** (43) *) assert (p22q211: p22;q211 ≡ q211) by apply eq_8. rewrite p22q211. transitivity (s1;[a1];q211;([!a2];r12;[a1];(p22;q211))^*;[a2];z2). kat. rewrite p22q211. clear p22q211. (** (44) *) unfold s1, a1, q211, r12, a2. rewrite <-eq_9. mrewrite eq_7. 2: reflexivity. mrewrite <-eq_9. mrewrite (eq_7 y1 y2 (f' (f' y2))). 2: reflexivity. unfold z2, clr. mrewrite <-(gc_correct y1). 2: reflexivity. unfold rhs, z2, clr, s2, a2. rewrite <-eq_9. unfold q222. mrewrite eq_7. 2: reflexivity. unfold r22. mrewrite <-eq_9. do 2 mrewrite eq_8. simpl gc. kat. (** (47) *) Qed. End s. relation-algebra-v.1.7.9/index.html000066400000000000000000000442721440504774100172200ustar00rootroot00000000000000 Relation Algebra and KAT in Coq


Relation Algebra and KAT in Coq

This Coq development is a modular library about relation algebra: those algebras admitting heterogeneous binary relations as a model, ranging from partially ordered monoids to residuated Kleene allegories and Kleene algebra with tests (KAT).

This library presents this large family of algebraic theories in a modular way; it includes several high-level reflexive tactics:

  • kat, which decides the (in)equational theory of KAT;
  • hkat, which decides the Hoare (in)equational theory of KAT (i.e., KAT with Hoare hypotheses);
  • ka, which decides the (in)equational theory of KA;
  • ra, a normalisation-based partial decision procedure for relation algebra;
  • ra_normalise, the underlying normalisation tactic.
  [figure: the cloud of relation algebra fragments]

The tactic for Kleene algebra with tests is obtained by reflection, using a simple bisimulation-based algorithm that works on the automaton of partial derivatives of the generalised regular expressions corresponding to KAT. Combined with a formalisation of KA completeness, and then of KAT completeness on top of it, this yields entirely axiom-free decision procedures for all models of these theories (including relations, languages, traces, min-max and max-plus algebras, etc.).
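
For instance, here is the kind of goal the kat tactic closes in the relational model. This is a hedged sketch rather than an excerpt of the library: it restates the loop-denesting equivalence from the imp example directly on binary relations, using the hrel and dset types of the rel model.

  From RelationAlgebra Require Import kat prop rel comparisons kat_tac.

  (* sketch: denesting two nested while-loops, stated on binary relations
     (hrel A A for programs, dset A for tests); kat closes the goal *)
  Goal forall (A: Set) (p: hrel A A) (b: dset A),
    ([b] ⋅ (([b] ⋅ p)^* ⋅ [!b]))^* ⋅ [!b] ≡ ([b] ⋅ p)^* ⋅ [!b].
  Proof. intros. kat. Qed.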

Algebraic structures are generalised in a categorical way: composition is typed as in categories, allowing us to reach "heterogeneous" models like rectangular matrices or heterogeneous binary relations, where most operations are partial. We exploit untyping theorems to avoid polluting the decision algorithms with these additional typing constraints.
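
As a small hedged sketch of this typing discipline (not an excerpt of the distribution), the composition below goes across three different carriers, and the modular ra tactic still applies:

  From RelationAlgebra Require Import kat normalisation prop rel comparisons kat_tac.

  (* sketch: a typed goal on heterogeneous relations; the middle types of
     each composition must agree, and ra closes the goal by normalisation *)
  Goal forall (A B C: Set) (r: hrel A B) (s t: hrel B C),
    r ⋅ (s + t) ≡ r ⋅ s + r ⋅ t.
  Proof. intros. ra. Qed.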

Applications

We give a few examples of applications of this library to program verification:

  • a formalisation of a paper by Dexter Kozen and Maria-Cristina Patron, showing how to certify compiler optimisations using KAT. (See their paper, and the Coq module compiler_opts; a small sketch of such a goal follows this list.)
  • a formalisation of the IMP programming language, followed by: 1/ some simple program equivalences that become straightforward to prove using our tactics; 2/ a formalisation of Hoare logic rules for partial correctness in the above language: all rules except the assignation one are proved by a single call to the hkat tactic. (See module imp)
  • a proof of the equivalence of two flowchart schemes, due to Paterson. The informal paper proof takes one page; Allegra Angus and Dexter Kozen gave a six-page proof using KAT; our Coq proof is about 100 lines. (See Angus and Kozen's paper and the Coq module paterson.)
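
To give a flavour of the first item, here is a hedged sketch (adapted from the opti_3_1_a lemma of compiler_opts, restated on the relational model rather than on an abstract KAT): a dead-code elimination step whose side condition is eliminated automatically by hkat.

  From RelationAlgebra Require Import kat prop rel comparisons kat_tac.

  (* sketch: if the test a never holds after p (hypothesis H), then the
     branch guarded by [a] is dead code and can be removed *)
  Goal forall (A: Set) (p q: hrel A A) (a: dset A),
    p ≡ p ⋅ [!a] -> p ⋅ ([a] ⋅ q + [!a]) ≡ p.
  Proof. intros A p q a H. hkat. Qed.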

Download

This library is available through opam, under the name coq-relation-algebra, as well as on GitHub.

Related papers or talks

Documentation

Provided tactics

  • Decision tactics:
    • ka: equational theory of Kleene algebra;
    • kat: equational theory of Kleene algebra with tests;
    • hkat: Hoare theory of Kleene algebra with tests: it exploits hypotheses of the shape p == 0, [a] == [b], [a]*p == p*[a], [a]*p == p, p*[a] == p, and all similar ones (a usage sketch of these decision tactics is given just after this list).
    • lattice: solves lattice (in)equations, using focused proof search (modular tactic: it works from preorders to bounded distributive lattices).
  • Incomplete decision tactics:
    • ra: tries to solve relation algebra (in)equations, by normalisation and comparison. This tactic is modular: it applies to all algebraic theories present in the library.
    • hlattice: tries to solve the Horn theory of lattices (modular tactic)
  • Normalisation tactics:
    • ra_normalise: normalise the current goal (modular tactic);
    • ra_simpl: normalise the current goal, without distributing composition over unions (modular tactic).
  • Rewriting tactics:
    • mrewrite: rewriting modulo associativity of composition (ad-hoc tactic); more AC rewriting tools are available using the AAC_tactics library.
  • Other tactics:
    • lattice.dual: prove goals by lattice duality;
    • monoid.dual: prove goals by categorical duality;
    • neg_switch: revert a goal to exploit the Boolean negation involution;
    • cnv_switch: revert a goal to exploit the converse involution.
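
As an illustration of the decision tactics above (a hedged sketch, not an excerpt of the library), the classical star-denesting law of Kleene algebra is closed by a single call to ka, here stated on binary relations:

  From RelationAlgebra Require Import kat prop rel comparisons kat_tac.

  (* sketch: a pure Kleene algebra identity, decided by the ka tactic *)
  Goal forall (A: Set) (a b: hrel A A), (a + b)^* ≡ a^* ⋅ (b ⋅ a^*)^*.
  Proof. intros. ka. Qed.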

Succinct description of each module

Each module is documented; links below point to the coqdoc generated documentation; see below for dependencies. The coqdoc table of contents is here.

  • Utilities
    • common: basic tactics and definitions used throughout the library
    • comparisons: types with decidable equality and ternary comparison function
    • positives: simple facts about binary positive numbers
    • ordinal: finite ordinals, finite sets of finite ordinals
    • pair: encoding pairs of ordinals as ordinals
    • powerfix: simple pseudo-fixpoint iterator
    • lset: sup-semilattice of finite sets represented as lists
  • Algebraic hierarchy
    • level: bitmasks allowing us to refer to an arbitrary point in the hierarchy
    • lattice: "flat" structures, from preorders to Boolean lattices
    • monoid: typed structures, from partially ordered monoids to residuated Kleene lattices
    • kat: Kleene algebra with tests
    • kleene: Basic facts about Kleene algebra
    • factors: Basic facts about residuated structures
    • relalg: Standard relation algebra facts and definitions
  • Models
    • prop: distributive lattice of Prop-ositions
    • boolean: Boolean trivial lattice, extended to a monoid.
    • rel: heterogeneous binary relations
    • srel: heterogeneous binary relations over setoids
    • fhrel: heterogeneous binary relations over finite types (requires ssreflect)
    • lang: word languages (untyped)
    • traces: trace languages (typed and untyped)
    • glang: guarded string languages (typed and untyped)
  • Free models
    • lsyntax: free lattice (Boolean expressions)
    • atoms: atoms of the free Boolean lattice over a finite set
    • syntax: free relation algebra
    • regex: regular expressions (untyped)
    • gregex: generalised regular expressions (typed - for KAT completeness)
    • ugregex: untyped generalised regular expressions (for KAT decision procedure)
    • kat_reification: tools and definitions for KAT reification
  • Relation algebra tools
    • normalisation: normalisation and semi-decision tactics for relation algebra
    • rewriting: rewriting modulo associativity of composition
    • rewriting_aac: bridge with AAC_tactics (requires AAC_tactics)
    • move: tools to easily move subterms inside a product
  • Linear algebra
    • sups: finite suprema/infima (a la bigop from ssreflect)
    • sums: finite sums
    • matrix: matrices over all structures supporting this construction
    • matrix_ext: additional operations and properties about matrices
    • rmx: matrices of regular expressions
    • bmx: matrices of Booleans
  • Untyping theorems
    • untyping: untyping theorem for structures below Kleene algebra with converse
    • kat_untyping: untyping theorem for guarded string languages (and thus, KAT)
  • Automata
    • dfa: deterministic finite state automata, decidability of language inclusion
    • nfa: matricial non-deterministic finite state automata, formal evaluation into regular expressions
    • ugregex_dec: decision of guarded string language equivalence of generalised regular expressions, using partial derivatives
  • Completeness
    • ka_completeness: (untyped) completeness of Kleene algebra
    • kat_completeness: (typed) completeness of Kleene algebra with tests
    • kat_tac: decision tactics for KA and KAT, elimination of Hoare hypotheses to get hkat
  • Applications to program verification
    • compiler_opts: verification of compiler optimisations in KAT
    • paterson: challenging equivalence of two flowchart schemes, due to Paterson
    • imp: formalisation of the IMP programming language, proving program equivalence using KAT, embedding of Hoare logic for partial correctness

Module dependencies

Notes

This library started as a complete rewrite of the ATBR library we developed with Thomas Braibant. There were two main reasons for not reusing that code:

  • The way we designed the algebraic hierarchy in ATBR, using typeclasses, did not scale to the richer structures we present here (converse, residuals, allegories), was not modular enough, and did not make it easy to define the Boolean lattice of tests needed in KAT. Here we follow a completely different approach, which seems to scale pretty well but required restarting from scratch.
  • The fact that we were proving the (algebraic) completeness of KA using the efficient algorithm we designed for deciding language equivalence of regular expressions was sub-optimal: as a consequence of this choice, our proofs were over-complicated. Instead, by taking a different path here, we prove KA completeness in about 200 lines of definitions and 200 lines of proofs. This refactorisation was essential to reach KAT with a reasonable amount of effort.

Authors

  • Damien Pous (2012-), CNRS - LIP, ENS Lyon (UMR 5668), France
  • Christian Doczkal (2018-), CNRS - LIP, ENS Lyon (UMR 5668), France
  • Insa Stucke (2015-2016), Dpt of CS, University of Kiel, Germany
  • Coq development team (2013-)

Acknowledgements

I am grateful to Thomas Braibant, with whom we developed the ancestor of this library, clearing the ground for this subsequent work.



http://perso.ens-lyon.fr/damien.pous/ra/
relation-algebra-v.1.7.9/ra.png000066400000000000000000000425001440504774100163230ustar00rootroot00000000000000[binary PNG image data omitted]
relation-algebra-v.1.7.9/src/000077500000000000000000000000001440504774100160015ustar00rootroot00000000000000relation-algebra-v.1.7.9/src/META.coq-relation-algebra000066400000000000000000000030661440504774100224460ustar00rootroot00000000000000package "common" (
  requires = "coq-core.plugins.ltac"
  description = "shared utilities for relation algebra plugins"
  archive(byte) = "plugins.cma"
  archive(native) = "plugins.cmxa"
  plugin(byte) = "plugins.cma"
  plugin(native) = "plugins.cmxs"
  directory = "."
)
package "reification" (
  requires = "coq-core.plugins.ltac coq-relation-algebra.common"
  description = "reification plugin for relation algebra tactics"
  archive(byte) = "packed_reification.cma"
  archive(native) = "packed_reification.cmxa"
  plugin(byte) = "packed_reification.cma"
  plugin(native) = "packed_reification.cmxs"
  directory = "."
)
package "fold" (
  requires = "coq-core.plugins.ltac coq-relation-algebra.common"
  description = "folding plugin for relation algebra"
  archive(byte) = "packed_fold.cma"
  archive(native) = "packed_fold.cmxa"
  plugin(byte) = "packed_fold.cma"
  plugin(native) = "packed_fold.cmxs"
  directory = "."
)
package "mrewrite" (
  requires = "coq-core.plugins.ltac coq-relation-algebra.common"
  description = "rewriting modulo A plugin for relation algebra"
  archive(byte) = "packed_mrewrite.cma"
  archive(native) = "packed_mrewrite.cmxa"
  plugin(byte) = "packed_mrewrite.cma"
  plugin(native) = "packed_mrewrite.cmxs"
  directory = "."
)
package "kat" (
  requires = "coq-core.plugins.ltac coq-relation-algebra.common"
  description = "KAT reification plugin for relation algebra"
  archive(byte) = "packed_kat.cma"
  archive(native) = "packed_kat.cmxa"
  plugin(byte) = "packed_kat.cma"
  plugin(native) = "packed_kat.cmxs"
  directory = "."
)
directory = "."
relation-algebra-v.1.7.9/src/common.ml000066400000000000000000000302021440504774100176200ustar00rootroot00000000000000(** Small library used by all ML plugins we define in the library *) open Ltac_plugin open Tacexpr open EConstr open Names (* pick an element in an hashtbl *) let hashtbl_pick t = Hashtbl.fold (fun i x -> function None -> Some (i,x) | acc -> acc) t None (* recursive memoizer *) let memoize ff = let t = Hashtbl.create 27 in let rec f x = try Hashtbl.find t x with Not_found -> let y = ff f x in Hashtbl.add t x y; y in f (* timing a function call *) let time f x = let t0 = Sys.time() in let r = f x in Printf.printf "%f\n" (Sys.time()-.t0); r (* path to RelationAlgebra modules *) let ra_path = ["RelationAlgebra"] (* raise an error in Coq *) let error s = Printf.kprintf (fun s -> CErrors.user_err (Pp.str s)) ("[RelationAlgebra] "^^s) (* resolving a typeclass [cls] in a goal [gl] *) let tc_find env sigma cls = Typeclasses.resolve_one_typeclass env sigma cls (* creating new evars *) let new_evar env sigma ty = Evarutil.new_evar ~src:(None,Evar_kinds.GoalEvar) env sigma ty (* push a variable on the environment *) let push x t env = Termops.push_rel_assum (x,t) env (* are two terms convertible *) let convertible env sigma c1 c2 = Reductionops.is_conv env sigma c1 c2 let convertible' (env,sigma) = Reductionops.is_conv env sigma (* creating a name and a reference to that name *) let fresh_name env n = let vname = Tactics.fresh_id_in_env Id.Set.empty (Id.of_string n) env in Context.annotR vname, mkVar vname (* access to Coq constants *) let get_const dir s = lazy (EConstr.of_constr (UnivGen.constr_of_monomorphic_global (Global.env ()) (Coqlib.find_reference "RelationAlgebra.reification" dir s))) (* make an application using a lazy value *) let force_app f = fun x -> mkApp (Lazy.force f,x) (* build a partial application *) let partial_app n c ca = if n=0 then c else mkApp(c,Array.sub ca 0 n) (* creating OCaml functions from Coq ones *) let get_fun_1 d s = let v = get_const d s in fun x -> force_app v [|x|] let get_fun_2 d s = let v = get_const d s in fun x y -> force_app v [|x;y|] let get_fun_3 d s = let v = get_const d s in fun x y z -> force_app v [|x;y;z|] let get_fun_4 d s = let v = get_const d s in fun x y z t -> force_app v [|x;y;z;t|] let get_fun_5 d s = let v = get_const d s in fun x y z t u -> force_app v [|x;y;z;t;u|] let get_fun_6 d s = let v = get_const d s in fun x y z t u r -> force_app v [|x;y;z;t;u;r|] let get_fun_7 d s = let v = get_const d s in fun x y z t u r w -> force_app v [|x;y;z;t;u;r;w|] let get_fun_8 d s = let v = get_const d s in fun x y z t u r w p -> force_app v [|x;y;z;t;u;r;w;p|] let get_fun_9 d s = let v = get_const d s in fun x y z t u r w p q -> force_app v [|x;y;z;t;u;r;w;p;q|] let get_fun_10 d s = let v = get_const d s in fun x y z t u r w p q q1 -> force_app v [|x;y;z;t;u;r;w;p;q;q1|] let get_fun_11 d s = let v = get_const d s in fun x y z t u r w p q q1 q2 -> force_app v [|x;y;z;t;u;r;w;p;q;q1;q2|] let get_fun_12 d s = let v = get_const d s in fun x y z t u r w p q q1 q2 q3 -> force_app v [|x;y;z;t;u;r;w;p;q;q1;q2;q3|] let get_fun_13 d s = let v = get_const d s in fun x y z t u r w p q q1 q2 q3 q4 -> force_app v [|x;y;z;t;u;r;w;p;q;q1;q2;q3;q4|] let get_fun_14 d s = let v = get_const d s in fun x y z t u r w p q q1 q2 q3 q4 q5 -> force_app v [|x;y;z;t;u;r;w;p;q;q1;q2;q3;q4;q5|] let ltac_apply ist (f: Tacinterp.value) (arg : constr) = let open Geninterp in let f_ = Id.of_string "f" in let x_ = Id.of_string "x" in let arg = Tacinterp.Value.of_constr 
arg in let mkvar id = Locus.ArgVar (CAst.make id) in let ist = { ist with lfun = Id.Map.add f_ f (Id.Map.add x_ arg ist.lfun) } in Tacinterp.eval_tactic_ist ist (CAst.make (TacArg (TacCall (CAst.make (mkvar f_, [Reference (mkvar x_)]))))) (* Coq constants *) module Coq = struct let path = ["Coq"; "Init"; "Datatypes"] let true_ = get_const path "true" end (* RelationAlgebra.positives Coq module (plus standard positive numbers) *) module Pos = struct (* binary positive numbers *) let path = ["Coq" ; "Numbers"; "BinNums"] let t = get_const path "positive" let xH = get_const path "xH" let xI = get_fun_1 path "xI" let xO = get_fun_1 path "xO" (* a coq positive from an ocaml int *) let of_int = memoize (fun of_int -> function | 0 -> failwith "[RelationAlgebra] Pos.of_int applied to 0" | 1 -> Lazy.force xH | n -> (if n mod 2 = 0 then xO else xI) (of_int (n/2))) (* positive maps *) let path = ra_path@["positives"] let sigma = get_fun_1 path "sigma" let sigma_empty = get_fun_1 path "sigma_empty" let sigma_add = get_fun_4 path "sigma_add" let sigma_get = get_fun_3 path "sigma_get" end (* RelationAlgebra.level Coq module *) module Level = struct let path = ra_path@["level"] let t = get_const path "level" let has_bot = get_fun_1 path "has_bot" let has_top = get_fun_1 path "has_top" let has_cup = get_fun_1 path "has_cup" let has_cap = get_fun_1 path "has_cap" let has_neg = get_fun_1 path "has_neg" let has_str = get_fun_1 path "has_str" let has_cnv = get_fun_1 path "has_cnv" let has_div = get_fun_1 path "has_div" end type level = { has_cup: bool; has_bot: bool; has_cap: bool; has_top: bool; has_neg: bool; has_str: bool; has_cnv: bool; has_div: bool } let read_level env sigma l = let true_ = Lazy.force Coq.true_ in { has_cup = convertible env sigma true_ (Level.has_cup l); has_bot = convertible env sigma true_ (Level.has_bot l); has_cap = convertible env sigma true_ (Level.has_cap l); has_top = convertible env sigma true_ (Level.has_top l); has_neg = convertible env sigma true_ (Level.has_neg l); has_str = convertible env sigma true_ (Level.has_str l); has_cnv = convertible env sigma true_ (Level.has_cnv l); has_div = convertible env sigma true_ (Level.has_div l) } let max_level = { has_cup = true; has_bot = true; has_cap = true; has_top = true; has_neg = true; has_str = true; has_cnv = true; has_div = true } (* RelationAlgebra.lattice Coq module *) module Lattice = struct let path = ra_path@["lattice"] let leq_or_weq = get_const path "leq_or_weq" let leq1 = get_fun_1 path "leq" let leq = get_const path "leq" let weq1 = get_fun_1 path "weq" let weq = get_const path "weq" let car = get_fun_1 path "car" let cup1 = get_fun_1 path "cup" let cap1 = get_fun_1 path "cap" let neg1 = get_fun_1 path "neg" let cup = get_fun_3 path "cup" let cap = get_fun_3 path "cap" let neg = get_fun_2 path "neg" let bot = get_fun_1 path "bot" let top = get_fun_1 path "top" end (* RelationAlgebra.monoid Coq module *) module Monoid = struct let path = ra_path@["monoid"] let laws = get_fun_2 path "laws" let ops = get_const path "ops" let ob = get_fun_1 path "ob" let mor0 = get_const path "mor" let mor = get_fun_3 path "mor" let dot4 = get_fun_4 path "dot" let dot1 = get_fun_1 path "dot" let dot0 = get_const path "dot" let dot = get_fun_6 path "dot" let one = get_fun_2 path "one" let itr2 = get_fun_2 path "itr" let itr = get_fun_3 path "itr" let str2 = get_fun_2 path "str" let str = get_fun_3 path "str" let cnv3 = get_fun_3 path "cnv" let cnv = get_fun_4 path "cnv" let ldv1 = get_fun_1 path "ldv" let ldv4 = get_fun_4 path "ldv" let 
ldv = get_fun_6 path "ldv" let rdv1 = get_fun_1 path "rdv" let rdv4 = get_fun_4 path "rdv" let rdv = get_fun_6 path "rdv" end (* RelationAlgebra.lsyntax Coq module *) module LSyntax = struct let path = ra_path@["lsyntax"] let pp f p s = (* f p s *) let c = f p s in fun x -> c (Lazy.force Pos.t) x let pp' f p s = (* f p s *) let c = f p s in lazy (c (Lazy.force Pos.t)) let bot = pp' get_fun_1 path "e_bot" let top = pp' get_fun_1 path "e_top" let cup = pp get_fun_3 path "e_cup" let cap = pp get_fun_3 path "e_cap" let neg = pp get_fun_2 path "e_neg" end (* RelationAlgebra.syntax Coq module *) module Make_Syntax(M: sig val typ: constr Lazy.t end) = struct let path = ra_path@["syntax"] let pack_type = get_fun_2 path "Pack" let pack = get_fun_5 path "pack" let src_ = get_fun_3 path "src_" let tgt_ = get_fun_3 path "tgt_" (* let im_true = get_fun_5 path "im_true" *) (* let im_false = get_fun_5 path "im_false" *) let pp f p s = (* f p s *) let c = f p s in fun x -> c (Lazy.force M.typ) x let expr = pp get_fun_5 path "expr" let zer = pp get_fun_5 path "e_zer" let top = pp get_fun_5 path "e_top" let one = pp get_fun_4 path "e_one" let pls = pp get_fun_7 path "e_pls" let cap = pp get_fun_7 path "e_cap" let neg = pp get_fun_6 path "e_neg" let dot = pp get_fun_8 path "e_dot" let itr = pp get_fun_5 path "e_itr" let str = pp get_fun_5 path "e_str" let cnv = pp get_fun_6 path "e_cnv" let ldv = pp get_fun_8 path "e_ldv" let rdv = pp get_fun_8 path "e_rdv" let var = pp get_fun_4 path "e_var" let eval = get_fun_6 path "packed_eval" end (** recognising various operations, in continuation passing style [is_cup goal l lops k k' (c,ca,n)] calls - [k a b] if "c(ca)" is of the shape "@cup lops a b", and cup is allowed in [l] - [k'] otherwise [n] should be the length of the array [ca] [lops] should be the lattice operations *) let is_cup goal l lops k k' (c,ca,n as x) = if l.has_cup && n >= 2 && convertible' goal (partial_app (n-2) c ca) (Lattice.cup1 lops) then k ca.(n-2) ca.(n-1) else k' x let is_cap goal l lops k k' (c,ca,n as x) = if l.has_cap && n >= 2 && convertible' goal (partial_app (n-2) c ca) (Lattice.cap1 lops) then k ca.(n-2) ca.(n-1) else k' x let is_neg goal l lops k k' (c,ca,n as x) = if l.has_neg && n >= 1 && convertible' goal (partial_app (n-1) c ca) (Lattice.neg1 lops) then k ca.(n-1) else k' x (** for iterations and converse, we need to specify the type [s'] at which they are expected to operate. [mops] should be the monoid operations *) let is_itr goal l mops s' k k' (c,ca,n as x) = if l.has_str && n >= 1 && convertible' goal (partial_app (n-1) c ca) (Monoid.itr2 mops s') then k ca.(n-1) else k' x let is_str goal l mops s' k k' (c,ca,n as x) = if l.has_str && n >= 1 && convertible' goal (partial_app (n-1) c ca) (Monoid.str2 mops s') then k ca.(n-1) else k' x let is_cnv goal l mops s' t' k k' (c,ca,n as x) = if l.has_cnv && n >= 1 && convertible' goal (partial_app (n-1) c ca) (Monoid.cnv3 mops t' s') then k ca.(n-1) else k' x (** this is slightly more complicated with compositions and residuals since the middle type has to be read from the terms: [is_dot goal mops ins s s' k k' (c,ca,n)] calls - [k a (ins m) m b] if "c(ca)" is of the shape "@dot mops _ m _ a b" - [k a s s' b] if "c(ca)" is of the shape "@dot mops s' s' s' a b" - [k'] otherwise the second case is needed for flat (untyped structures), where the types cannot be inferred by unification. 
the [ins] function is useful to process newly encountered types on-the-fly *) let is_dot goal mops insert_type s s' k k' (c,ca,n as x) = if n >= 5 && convertible' goal (partial_app (n-5) c ca) (Monoid.dot1 mops) then k ca.(n-2) (insert_type ca.(n-4)) ca.(n-4) ca.(n-1) (* the second branch below is there for untyped products *) else if n >= 2 && convertible' goal (partial_app (n-2) c ca) (Monoid.dot4 mops s' s' s') then k ca.(n-2) s s' ca.(n-1) else k' x let is_ldv goal l mops insert_type s s' k k' (c,ca,n as x) = if l.has_div && n >= 5 && convertible' goal (partial_app (n-5) c ca) (Monoid.ldv1 mops) then k ca.(n-2) (insert_type ca.(n-5)) ca.(n-5) ca.(n-1) (* the second branch below is there for untyped products *) else if n >= 2 && convertible' goal (partial_app (n-2) c ca) (Monoid.ldv4 mops s' s' s') then k ca.(n-2) s s' ca.(n-1) else k' x let is_rdv goal l mops insert_type s s' k k' (c,ca,n as x) = if l.has_div && n >= 5 && convertible' goal (partial_app (n-5) c ca) (Monoid.rdv1 mops) then k ca.(n-2) (insert_type ca.(n-5)) ca.(n-5) ca.(n-1) (* the second branch below is there for untyped products *) else if n >= 2 && convertible' goal (partial_app (n-2) c ca) (Monoid.rdv4 mops s' s' s') then k ca.(n-2) s s' ca.(n-1) else k' x relation-algebra-v.1.7.9/src/common.mli000066400000000000000000000251751440504774100200060ustar00rootroot00000000000000val hashtbl_pick : ('a, 'b) Hashtbl.t -> ('a * 'b) option val time : ('a -> 'b) -> 'a -> 'b val error : ('a, unit, string, string, string, 'b) format6 -> 'a val ra_path : string list val tc_find : Environ.env -> Evd.evar_map -> EConstr.types -> Evd.evar_map * EConstr.constr val new_evar : Environ.env -> Evd.evar_map -> EConstr.types -> Evd.evar_map * EConstr.t val push : Names.Name.t Context.binder_annot -> EConstr.types -> Environ.env -> Environ.env val convertible : Environ.env -> Evd.evar_map -> EConstr.constr -> EConstr.constr -> bool val fresh_name : Environ.env -> string -> Names.Id.t Context.binder_annot * EConstr.t val get_const : string list -> string -> EConstr.t lazy_t val force_app : EConstr.t Lazy.t -> EConstr.t array -> EConstr.t val partial_app : int -> EConstr.t -> EConstr.t array -> EConstr.t val get_fun_1 : string list -> string -> EConstr.t -> EConstr.t val get_fun_2 : string list -> string -> EConstr.t -> EConstr.t -> EConstr.t val get_fun_3 : string list -> string -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val get_fun_4 : string list -> string -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val get_fun_5 : string list -> string -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val get_fun_6 : string list -> string -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val get_fun_7 : string list -> string -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val get_fun_8 : string list -> string -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val get_fun_9 : string list -> string -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val get_fun_10 : string list -> string -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val get_fun_11 : string list -> string -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> 
EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val get_fun_12 : string list -> string -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val get_fun_13 : string list -> string -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val get_fun_14 : string list -> string -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val ltac_apply : Geninterp.interp_sign -> Ltac_plugin.Tacinterp.value -> EConstr.constr -> unit Proofview.tactic module Coq : sig val path : string list val true_ : EConstr.t lazy_t end module Pos : sig val t : EConstr.t lazy_t val xH : EConstr.t lazy_t val xI : EConstr.t -> EConstr.t val xO : EConstr.t -> EConstr.t val of_int : int -> EConstr.t val path : string list val sigma : EConstr.t -> EConstr.t val sigma_empty : EConstr.t -> EConstr.t val sigma_add : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val sigma_get : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t end module Level : sig val path : string list val t : EConstr.t lazy_t val has_bot : EConstr.t -> EConstr.t val has_top : EConstr.t -> EConstr.t val has_cup : EConstr.t -> EConstr.t val has_cap : EConstr.t -> EConstr.t val has_neg : EConstr.t -> EConstr.t val has_str : EConstr.t -> EConstr.t val has_cnv : EConstr.t -> EConstr.t val has_div : EConstr.t -> EConstr.t end type level = { has_cup : bool; has_bot : bool; has_cap : bool; has_top : bool; has_neg : bool; has_str : bool; has_cnv : bool; has_div : bool; } val read_level : Environ.env -> Evd.evar_map -> EConstr.t -> level val max_level : level module Lattice : sig val path : string list val leq_or_weq : EConstr.t lazy_t val leq1 : EConstr.t -> EConstr.t val leq : EConstr.t lazy_t val weq1 : EConstr.t -> EConstr.t val weq : EConstr.t lazy_t val car : EConstr.t -> EConstr.t val cup1 : EConstr.t -> EConstr.t val cap1 : EConstr.t -> EConstr.t val neg1 : EConstr.t -> EConstr.t val cup : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val cap : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val neg : EConstr.t -> EConstr.t -> EConstr.t val bot : EConstr.t -> EConstr.t val top : EConstr.t -> EConstr.t end module Monoid : sig val path : string list val laws : EConstr.t -> EConstr.t -> EConstr.t val ops : EConstr.t lazy_t val ob : EConstr.t -> EConstr.t val mor0 : EConstr.t lazy_t val mor : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val dot4 : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val dot1 : EConstr.t -> EConstr.t val dot0 : EConstr.t lazy_t val dot : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val one : EConstr.t -> EConstr.t -> EConstr.t val itr2 : EConstr.t -> EConstr.t -> EConstr.t val itr : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val str2 : EConstr.t -> EConstr.t -> EConstr.t val str : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val cnv3 : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val cnv : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val ldv1 : EConstr.t -> EConstr.t val ldv4 : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val ldv : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t 
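(* The numeric suffix records how many arguments the underlying Coq constant is
   applied to; for instance

     Monoid.dot1 ops             builds the partially applied constant "@dot ops",
     Monoid.dot4 ops n m p       adds the three type arguments,
     Monoid.dot  ops n m p x y   builds the fully applied composition of x and y,

   and the same convention is used for ldv, rdv and the other operations. *)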
val rdv1 : EConstr.t -> EConstr.t val rdv4 : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val rdv : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t end module LSyntax : sig val path : string list val pp : ('a -> 'b -> EConstr.t -> 'c -> 'd) -> 'a -> 'b -> 'c -> 'd val pp' : ('a -> 'b -> EConstr.t -> 'c) -> 'a -> 'b -> 'c lazy_t val bot : EConstr.t lazy_t val top : EConstr.t lazy_t val cup : EConstr.t -> EConstr.t -> EConstr.t val cap : EConstr.t -> EConstr.t -> EConstr.t val neg : EConstr.t -> EConstr.t end module Make_Syntax : functor (M : sig val typ : EConstr.constr Lazy.t end) -> sig val path : string list val pack_type : EConstr.t -> EConstr.t -> EConstr.t val pack : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val src_ : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val tgt_ : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val pp : ('a -> 'b -> EConstr.constr -> 'c -> 'd) -> 'a -> 'b -> 'c -> 'd val expr : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val zer : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val top : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val one : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val pls : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val cap : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val neg : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val dot : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val itr : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val str : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val cnv : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val ldv : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val rdv : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val var : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t val eval : EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t -> EConstr.t end val is_cup : Environ.env * Evd.evar_map -> level -> EConstr.t -> (EConstr.t -> EConstr.t -> 'a) -> (EConstr.t * EConstr.t array * int -> 'a) -> EConstr.t * EConstr.t array * int -> 'a val is_cap : Environ.env * Evd.evar_map -> level -> EConstr.t -> (EConstr.t -> EConstr.t -> 'a) -> (EConstr.t * EConstr.t array * int -> 'a) -> EConstr.t * EConstr.t array * int -> 'a val is_neg : Environ.env * Evd.evar_map -> level -> EConstr.t -> (EConstr.t -> 'a) -> (EConstr.t * EConstr.t array * int -> 'a) -> EConstr.t * EConstr.t array * int -> 'a val is_itr : Environ.env * Evd.evar_map -> level -> EConstr.t -> EConstr.t -> (EConstr.t -> 'a) -> (EConstr.t * EConstr.t array * int -> 'a) -> EConstr.t * EConstr.t array * int -> 'a val is_str : Environ.env * Evd.evar_map -> level -> EConstr.t -> EConstr.t -> (EConstr.t -> 'a) -> (EConstr.t * EConstr.t array * int -> 'a) -> EConstr.t * EConstr.t array * int -> 'a val is_cnv : Environ.env * Evd.evar_map -> level -> EConstr.t -> EConstr.t -> EConstr.t -> (EConstr.t -> 'a) -> (EConstr.t * EConstr.t array * int -> 'a) -> EConstr.t * EConstr.t array * int -> 'a val is_dot : Environ.env * Evd.evar_map -> EConstr.t -> (EConstr.t -> 'a) -> 'a -> EConstr.t -> (EConstr.t -> 'a -> EConstr.t -> EConstr.t -> 'b) -> (EConstr.t * 
EConstr.t array * int -> 'b) -> EConstr.t * EConstr.t array * int -> 'b val is_ldv : Environ.env * Evd.evar_map -> level -> EConstr.t -> (EConstr.t -> 'a) -> 'a -> EConstr.t -> (EConstr.t -> 'a -> EConstr.t -> EConstr.t -> 'b) -> (EConstr.t * EConstr.t array * int -> 'b) -> EConstr.t * EConstr.t array * int -> 'b val is_rdv : Environ.env * Evd.evar_map -> level -> EConstr.t -> (EConstr.t -> 'a) -> 'a -> EConstr.t -> (EConstr.t -> 'a -> EConstr.t -> EConstr.t -> 'b) -> (EConstr.t * EConstr.t array * int -> 'b) -> EConstr.t * EConstr.t array * int -> 'b relation-algebra-v.1.7.9/src/fold.ml000066400000000000000000000155651440504774100172730ustar00rootroot00000000000000(** Definition of the [ra_fold] tactic, used to fold concrete Relation algebra expressions *) (*i camlp4deps: "parsing/grammar.cma" i*) (*i camlp4use: "pa_extend.cmo" i*) open Plugins.Common open Constr open EConstr open Context.Named.Declaration open Proofview let ra_fold_term env sigma ops ob t = let _,tops = Typing.type_of env sigma ops in (* FIXME: leak? *) let rec fill sigma ops tops = if EConstr.eq_constr sigma tops (Lazy.force Monoid.ops) then (sigma,ops) else match kind sigma (Termops.strip_outer_cast sigma tops) with | Prod(_,s,t) -> let sigma,x = new_evar env sigma s in fill sigma (mkApp(ops,[|x|])) t | _ -> error "provided argument is not a monoid operation" in let sigma,ops = fill sigma ops tops in let sigma = ref sigma in let obt = Monoid.ob ops in (* TOTKINK: Use Evarconv.conv ? *) let unifiable sg env x y = try sigma := Unification.w_unify env sg Reduction.CONV x y; true with _ -> false in let is_pls env s' t' = is_cup (env,!sigma) max_level (Monoid.mor ops s' t') in let is_cap env s' t' = is_cap (env,!sigma) max_level (Monoid.mor ops s' t') in let is_neg env s' t' = is_neg (env,!sigma) max_level (Monoid.mor ops s' t') in let is_dot env = is_dot (env,!sigma) ops (fun _ -> ()) () in let is_itr env = is_itr (env,!sigma) max_level ops in let is_str env = is_str (env,!sigma) max_level ops in let is_cnv env = is_cnv (env,!sigma) max_level ops in let is_ldv env = is_ldv (env,!sigma) max_level ops (fun _ -> ()) () in let is_rdv env = is_rdv (env,!sigma) max_level ops (fun _ -> ()) () in (* folding a relation algebra term [e], with domain [s'] and codomain [t'] *) let rec ra_fold env s' t' e = let k' _ = let x = Monoid.one ops s' in if convertible env !sigma e x then x else let x = Lattice.bot (Monoid.mor ops s' t') in if convertible env !sigma e x then x else let x = Lattice.top (Monoid.mor ops s' t') in if convertible env !sigma e x then x else gen_fold env e in match kind !sigma (Termops.strip_outer_cast !sigma e) with App(c,ca) -> (* note that we give priority to dot/one over cap/top (they coincide on flat structures) *) is_dot env s' (fun x () r' y -> Monoid.dot ops s' r' t' (ra_fold env s' r' x) (ra_fold env r' t' y)) ( is_pls env s' t' (fun x y -> Lattice.cup (Monoid.mor ops s' t') (ra_fold env s' t' x) (ra_fold env s' t' y)) ( is_cap env s' t' (fun x y -> Lattice.cap (Monoid.mor ops s' t') (ra_fold env s' t' x) (ra_fold env s' t' y)) ( is_neg env s' t' (fun x -> Lattice.neg (Monoid.mor ops s' t') (ra_fold env s' t' x)) ( is_itr env s' (fun x -> Monoid.itr ops s' (ra_fold env s' s' x)) ( is_str env s' (fun x -> Monoid.str ops s' (ra_fold env s' s' x)) ( is_cnv env s' t' (fun x -> Monoid.cnv ops t' s' (ra_fold env t' s' x)) ( is_ldv env s' (fun x () r' y -> Monoid.ldv ops r' s' t' (ra_fold env r' s' x) (ra_fold env r' t' y)) ( is_rdv env s' (fun x () r' y -> Monoid.rdv ops r' t' s' (ra_fold env t' r' x) (ra_fold env 
s' r' y)) ( k'))))))))) (c,ca,Array.length ca) | _ -> k' () and gen_fold env e = match kind !sigma (Termops.strip_outer_cast !sigma e) with | App(c,ca) -> mkApp(c,Array.map (fold env) ca) | Prod(x,e,f) -> mkProd(x, fold env e, fold (push x e env) f) | Lambda(x,t,e) -> mkLambda(x, t, fold (push x t env) e) | LetIn(x,e,t,f) -> mkLetIn(x, fold env e, t, fold (push x t env )f) | Case(ci,u,pms,t,iv,e,f) -> let map i (nas, c as br) = let ctx = expand_branch env !sigma u pms (ci.ci_ind, i + 1) br in (nas, fold (push_rel_context ctx env) c) in mkCase(ci, u, pms, t, iv, fold env e, Array.mapi map f) | _ -> e and fold env e = let _,t = Typing.type_of env !sigma e in match ob with | Some o when convertible env !sigma t (Lattice.car (Monoid.mor ops o o)) -> ra_fold env o o e | Some o when EConstr.eq_constr !sigma t mkProp -> (match kind !sigma (Termops.strip_outer_cast !sigma e) with | App(c,ca) when 2 <= Array.length ca -> let n = Array.length ca in let rel = (partial_app (n-2) c ca) in let lops = Monoid.mor ops o o in let leq = Lattice.leq1 lops in let weq = Lattice.weq1 lops in if unifiable !sigma env rel weq then mkApp(weq,[|ra_fold env o o ca.(n-2);ra_fold env o o ca.(n-1)|]) else if unifiable !sigma env rel leq then mkApp(leq,[|ra_fold env o o ca.(n-2);ra_fold env o o ca.(n-1)|]) else gen_fold env e | _ -> gen_fold env e) | _ when EConstr.eq_constr !sigma t mkProp -> (match kind !sigma (Termops.strip_outer_cast !sigma e) with | App(c,ca) when 2 <= Array.length ca -> let n = Array.length ca in let rel = (partial_app (n-2) c ca) in let sg,s = new_evar env !sigma obt in let sg,t = new_evar env sg obt in let lops = Monoid.mor ops s t in let leq = Lattice.leq1 lops in let weq = Lattice.weq1 lops in if unifiable sg env rel weq then mkApp(weq,[|ra_fold env s t ca.(n-2);ra_fold env s t ca.(n-1)|]) else if unifiable sg env rel leq then mkApp(leq,[|ra_fold env s t ca.(n-2);ra_fold env s t ca.(n-1)|]) else gen_fold env e | _ -> gen_fold env e) | _ -> let sg,s' = new_evar env !sigma obt in let sg,t' = new_evar env sg obt in if unifiable sg env t (Lattice.car (Monoid.mor ops s' t')) then ra_fold env s' t' e else gen_fold env e in let t = fold env t in t,!sigma let ra_fold_concl ops ob = Goal.enter (fun goal -> let env = Tacmach.pf_env goal in let f,sigma = ra_fold_term env (Tacmach.project goal) ops ob (Tacmach.pf_concl goal) in Proofview.tclORELSE (tclTHEN (Unsafe.tclEVARS sigma) (Tactics.convert_concl ~cast:false ~check:true f DEFAULTcast)) (fun (e, info) -> Feedback.msg_warning (Printer.pr_leconstr_env env sigma f); tclZERO ~info e)) let ra_fold_hyp ops ob hyp = Proofview.Goal.enter begin fun gl -> let env = Tacmach.pf_env gl in let sigma = Tacmach.project gl in let decl = Tacmach.pf_get_hyp hyp gl in let id,ddef,dtyp = to_tuple decl in let decl,sigma = match ddef with | Some def -> (* try to fold both the body and the type of local definitions *) let def,sg = ra_fold_term env sigma ops ob def in let typ,sigma = ra_fold_term env sg ops ob dtyp in LocalDef(id,def,typ),sigma | None -> (* only fold the type of local assumptions *) let typ,sigma = ra_fold_term env sigma ops ob dtyp in LocalAssum(id,typ),sigma in tclTHEN (Unsafe.tclEVARS sigma) (Tactics.convert_hyp ~check:true ~reorder:true decl) end let ra_fold_hyps ops ob = List.fold_left (fun acc hyp -> tclTHEN (ra_fold_hyp ops ob hyp) acc) (tclUNIT()) let ra_fold_all ops ob = Goal.enter (fun goal -> let hyps = Goal.hyps goal in List.fold_left (fun acc hyp -> tclTHEN (ra_fold_hyp ops ob (get_id hyp)) acc) (ra_fold_concl ops ob) hyps) 
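(* A minimal sketch (hypothetical, not part of the plugin) of how [ra_fold_term] is
   meant to be called: it returns both the folded term and the possibly extended
   evar map, which must be threaded back into the proof state, as done by
   [ra_fold_concl] and [ra_fold_hyp] above:

     let fold_concl ops ob =
       Proofview.Goal.enter (fun gl ->
         let env = Tacmach.pf_env gl and sigma = Tacmach.project gl in
         let t', sigma' = ra_fold_term env sigma ops ob (Tacmach.pf_concl gl) in
         Proofview.tclTHEN
           (Proofview.Unsafe.tclEVARS sigma')
           (Tactics.convert_concl ~cast:false ~check:true t' Constr.DEFAULTcast))
*)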
relation-algebra-v.1.7.9/src/fold.mli000066400000000000000000000004061440504774100174300ustar00rootroot00000000000000val ra_fold_concl : EConstr.constr -> EConstr.t option -> unit Proofview.tactic val ra_fold_hyps : EConstr.constr -> EConstr.t option -> Names.Id.t list -> unit Proofview.tactic val ra_fold_all : EConstr.constr -> EConstr.t option -> unit Proofview.tactic relation-algebra-v.1.7.9/src/fold_g.mlg000066400000000000000000000011751440504774100177400ustar00rootroot00000000000000DECLARE PLUGIN "coq-relation-algebra.fold" { open Ltac_plugin open Stdarg open Fold } (* tactic grammar entries *) TACTIC EXTEND ra_fold | [ "ra_fold" constr(ops) ] -> { ra_fold_concl ops None } | [ "ra_fold" constr(ops) constr(ob)] -> { ra_fold_concl ops (Some ob) } | [ "ra_fold" constr(ops) "in" var_list(l)] -> { ra_fold_hyps ops None l } | [ "ra_fold" constr(ops) constr(ob) "in" var_list(l)] -> { ra_fold_hyps ops (Some ob) l } END TACTIC EXTEND ra_fold_in_star | [ "ra_fold" constr(ops) "in" "*"] -> { ra_fold_all ops None } | [ "ra_fold" constr(ops) constr(ob) "in" "*"] -> { ra_fold_all ops (Some ob) } END relation-algebra-v.1.7.9/src/fold_g.mli000066400000000000000000000000001440504774100177240ustar00rootroot00000000000000relation-algebra-v.1.7.9/src/kat_dec.ml000066400000000000000000000115511440504774100177300ustar00rootroot00000000000000(** * A simple algorithm for deciding KAT (in)equivalence (providing counter-examples in case of failure) Computing counter-examples in OCaml has three advantages: - it's faster so that failures are detected earlier; - the Coq algorithm don't need to compute them; - the proof of correctness of the Coq algorithm is not polluted by the corresponding additional computations. There are two inconvenients: - the code is duplicated, and the OCaml one could be wrong; - in case of success we do the computations twice (in OCaml, then in Coq). 
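As an illustration of the interface below (a hypothetical client, with both propositional and Kleene variables numbered from 0):

      let a = g_prd (p_var 0)        (* the test variable 0   *)
      let p = g_rel 0                (* the Kleene variable 0 *)
      let e = g_dot a p              (* a·p                   *)
      let f = g_dot a (g_dot a p)    (* a·a·p                 *)
      let _ = match kat_weq e f with
        | None -> ()                 (* expected here: tests are idempotent, so e and f are equivalent *)
        | Some (w, alpha) ->         (* otherwise, (w, alpha) is a guarded string separating them:     *)
            ignore (w, alpha)        (* w lists the (atom, letter) pairs read, alpha is the final atom *)
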
*) (** ** KAT expressions *) type var = int (* propositional variables *) type rel = int (* relational variables *) (* predicates *) type pred = | P_bot | P_top | P_cup of pred * pred | P_cap of pred * pred | P_neg of pred | P_var of var let p_bot = P_bot let p_top = P_top let p_cup x y = P_cup(x,y) let p_cap x y = P_cap(x,y) let p_neg x = P_neg x let p_var x = P_var x (* propositional atoms *) type atom = (var*bool) list (* KAT expresions *) type gregex = | G_rel of rel | G_prd of pred | G_pls of gregex * gregex | G_dot of gregex * gregex | G_itr of gregex let g_rel x = G_rel x let g_prd x = G_prd x let g_zer = G_prd P_bot let g_one = G_prd P_top let g_pls x y = G_pls(x,y) let g_dot x y = G_dot(x,y) let g_itr e = G_itr e let g_str e = G_pls (g_one,(G_itr e)) (** ** very basic algorithm for deciding KAT (in)equations, through partial derivatives *) (** (actually extracted from the formalised one in Coq, and then reworked manually to include counter-example generation) *) (* sorted list membership *) let rec mem x = function | [] -> false | y::q -> match compare x y with | 1 -> mem x q | 0 -> true | _ -> false (* sorted list insertion *) let rec insert x = function | [] -> [x] | y::q as l -> match compare x y with | 1 -> y::insert x q | 0 -> l | _ -> x::l (* sorted union of sorted lists *) let rec union h k = match h,k with | l,[] | [],l -> l | x::h', y::k' -> match compare x y with | 1 -> y::union h k' | 0 -> y::union h' k' | _ -> x::union h' k let rec epsilon_pred a = function | P_bot -> false | P_top -> true | P_cup (e,f) -> epsilon_pred a e || epsilon_pred a f | P_cap (e,f) -> epsilon_pred a e && epsilon_pred a f | P_neg e -> not (epsilon_pred a e) | P_var i -> mem i a let rec epsilon a = function | G_rel i -> false | G_prd p -> epsilon_pred a p | G_pls (e,f) -> epsilon a e || epsilon a f | G_dot (e,f) -> epsilon a e && epsilon a f | G_itr e -> epsilon a e let rec pderiv a i = function | G_rel j -> if i=j then [g_one] else [] | G_prd p -> [] | G_pls (e,f) -> union (pderiv a i e) (pderiv a i f) | G_dot (e,f) -> let l = List.map (fun e' -> G_dot(e',f)) (pderiv a i e) in if epsilon a e then union l (pderiv a i f) else l | G_itr e as ei -> let es = G_pls(g_one,ei) in List.map (fun e' -> G_dot (e',es)) (pderiv a i e) let epsilon' a = List.exists (epsilon a) let pderiv' a i l = List.fold_right (fun e -> union (pderiv a i e)) l [] let rec vars = function | G_rel i -> [i] | G_prd p -> [] | G_pls (e,f) | G_dot(e,f)-> union (vars e) (vars f) | G_itr e -> vars e let rec vars_pred = function | P_bot | P_top -> [] | P_cup (e,f) | P_cap (e,f) -> union (vars_pred e) (vars_pred f) | P_neg e -> vars_pred e | P_var i -> [i] let rec pvars = function | G_rel i -> [] | G_prd p -> vars_pred p | G_pls (e,f) | G_dot(e,f)-> union (pvars e) (pvars f) | G_itr e -> pvars e let obind f = function | `Some x -> f x | `Err a -> `Err a let rec ofold f l y = match l with | [] -> `Some y | x::q -> obind (f x) (ofold f q y) let loop_aux vars w e f a todo = if epsilon' a e = epsilon' a f then `Some (List.fold_right (fun i x -> ((a,i)::w, pderiv' a i e, pderiv' a i f)::x) vars todo) else `Err a let rec loop vars atoms rel = function | [] -> None | (w,e,f)::todo -> if mem (e,f) rel then loop vars atoms rel todo else match ofold (loop_aux vars w e f) atoms todo with | `Some todo -> loop vars atoms (insert (e,f) rel) todo | `Err a -> Some (List.rev w,a) let rec atoms = function | [] -> [[]] | x::q -> let f = atoms q in f @ List.map (fun q -> x::q) f let rec ext w pvars = match w,pvars with | [],_ -> List.map (fun x -> 
(x,false)) pvars | _,[] -> failwith "ext: w not in pvars" | x::w',y::pvars' -> match compare x y with | 1 -> (y,false)::ext w pvars' | 0 -> (x,true)::ext w' pvars' | _ -> failwith "ext: w not in pvars" let kat_weq' pvars vars e f = let atoms = atoms pvars in match loop vars atoms [] [([],e,f)] with | Some (w,a) -> Some (List.map (fun (a,i) -> (ext a pvars,i)) w, ext a pvars) | None -> None let kat_weq e f = kat_weq' (pvars (G_pls(e,f))) (vars (G_pls(e,f))) [e] [f] let kat_leq e f = kat_weq' (pvars (G_pls(e,f))) (vars (G_pls(e,f))) (union [e] [f]) [f] relation-algebra-v.1.7.9/src/kat_dec.mli000066400000000000000000000013141440504774100200750ustar00rootroot00000000000000type var = int (* propositional variables *) type rel = int (* relational variables *) type atom = (var*bool) list (* propositional atoms *) type pred (* predicates *) type gregex (* KAT predessions *) val p_bot: pred val p_top: pred val p_var: var -> pred val p_neg: pred -> pred val p_cup: pred -> pred -> pred val p_cap: pred -> pred -> pred val g_zer: gregex val g_one: gregex val g_rel: rel -> gregex val g_prd: pred -> gregex val g_itr: gregex -> gregex val g_str: gregex -> gregex val g_pls: gregex -> gregex -> gregex val g_dot: gregex -> gregex -> gregex val kat_weq: gregex -> gregex -> ((atom * rel) list * atom) option val kat_leq: gregex -> gregex -> ((atom * rel) list * atom) option relation-algebra-v.1.7.9/src/kat_reification.ml000066400000000000000000000412001440504774100214630ustar00rootroot00000000000000(*i camlp4deps: "parsing/grammar.cma" i*) (*i camlp4use: "pa_extend.cmo" i*) (** KAT reification plugin: - a tactic to perform KAT reification, possibly checking that the corresponding equation holds, and displaying a counter-example in case it doesn't - a tactic to compute the universal KAT expression corresponding to a given goal, to ease elimination of Hoare hypotheses in the [hkat] tactic. 
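When checking is requested, the reified AST is first erased to the untyped syntax of [Kat_dec] and compared with the OCaml decision procedure; only on success do we proceed with the actual conversion of the goal, letting the certified Coq procedure redo the work. A hypothetical driver ([proceed] and [fail_showing] being placeholder names):

      match AST.equiv lhs rhs with
      | None -> proceed ()              (* reify; the Coq-side decision procedure will succeed *)
      | Some trace -> fail_showing trace
          (* the guarded-string counter-example, translated back to a readable
             Coq term by [AST.parse_trace] before being displayed *)
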
*) open Plugins.Common open Constr open EConstr (* RelationAlgebra.kat Coq module *) module KAT = struct let path = ra_path@["kat"] let kar = get_const path "kar" let tst = get_fun_2 path "tst" let inj2 = get_fun_2 path "inj" let inj = get_fun_3 path "inj" end (* RelationAlgebra.kat_reification module *) module Pack = struct let path = ra_path@["kat_reification"] let var = get_const path "var" let expr = get_fun_5 path "kat_expr" let eval = get_fun_7 path "eval" let v_get = get_fun_3 path "v_get" let v_add = get_fun_5 path "v_add" let v_L = get_fun_2 path "v_L" let inj = get_fun_5 path "e_inj" let e_var = get_fun_4 path "e_var" let p_var = get_fun_1 path "p_var" let s' = get_fun_3 path "s'" let t' = get_fun_3 path "t'" end module Syntax = Make_Syntax(struct let typ = Pack.var end) (* association tables *) module Tbl : sig type 'a t (* create an empty table *) val create: unit -> 'a t (* [insert gl t x y z] adds the association [x->(y,z)] to [t] and returns the corresponding (Ocaml) index ; [z] is an arbitrary information, to store with [x] [gl] is the current goal, used to compare terms *) val insert: Environ.env -> Evd.evar_map -> 'a t -> constr -> constr -> 'a -> int (* [to_env t typ def] returns (coq) environment corresponding to [t], yielding elements of type [typ], with [def] as default value *) val to_env: 'a t -> constr -> constr -> constr (* [get t i] returns the (coq) value of the i-th element, together with the associated information *) val get: 'a t -> int -> constr*'a end = struct type 'a t = ((constr*constr*'a) list) ref let create () = ref [] let rec find env sigma x = function | [] -> raise Not_found | (x',_,_)::q -> if convertible env sigma x x' then 1+List.length q else find env sigma x q let get t i = let l = !t in let (x,_,z) = List.nth l (List.length l-i) in (x,z) let insert env sigma t x y z = let l = !t in try find env sigma x l with Not_found -> t := (x,y,z)::l; 1+List.length l let to_env t typ def = match !t with | [] -> mkLambda (Context.anonR,Lazy.force Pos.t,def) | [_,x,_] -> mkLambda (Context.anonR,Lazy.force Pos.t,x) | (_,x,_)::q -> Pos.sigma_get typ x (snd (List.fold_left (fun (i,acc) (_,x,_) -> (i-1, Pos.sigma_add typ (Pos.of_int i) x acc) ) (List.length q,Pos.sigma_empty typ) q )) end (* AST for KAT expressions *) module AST = struct type idx = int type t = | Bot | Top | Tst of int | Neg of t | Cap of t*t | Cup of t*t type e = | Zer of idx*idx | One of idx | Var of int | Prd of idx*t | Pls of idx*idx*e*e | Dot of idx*idx*idx*e*e | Itr of idx*e | Str of idx*e (* constructing reified Coq terms out of the AST *) let rec tread = function | Bot -> Lazy.force LSyntax.bot | Top -> Lazy.force LSyntax.top | Tst(i) -> Pack.p_var (Pos.of_int i) | Neg(e) -> LSyntax.neg (tread e) | Cap(e,f) -> LSyntax.cap (tread e) (tread f) | Cup(e,f) -> LSyntax.cup (tread e) (tread f) let read kops tenv_ref env_ref src_ tgt_ = let rec read = function | Zer(s,t) -> Syntax.zer src_ tgt_ (Pos.of_int s) (Pos.of_int t) | One(s) -> Syntax.one src_ tgt_ (Pos.of_int s) | Var(i) -> Pack.e_var kops tenv_ref env_ref (Pos.of_int i) | Prd(s,e) -> Pack.inj kops tenv_ref env_ref (Pos.of_int s) (tread e) | Pls(s,t,e,f) -> Syntax.pls src_ tgt_ (Pos.of_int s) (Pos.of_int t) (read e) (read f) | Dot(r,s,t,e,f) -> Syntax.dot src_ tgt_ (Pos.of_int r) (Pos.of_int s) (Pos.of_int t) (read e) (read f) | Itr(s,e) -> Syntax.itr src_ tgt_ (Pos.of_int s) (read e) | Str(s,e) -> Syntax.str src_ tgt_ (Pos.of_int s) (read e) in read (* OCaml decision procedure *) open Kat_dec let rec topt = function | Bot -> p_bot | 
Top -> p_top | Tst(i) -> p_var i | Neg(e) -> p_neg (topt e) | Cap(e,f) -> p_cap (topt e) (topt f) | Cup(e,f) -> p_cup (topt e) (topt f) let rec opt = function | Zer(_,_) -> g_zer | One(_) -> g_one | Var(i) -> g_rel i | Prd(_,e) -> g_prd (topt e) | Pls(_,_,e,f) -> g_pls (opt e) (opt f) | Dot(_,_,_,e,f) -> g_dot (opt e) (opt f) | Itr(_,e) -> g_itr (opt e) | Str(_,e) -> g_str (opt e) let equiv e f = kat_weq (opt e) (opt f) (* parsing back witnesses in case of failure *) let parse_trace kops mops lops env penv (_,p') (_,q' as qq) (w,a) = let parse_literal (n,n') (i,b) = try let i,_ = Tbl.get (fst (Hashtbl.find penv n)) i in `V (if b then i else Lattice.neg (lops n') i) with Not_found -> `One n' (* not needed once we have typed counter-examples *) in let rec parse_witness (_,n' as nn) = function | [] -> `One n' | [x] -> parse_literal nn x | x::q -> match parse_literal nn x with | `One _ -> parse_witness nn q | x -> `Cap (n', parse_witness nn q, x) in let inj n = function | `One m -> `One m | x -> `Inj (n,x) in let dot n m p x y = match x,y with | `One _,z | z,`One _ -> z | _ -> `Dot(n,m,p,x,y) in let ddot x y i = let i,((_,n' as nn),(_,m')) = Tbl.get env i in dot p' n' m' (dot p' n' n' x (inj n' (parse_witness nn y))) (`V i) in let rec parse x = function | [] -> x | (y,i)::q -> parse (ddot x y i) q in let rec to_constr = function | `V i -> i | `One n -> Monoid.one mops n | `Inj(n,e) -> KAT.inj kops n (to_constr e) | `Cap(n,e,f) -> Lattice.cap (lops n) (to_constr e) (to_constr f) | `Dot(n,m,p,e,f) -> Monoid.dot mops n m p (to_constr e) (to_constr f) in let t = dot p' q' q' (parse (`One p') w) (inj q' (parse_witness qq a)) in to_constr t end (** KAT reification tactic - [kat] indicates whether failure messages should mention KA or KAT (since the tactic fo KA is built on top of that for KAT) - [check] indicates whether we should run the OCaml algorithm first, and display a counter-example in case of failure. like for ra_reification, this tactic simply converts the goal into a sequence of "let ... in", so that we can later get all reification ingredients from Ltac, just by doing "intros ..." 
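Schematically, an (in)equation between two KAT expressions [x] and [y] is thus converted into a goal of the shape

      let tenv := (* type environment *) ... in
      let env  := (* Kleene variables, packed with their types *) ... in
      let penv := (* predicate variables, indexed by type *) ... in
      let src  := ... in let tgt := ... in   (* source/target type of each variable *)
      let lhs  := (* reified syntax for x *) ... in
      let rhs  := (* reified syntax for y *) ... in
      eval lhs ≡ eval rhs   (* or ≦, according to the original goal *)

so that [intros tenv env penv src tgt lhs rhs] recovers all these ingredients in Ltac.
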
*) let reify_kat_goal ?kat check = Proofview.Goal.enter begin fun goal -> let env0 = Tacmach.pf_env goal in let sigma = Tacmach.project goal in let concl = Tacmach.pf_concl goal in let msg = match kat with | Some b when EConstr.eq_constr sigma b (Lazy.force Coq.true_) -> "KAT" | _ -> "KA" in (* variables for referring to the environments *) let tenv_n,tenv_ref = fresh_name env0 "tenv" in let env_n,env_ref = fresh_name env0 "env" in let penv_n,penv_ref = fresh_name env0 "penv" in (* table associating indices to encountered types *) let tenv = Tbl.create() in let insert_type t = Tbl.insert env0 sigma tenv t t () in (* table associating indices to encountered atoms *) let env = Tbl.create() in let insert_atom mops x (s,_ as ss) (t,_ as tt) = Tbl.insert env0 sigma env x (Syntax.pack mops tenv_ref (Pos.of_int s) (Pos.of_int t) x) (ss,tt) in (* table associating tables for predicates, for each type *) let penv = Hashtbl.create 7 in let insert_pred x s s' = let t = try fst (Hashtbl.find penv s) with Not_found -> let t = Tbl.create() in Hashtbl.add penv s (t,s'); t in Tbl.insert env0 sigma t x x () in (* get the (in)equation *) let rel,ca = match kind sigma (Termops.strip_outer_cast sigma concl) with | App(c,ca) -> if EConstr.eq_constr sigma c (Lazy.force Lattice.weq) then mkApp (c,[|ca.(0)|]), ca else if EConstr.eq_constr sigma c (Lazy.force Lattice.leq) then mkApp (c,[|ca.(0)|]), ca else error "unrecognised goal" | _ -> error "unrecognised goal" in (* get the monoid operations and the domain/codomain types *) let mops,src',tgt' = match kind sigma (Termops.strip_outer_cast sigma ca.(0)) with | App(c,ca) when EConstr.eq_constr sigma c (Lazy.force Monoid.mor0) -> ca.(0),ca.(1),ca.(2) | _ -> error "could not find monoid operations" in (* get the kat operations *) let kops = match kind sigma (Termops.strip_outer_cast sigma mops) with | App(c,ca) when EConstr.eq_constr sigma c (Lazy.force KAT.kar) -> ca.(0) | _ -> error "could not find KAT operations" in let lops = KAT.tst kops in let src = insert_type src' in let tgt = insert_type tgt' in let pck = Syntax.pack_type mops tenv_ref in (* type of packed elements *) let typ = Monoid.ob mops in (* type of types *) let src_v,(src_n,src_) = Pack.s' kops tenv_ref env_ref, fresh_name env0 "src" in let tgt_v,(tgt_n,tgt_) = Pack.t' kops tenv_ref env_ref, fresh_name env0 "tgt" in let es = env0, sigma in let is_pls s' t' = is_cup es max_level (Monoid.mor mops s' t') in let is_dot = is_dot es mops insert_type in let is_itr = is_itr es max_level mops in let is_str = is_str es max_level mops in let is_cup s' = is_cup es max_level (lops s') in let is_cap s' = is_cap es max_level (lops s') in let is_neg s' = is_neg es max_level (lops s') in let is_inj s' k k' (c,ca,n as x) = if n >= 1 && convertible env0 sigma (partial_app (n-1) c ca) (KAT.inj2 kops s') then k ca.(n-1) else k' x in (* reification of a lattice term [e], with domain [s] (s: index of the domain, s': actual domain) *) let rec lreify (s,s' as ss) e = let k' _ = if convertible env0 sigma e (Lattice.top (lops s')) then AST.Top else if convertible env0 sigma e (Lattice.bot (lops s')) then AST.Bot else AST.Tst (insert_pred e s s') in match kind sigma (Termops.strip_outer_cast sigma e) with App(c,ca) -> is_cup s' (fun x y -> AST.Cup (lreify ss x, lreify ss y)) ( is_cap s' (fun x y -> AST.Cap (lreify ss x, lreify ss y)) ( is_neg s' (fun x -> AST.Neg (lreify ss x)) ( k'))) (c,ca,Array.length ca) | _ -> k' () in (* reification of a term [e], with domain [s] and codomain [t] (s: index of the domain, s': actual 
domain -- same for t) *) let rec reify (s,s' as ss) (t,t' as tt) e = let k' _ = if convertible env0 sigma e (Monoid.one mops s') then AST.One(s) else if convertible env0 sigma e (Lattice.bot (Monoid.mor mops s' t')) then AST.Zer(s,t) else AST.Var (insert_atom mops e ss tt) in match kind sigma (Termops.strip_outer_cast sigma e) with App(c,ca) -> is_dot s s' (fun x r r' y -> AST.Dot (s, r, t, reify ss (r,r') x, reify (r,r') tt y)) ( is_pls s' t' (fun x y -> AST.Pls(s, t, reify ss tt x, reify ss tt y)) ( is_itr s' (fun x -> AST.Itr(s, reify ss ss x)) ( is_str s' (fun x -> AST.Str(s, reify ss ss x)) ( is_inj s' (fun x -> AST.Prd(s, lreify ss x)) ( k'))))) (c,ca,Array.length ca) | _ -> k' () in (* reification of left and right members *) let lhs_v,(lhs_n,lhs) = reify (src,src') (tgt,tgt') ca.(1), fresh_name env0 "lhs" in let rhs_v,(rhs_n,rhs) = reify (src,src') (tgt,tgt') ca.(2), fresh_name env0 "rhs" in (* checking the equivalence in OCaml, and displaying potential counter-examples *) (match if check then AST.equiv lhs_v rhs_v else None with Some t -> let t = AST.parse_trace kops mops lops env penv (src,src') (tgt,tgt') t in Tacticals.tclFAIL (Pp.(++) (Pp.str (" not a "^msg^" theorem:\n")) (Printer.pr_leconstr_env (fst es) (snd es) t)) | None -> (* turning the ast in to coq constr *) let lhs_v = AST.read kops tenv_ref env_ref src_ tgt_ lhs_v in let rhs_v = AST.read kops tenv_ref env_ref src_ tgt_ rhs_v in let src = Pos.of_int src in let tgt = Pos.of_int tgt in (* apply "eval" around the reified terms *) let lhs = Pack.eval kops tenv_ref env_ref penv_ref src tgt lhs in let rhs = Pack.eval kops tenv_ref env_ref penv_ref src tgt rhs in let x = Pack.expr kops tenv_ref env_ref src tgt in (* construction of coq' types index *) let tenv = Tbl.to_env tenv typ src' in (* construction of coq' reification environment for atoms *) let env = let def = let one = Monoid.one mops src' in Syntax.pack mops tenv_ref src src one in Tbl.to_env env pck def in (* construction of coq' reification environment for predicates *) let penv = Pack.v_get kops tenv_ref (Hashtbl.fold (fun s (t,s') acc -> Pack.v_add kops tenv_ref acc (Pos.of_int s) (Tbl.to_env t (Lattice.car (lops s')) (Lattice.top (lops s'))) ) penv (Pack.v_L kops tenv_ref)) in (* reified goal conclusion: add the relation over the two evaluated members *) let reified = mkNamedLetIn sigma tenv_n tenv (mkArrowR (Lazy.force Pos.t) typ) ( mkNamedLetIn sigma env_n env (mkArrowR (Lazy.force Pos.t) pck) ( mkNamedLetIn sigma penv_n penv (mkProd (Context.anonR,Lazy.force Pos.t, mkArrowR (Lazy.force Pos.t) (Lattice.car (lops (mkApp (tenv_ref,[|mkRel 2|])))))) ( mkNamedLetIn sigma src_n src_v (mkArrowR (Lazy.force Pack.var) (Lazy.force Pos.t)) ( mkNamedLetIn sigma tgt_n tgt_v (mkArrowR (Lazy.force Pack.var) (Lazy.force Pos.t)) ( mkNamedLetIn sigma lhs_n lhs_v x ( mkNamedLetIn sigma rhs_n rhs_v x ( (mkApp (rel, [|lhs;rhs|]))))))))) in Proofview.tclORELSE (Tactics.convert_concl ~cast:false ~check:true reified DEFAULTcast) (fun (e, info) -> Feedback.msg_warning (Printer.pr_leconstr_env (fst es) (snd es) reified); Proofview.tclZERO ~info e)) end (** tactic to precompute the alphabet and the universal expression, for Hoare hypotheses elimination ([hkat]) *) let get_kat_alphabet = Proofview.Goal.enter begin fun goal -> let env0 = Tacmach.pf_env goal in let sigma = Tacmach.project goal in let concl = Tacmach.pf_concl goal in let rec insert x = function | [] -> [x] | x'::q as l -> if convertible env0 sigma x x' then l else x'::insert x q in (* get the (in)equation *) let ca = 
match kind sigma (Termops.strip_outer_cast sigma concl) with | App(c,ca) -> if EConstr.eq_constr sigma c (Lazy.force Lattice.weq) then ca else if EConstr.eq_constr sigma c (Lazy.force Lattice.leq) then ca else error "unrecognised goal" | _ -> error "unrecognised goal" in (* get the monoid operations and the domain/codomain types *) let mops,src',tgt' = match kind sigma (Termops.strip_outer_cast sigma ca.(0)) with | App(c,ca) when EConstr.eq_constr sigma c (Lazy.force Monoid.mor0) -> ca.(0),ca.(1),ca.(2) | _ -> error "could not find monoid operations" in (* get the kat operations *) let kops = match kind sigma (Termops.strip_outer_cast sigma mops) with | App(c,ca) when EConstr.eq_constr sigma c (Lazy.force KAT.kar) -> ca.(0) | _ -> error "could not find KAT operations" in let es = (env0, sigma) in let is_pls s' t' = is_cup es max_level (Monoid.mor mops s' t') in let is_dot = is_dot es mops ignore () in let is_itr = is_itr es max_level mops in let is_str = is_str es max_level mops in let is_inj s' k k' (c,ca,n as x) = if n >= 1 && convertible env0 sigma (partial_app (n-1) c ca) (KAT.inj2 kops s') then k ca.(n-1) else k' x in (* alphabet of a term [e], with domain [s] and codomain [t] (s: index of the domain, s': actual domain -- same for t) *) let rec alphabet acc s' t' e = let k' _ = if convertible env0 sigma e (Monoid.one mops s') then acc else if convertible env0 sigma e (Lattice.bot (Monoid.mor mops s' t')) then acc else insert e acc in match kind sigma (Termops.strip_outer_cast sigma e) with App(c,ca) -> is_dot s' (fun x () r' y -> alphabet (alphabet acc s' r' x) r' t' y) ( is_pls s' t' (fun x y -> alphabet (alphabet acc s' t' x) s' t' y) ( is_itr s' (alphabet acc s' s') ( is_str s' (alphabet acc s' s') ( is_inj s' (fun x -> acc) ( k'))))) (c,ca,Array.length ca) | _ -> k' () in (* getting the letters from the left and right members *) let alph = alphabet (alphabet [] src' tgt' ca.(2)) src' tgt' ca.(1) in let (alph_n,_) = fresh_name env0 "u" in let alph_v = List.fold_left (Lattice.cup (Monoid.mor mops src' tgt')) (Lattice.bot (Monoid.mor mops src' tgt')) alph in (* add the alphabet with a let-in *) let reified = mkNamedLetIn sigma alph_n alph_v (Lattice.car (Monoid.mor mops src' tgt')) concl in Tactics.convert_concl ~cast:false ~check:true reified DEFAULTcast end relation-algebra-v.1.7.9/src/kat_reification.mli000066400000000000000000000001621440504774100216360ustar00rootroot00000000000000val reify_kat_goal : ?kat:EConstr.t -> bool -> unit Proofview.tactic val get_kat_alphabet : unit Proofview.tactic relation-algebra-v.1.7.9/src/kat_reification_g.mlg000066400000000000000000000006731440504774100221510ustar00rootroot00000000000000DECLARE PLUGIN "coq-relation-algebra.kat" { open Stdarg open Ltac_plugin open Kat_reification } (* tactic grammar entries *) TACTIC EXTEND ra_kat_reify_nocheck | [ "ra_kat_reify_nocheck" constr(kat) ] -> { reify_kat_goal ~kat false } END TACTIC EXTEND ra_kat_reify_check | [ "ra_kat_reify" constr(kat) ] -> { reify_kat_goal ~kat true } END TACTIC EXTEND ra_get_kat_alphabet | [ "ra_get_kat_alphabet" ] -> { get_kat_alphabet } END relation-algebra-v.1.7.9/src/kat_reification_g.mli000066400000000000000000000000001440504774100221330ustar00rootroot00000000000000relation-algebra-v.1.7.9/src/mrewrite.ml000066400000000000000000000067411440504774100202010ustar00rootroot00000000000000(*i camlp4deps: "parsing/grammar.cma" i*) (*i camlp4use: "pa_extend.cmo" i*) (** Simple helper to define basic rewriting modulo A (associativity) tactics: (see the comments in [rewriting.v]) *) 
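(* Schematically, [extend] below decomposes the rewritten side of the hypothesis
   into its list of factors and picks the corresponding extension lemma: two
   factors use [Ext.*_2], three [Ext.*_3], four [Ext.*_4] (the primed variants
   serve the <- direction); longer products are rejected. A stand-alone sketch of
   the flattening step (hypothetical, ignoring the types and laws that the real
   [dots] function also collects):

     let rec factors sigma t =
       match EConstr.kind sigma t with
       | App (c, [| _ops; _n; _m; _p; x; y |])
         when EConstr.eq_constr sigma c (Lazy.force Monoid.dot0) ->
           factors sigma x @ [y]
       | _ -> [t]

   so that a left-associated product "x ⋅ y ⋅ z" yields the list [x; y; z]. *)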
open Plugins.Common open Constr open EConstr module Ext = struct let path = ra_path@["rewriting"] let leq_2 = get_fun_10 path "ext_leq_2" let leq_3 = get_fun_12 path "ext_leq_3" let leq_4 = get_fun_14 path "ext_leq_4" let weq_2 = get_fun_10 path "ext_weq_2" let weq_3 = get_fun_12 path "ext_weq_3" let weq_4 = get_fun_14 path "ext_weq_4" let leq_2' = get_fun_10 path "ext_leq_2'" let leq_3' = get_fun_12 path "ext_leq_3'" let leq_4' = get_fun_14 path "ext_leq_4'" let weq_2' = get_fun_10 path "ext_weq_2'" let weq_3' = get_fun_12 path "ext_weq_3'" let weq_4' = get_fun_14 path "ext_weq_4'" end let rec length sigma t = match kind sigma (Termops.strip_outer_cast sigma t) with | Prod(_,_,t) -> 1+length sigma t | _ -> 0 let extend ist k dir h = Proofview.Goal.enter (fun goal -> let fst,snd = match dir with `LR -> 2,1 | `RL -> 1,2 in let ext_2 rel = match dir,rel with | `LR,`Weq -> Ext.weq_2 | `RL,`Weq -> Ext.weq_2' | `LR,`Leq -> Ext.leq_2 | `RL,`Leq -> Ext.leq_2' in let ext_3 rel = match dir,rel with | `LR,`Weq -> Ext.weq_3 | `RL,`Weq -> Ext.weq_3' | `LR,`Leq -> Ext.leq_3 | `RL,`Leq -> Ext.leq_3' in let ext_4 rel = match dir,rel with | `LR,`Weq -> Ext.weq_4 | `RL,`Weq -> Ext.weq_4' | `LR,`Leq -> Ext.leq_4 | `RL,`Leq -> Ext.leq_4' in let sigma = ref (Tacmach.project goal) in let rec dots env t = match kind !sigma (Termops.strip_outer_cast !sigma t) with | App(c,ca) when EConstr.eq_constr !sigma c (Lazy.force Monoid.dot0) -> (match dots env ca.(4) with | None -> let ops = ca.(0) in let sg,l = new_evar env !sigma (Lazy.force Level.t) in let sg,laws = try Typeclasses.resolve_one_typeclass env sg (Monoid.laws l ops) with Not_found -> error "could not find monoid laws" in let l = Evarutil.nf_evar sg l in sigma := sg; Some(l,ops,laws,ca.(3), [ca.(1),ca.(4); ca.(2),ca.(5)]) | Some(l,ops,laws,r,q) -> Some(l,ops,laws,ca.(3), q@[ca.(2),ca.(5)])) | _ -> None in let rec ext env i h t = match kind !sigma (Termops.strip_outer_cast !sigma t) with | App(c,ca) -> (match if EConstr.eq_constr !sigma c (Lazy.force Lattice.weq) then Some `Weq else if EConstr.eq_constr !sigma c (Lazy.force Lattice.leq) then Some `Leq else None with | None -> error "the provided term does not end with a relation algebra (in)equation" | Some rel -> let n = Array.length ca in match dots env (ca.(n-fst)), ca.(n-snd) with | Some(l,ops,laws,p,[n,x;m,y]),v -> ext_2 rel l ops laws n m p x y v h | Some(l,ops,laws,q,[n,x;m,y;p,z]),v -> ext_3 rel l ops laws n m p q x y z v h | Some(l,ops,laws,r,[n,x;m,y;p,z;q,t]),v -> ext_4 rel l ops laws n m p q r x y z t v h | Some(_,_,_,_,_),_ -> error "pattern to large, please submit a feature wish" | None,_ -> h (* no need for modulo A rewriting *) ) | Prod(x,s,t) -> mkLambda(x,s,ext (push x s env) (i-1) (mkApp(h,[|mkRel i|])) t) | _ -> error "the provided term does not end with a relation algebra (in)equation" in let _,t = Tacmach.pf_type_of goal h in let h = ext (Proofview.Goal.env goal) (length !sigma t) h t in (* Tacticals.tclTHEN (Proofview.Unsafe.tclEVARS !sigma) *) (ltac_apply ist k h) ) relation-algebra-v.1.7.9/src/mrewrite.mli000066400000000000000000000002051440504774100203370ustar00rootroot00000000000000val extend : Geninterp.interp_sign -> Ltac_plugin.Tacinterp.value -> [< `LR | `RL ] -> EConstr.constr -> unit Proofview.tactic relation-algebra-v.1.7.9/src/mrewrite_g.mlg000066400000000000000000000005011440504774100206420ustar00rootroot00000000000000DECLARE PLUGIN "coq-relation-algebra.mrewrite" { open Ltac_plugin open Stdarg open Tacarg open Mrewrite } TACTIC EXTEND ra_extend_lr | [ "ra_extend" 
tactic(k) "->" constr(h) ] -> { extend ist k `LR h } END TACTIC EXTEND ra_extend_rl | [ "ra_extend" tactic(k) "<-" constr(h) ] -> { extend ist k `RL h } END relation-algebra-v.1.7.9/src/mrewrite_g.mli000066400000000000000000000000001440504774100206360ustar00rootroot00000000000000relation-algebra-v.1.7.9/src/packed_fold.mlpack000066400000000000000000000000141440504774100214200ustar00rootroot00000000000000Fold Fold_g relation-algebra-v.1.7.9/src/packed_kat.mlpack000066400000000000000000000000521440504774100212550ustar00rootroot00000000000000Kat_dec Kat_reification Kat_reification_g relation-algebra-v.1.7.9/src/packed_mrewrite.mlpack000066400000000000000000000000241440504774100223330ustar00rootroot00000000000000Mrewrite Mrewrite_g relation-algebra-v.1.7.9/src/packed_reification.mlpack000066400000000000000000000000321440504774100227700ustar00rootroot00000000000000Reification Reification_g relation-algebra-v.1.7.9/src/plugins.mlpack000066400000000000000000000000071440504774100206500ustar00rootroot00000000000000Common relation-algebra-v.1.7.9/src/reification.ml000066400000000000000000000177111440504774100206360ustar00rootroot00000000000000(*i camlp4deps: "parsing/grammar.cma" i*) (*i camlp4use: "pa_extend.cmo" i*) (** reification plugin, for the [ra_normalise], [ra_simpl], and [ra] tactics: we reify [monoid] and [lattice] operations into [syntax.expr] expressions *) open Plugins.Common open Constr open EConstr (* RelationAlgebra.mono Coq module *) (* module Mono = struct *) (* let path = ra_path@["mono"] *) (* let mono = get_fun_3 path "mono" *) (* end *) let retype c = Proofview.Goal.enter begin fun gl -> let (sigma, _) = Typing.type_of (Tacmach.pf_env gl) (Tacmach.project gl) c in Proofview.Unsafe.tclEVARS sigma end module Syntax = Make_Syntax(struct let typ = Pos.t end) module Tbl : sig type t (* create an empty table *) val create: unit -> t (* [insert gl t x y] adds the association [x->y] to [t] and returns the corresponding (coq) index ; [gl] is the current goal, used to compare terms *) val insert: Environ.env -> Evd.evar_map -> t -> constr -> constr -> constr (* [to_env t typ def] returns (coq) environment corresponding to [t], yielding elements of type [typ], with [def] as default value *) val to_env: t -> constr -> constr -> constr end = struct type t = ((constr*constr*constr) list * int) ref let create () = ref([],1) let rec find env sigma x = function | [] -> raise Not_found | (x',i,_)::q -> if convertible env sigma x x' then i else find env sigma x q let insert env sigma t x y = let l,i = !t in try find env sigma x l with Not_found -> let j = Pos.of_int i in t := ((x,j,y)::l,i+1); j let to_env t typ def = match fst !t with | [] -> mkLambda (Context.anonR,Lazy.force Pos.t,def) | [_,_,x] -> mkLambda (Context.anonR,Lazy.force Pos.t,x) | (_,_,x)::q -> Pos.sigma_get typ x (List.fold_left (fun acc (_,i,x) -> Pos.sigma_add typ i x acc ) (Pos.sigma_empty typ) q ) end (** main entry point: reification of the current goal. [l] is the level at which reification should be performed; this tactic simply converts the goal into a sequence of "let ... in", so that we can later get all reification ingredients from Ltac, just by doing "intros ..." 
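For instance, every atom encountered during reification is registered in the [Tbl] table defined above: [Tbl.insert] returns the Coq positive indexing the atom (inserting a convertible term twice yields the same index), and [Tbl.to_env] finally packages the whole table as a single Coq function over positives, which becomes the reification environment. A hypothetical client:

      let tbl = Tbl.create () in
      let i = Tbl.insert env sigma tbl x px in   (* px: value returned at index i            *)
      let j = Tbl.insert env sigma tbl y py in   (* j = i whenever y is convertible to x     *)
      Tbl.to_env tbl pck default                 (* a Coq term of type positive -> pck       *)
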
*) let reify_goal l = Proofview.Goal.enter begin fun goal -> let env0 = Tacmach.pf_env goal in let sigma = Tacmach.project goal in let concl = Tacmach.pf_concl goal in (* getting the level *) let l = read_level env0 sigma l in (* variables for referring to the environments *) let tenv_n,tenv_ref = fresh_name env0 "tenv" in let env_n,env_ref = fresh_name env0 "env" in (* table associating indices to encountered types *) let tenv = Tbl.create() in let insert_type t = Tbl.insert env0 sigma tenv t t in (* table associating indices to encountered atoms *) let env = Tbl.create() in let insert_atom ops x s s' t = Tbl.insert env0 sigma env x ((* lazy *) ( (* let m = *) (* try if s<>t then raise Not_found else *) (* let h = resolve_one_typeclass goal (Mono.mono ops s' x) in *) (* Syntax.im_true ops tenv_ref s x h *) (* with Not_found -> Syntax.im_false ops tenv_ref s t x *) (* in *) Syntax.pack ops tenv_ref s t x (* m *) )) in (* get the (in)equation *) let rel,lops,lhs,rhs = match kind sigma (Termops.strip_outer_cast sigma concl) with | App(c,ca) when EConstr.eq_constr sigma c (Lazy.force Lattice.leq_or_weq) -> mkApp (c,[|ca.(0);ca.(1)|]), ca.(1), ca.(2), ca.(3) | App(c,ca) when EConstr.eq_constr sigma c (Lazy.force Lattice.leq) || EConstr.eq_constr sigma c (Lazy.force Lattice.weq) -> mkApp (c,[|ca.(0)|]), ca.(0), ca.(1), ca.(2) | _ -> error "unrecognised goal" in (* get the monoid.ops and the domain/codomain types *) let ops,src',tgt' = match kind sigma (Termops.strip_outer_cast sigma lops) with | App(c,ca) when EConstr.eq_constr sigma c (Lazy.force Monoid.mor0) -> ca.(0),ca.(1),ca.(2) | _ -> error "could not find monoid operations" in let src = insert_type src' in let tgt = insert_type tgt' in let pck = Syntax.pack_type ops tenv_ref in (* type of packed elements *) let typ = Monoid.ob ops in (* type of types *) let src_v,(src_n,src_) = Syntax.src_ ops tenv_ref env_ref, fresh_name env0 "src" in let tgt_v,(tgt_n,tgt_) = Syntax.tgt_ ops tenv_ref env_ref, fresh_name env0 "tgt" in let es = env0, sigma in let is_pls s' t' = is_cup es l (Monoid.mor ops s' t') in let is_cap s' t' = is_cap es l (Monoid.mor ops s' t') in let is_neg s' t' = is_neg es l (Monoid.mor ops s' t') in let is_dot = is_dot es ops insert_type in let is_itr = is_itr es l ops in let is_str = is_str es l ops in let is_cnv = is_cnv es l ops in let is_ldv = is_ldv es l ops insert_type in let is_rdv = is_rdv es l ops insert_type in (* reification of a term [e], with domain [s] and codomain [t] *) let rec reify (s,s' as ss) (t,t' as tt) e = let k' _ = (* conversion here is untyped, so we need to ensure s' = t' when recognizing one *) if convertible env0 sigma e (Monoid.one ops s') && convertible env0 sigma s' t' then Syntax.one src_ tgt_ s else if l.has_bot && convertible env0 sigma e (Lattice.bot (Monoid.mor ops s' t')) then Syntax.zer src_ tgt_ s t else if l.has_top && convertible env0 sigma e (Lattice.top (Monoid.mor ops s' t')) then Syntax.top src_ tgt_ s t else Syntax.var src_ tgt_ (insert_atom ops e s s' t) in match kind sigma (Termops.strip_outer_cast sigma e) with App(c,ca) -> (* note that we give priority to dot/one over cap/top (they coincide on flat structures) *) is_dot s s' (fun x r r' y -> Syntax.dot src_ tgt_ s r t (reify ss (r,r') x) (reify (r,r') tt y)) ( is_pls s' t' (fun x y -> Syntax.pls src_ tgt_ s t (reify ss tt x) (reify ss tt y)) ( is_cap s' t' (fun x y -> Syntax.cap src_ tgt_ s t (reify ss tt x) (reify ss tt y)) ( is_neg s' t' (fun x -> Syntax.neg src_ tgt_ s t (reify ss tt x)) ( is_itr s' (fun x -> Syntax.itr src_ 
tgt_ s (reify ss ss x)) ( is_str s' (fun x -> Syntax.str src_ tgt_ s (reify ss ss x)) ( is_cnv s' t' (fun x -> Syntax.cnv src_ tgt_ t s (reify tt ss x)) ( is_ldv s s' (fun x r r' y -> Syntax.ldv src_ tgt_ r s t (reify (r,r') ss x) (reify (r,r') tt y)) ( is_rdv s s' (fun x r r' y -> Syntax.rdv src_ tgt_ r t s (reify tt (r,r') x) (reify ss (r,r') y)) ( k'))))))))) (c,ca,Array.length ca) | _ -> k' () in (* reification of left and right members *) let lhs_v,(lhs_n,lhs) = reify (src,src') (tgt,tgt') lhs, fresh_name env0 "lhs" in let rhs_v,(rhs_n,rhs) = reify (src,src') (tgt,tgt') rhs, fresh_name env0 "rhs" in (* apply "eval" around the reified terms *) let lhs = Syntax.eval ops tenv_ref env_ref src tgt lhs in let rhs = Syntax.eval ops tenv_ref env_ref src tgt rhs in let x = Syntax.expr src_ tgt_ src tgt in (* construction of coq' types index *) let tenv = Tbl.to_env tenv typ src' in (* construction of coq' reification environment *) let env = let def = let one = Monoid.one ops src' in Syntax.pack ops tenv_ref src src one (* (Syntax.im_false ops tenv_ref src src one) *) in Tbl.to_env env pck def in (* reified goal conclusion: add the relation over the two evaluated members *) let reified = mkNamedLetIn sigma tenv_n tenv (mkArrowR (Lazy.force Pos.t) typ) ( mkNamedLetIn sigma env_n env (mkArrowR (Lazy.force Pos.t) pck) ( mkNamedLetIn sigma src_n src_v (mkArrowR (Lazy.force Pos.t) (Lazy.force Pos.t)) ( mkNamedLetIn sigma tgt_n tgt_v (mkArrowR (Lazy.force Pos.t) (Lazy.force Pos.t)) ( mkNamedLetIn sigma lhs_n lhs_v x ( mkNamedLetIn sigma rhs_n rhs_v x ( (mkApp (rel, [|lhs;rhs|])))))))) in Proofview.tclORELSE (Tacticals.tclTHEN (retype reified) (Tactics.convert_concl ~cast:false ~check:true reified DEFAULTcast)) (fun (e, info) -> Feedback.msg_warning (Printer.pr_leconstr_env (fst es) (snd es) reified); Proofview.tclZERO ~info e) end relation-algebra-v.1.7.9/src/reification.mli000066400000000000000000000000641440504774100210000ustar00rootroot00000000000000val reify_goal : EConstr.t -> unit Proofview.tactic relation-algebra-v.1.7.9/src/reification_g.mlg000066400000000000000000000003321440504774100213020ustar00rootroot00000000000000DECLARE PLUGIN "coq-relation-algebra.reification" { open Stdarg open Ltac_plugin open Reification } (* tactic grammar entries *) TACTIC EXTEND ra_reify | [ "ra_reify" constr(level) ] -> { reify_goal level } END relation-algebra-v.1.7.9/src/reification_g.mli000066400000000000000000000000001440504774100212740ustar00rootroot00000000000000relation-algebra-v.1.7.9/theories/000077500000000000000000000000001440504774100170345ustar00rootroot00000000000000relation-algebra-v.1.7.9/theories/all.v000066400000000000000000000021721440504774100177750ustar00rootroot00000000000000(** Import everything except the examples *) Require Export common. Require Export comparisons. Require Export positives. Require Export ordinal. Require Export denum. Require Export pair. Require Export powerfix. Require Export level. Require Export lattice. Require Export monoid. Require Export kleene. Require Export factors. Require Export kat. Require Export rewriting. Require Export move. Require Export lsyntax. Require Export syntax. Require Export normalisation. Require Export prop. Require Export boolean. Require Export rel. Require Export srel. Require Export lang. Require Export lset. Require Export sups. Require Export sums. Require Export matrix. Require Export matrix_ext. Require Export untyping. Require Export regex. Require Export rmx. Require Export bmx. Require Export dfa. Require Export nfa. 
Require Export ka_completeness. Require Export atoms. Require Export traces. Require Export glang. Require Export gregex. Require Export kat_completeness. Require Export ugregex. Require Export ugregex_dec. Require Export kat_untyping. Require Export kat_reification. Require Export kat_tac. Require Export relalg. relation-algebra-v.1.7.9/theories/atoms.v000066400000000000000000000143411440504774100203510ustar00rootroot00000000000000(** * atoms: atoms of the free Boolean lattice over a finite set *) (** An atom is an expression that cannot be decomposed into a non-trivial disjunction. When the set of variables is finite, the atoms are the complete conjunctions of literals, and any expression can be decomposed as a sum of atoms, in a unique way (for instance, with two variables a and b, the four atoms are a⊓b, a⊓!b, !a⊓b and !a⊓!b). We work with ordinals to get the finiteness property for free: the set of variables is [ord n], for some natural number [n]. Atoms are in bijection with [ord n -> bool], and thus, [ord (pow2 n)]. *) Require Import lattice lsyntax comparisons lset boolean sups. Set Implicit Arguments. (** Atom corresponding to a subset of variables, encoded as an ordinal *) Definition atom {n} (f: ord (pow2 n)): expr_ops (ord n) BL := sup (X:=dual (expr_ops _ BL)) (fun i => if set.mem f i then e_var i else ! e_var i) (seq n). (** * Decomposition of the [top] element into atoms *) (** Alternative definition of the [top] element, as the sum of all atoms; the first step consists in proving that this is actually the [top] element. *) Definition e_top' n: expr_ BL := \sup_(a<pow2 n) atom a. Lemma atom_xO n (f: ord (pow2 n)): @atom (S n) (set.xO f) ≡ !e_var ord0 ⊓ eval (fun i => e_var (ordS i)) (atom f). Proof. unfold atom. simpl. rewrite set.mem_xO_0. apply cap_weq. reflexivity. setoid_rewrite eval_inf with (g := fun i => e_var (ordS i)). rewrite sup_map. apply (sup_weq (l:=BL) (L:=lattice.dual_laws _ _ _)). 2: reflexivity. intro i. rewrite set.mem_xO_S. now case set.mem. Qed. Lemma atom_xI n (f: ord (pow2 n)): @atom (S n) (set.xI f) ≡ e_var ord0 ⊓ eval (fun i => e_var (ordS i)) (atom f). Proof. unfold atom. simpl. rewrite set.mem_xI_0. apply cap_weq. reflexivity. setoid_rewrite eval_inf with (g := fun i => e_var (ordS i)). rewrite sup_map. apply (sup_weq (l:=BL) (L:=lattice.dual_laws _ _ _)). 2: reflexivity. intro i. rewrite set.mem_xI_S. now case set.mem. Qed. (** the decomposition of [top] into atoms follows by induction *) Theorem decomp_top n: top ≡ e_top' n. Proof. unfold e_top'. induction n. symmetry. apply cupxb. simpl pow2. rewrite seq_double, sup_app. rewrite 2sup_map. setoid_rewrite atom_xO. setoid_rewrite atom_xI. rewrite <- 2capxsup. rewrite capC, (capC (e_var _)), <- capcup. rewrite cupC, cupneg, capxt. intros X L f. simpl. rewrite (IHn X L (fun i => f (ordS i))). rewrite 2eval_sup. apply sup_weq. 2: reflexivity. intro i. induction (atom i); first [reflexivity|apply cup_weq|apply cap_weq|apply neg_weq]; assumption. Qed. (** * Decomposition of the other expressions into atoms *) Section atom_props. Variable n: nat. Notation Atom := (ord (pow2 n)). (** auxiliary lemmas *) Lemma cap_var_atom (a: Atom) b: e_var b ⊓ atom a ≡ (if set.mem a b then atom a else bot). Proof. generalize (in_seq b). unfold atom. induction (seq n). intros []. simpl (sup _ _). intros [->|Hl]. case set.mem. lattice. rewrite capA, capneg. apply capbx. rewrite capA, (capC (e_var _)), <-capA, IHl by assumption. case (set.mem a b). reflexivity. apply capxb. Qed. Lemma cup_var_atom (a: Atom) b: e_var b ⊔ !atom a ≡ (if set.mem a b then top else !atom a). Proof. generalize (in_seq b). unfold atom. induction (seq n). intros []. simpl (sup _ _). intros [->|Hl]. case set.mem. rewrite negcap, cupA, cupneg.
apply cuptx. rewrite negcap, negneg. lattice. rewrite negcap at 1. rewrite cupA, (cupC (e_var _)), <-cupA, IHl by assumption. case (set.mem a b). apply cupxt. now rewrite negcap. Qed. Lemma eval_mem_cap (a: Atom) e: e ⊓ atom a ≡ if eval (set.mem a) e then atom a else bot with eval_mem_cup (a: Atom) e: e ⊔ !atom a ≡ if eval (set.mem a) e then top else !atom a. Proof. - destruct e; simpl eval. apply capbx. apply captx. rewrite capC, capcup, capC, eval_mem_cap, capC, eval_mem_cap. case (eval (set.mem a) e1); case (eval (set.mem a) e2); lattice. transitivity ((e1 ⊓ atom a) ⊓ (e2 ⊓ atom a)). lattice. rewrite 2eval_mem_cap. case (eval (set.mem a) e1); case (eval (set.mem a) e2); lattice. neg_switch. rewrite negcap, negneg, eval_mem_cup. case (eval (set.mem a) e). now rewrite <-negbot. reflexivity. apply cap_var_atom. - destruct e; simpl eval. apply cupbx. apply cuptx. transitivity ((e1 ⊔ !atom a) ⊔ (e2 ⊔ !atom a)). lattice. rewrite 2eval_mem_cup. case (eval (set.mem a) e1); case (eval (set.mem a) e2); lattice. rewrite cupC, cupcap, cupC, eval_mem_cup, cupC, eval_mem_cup. case (eval (set.mem a) e1); case (eval (set.mem a) e2); lattice. neg_switch. rewrite negcup, 2negneg, eval_mem_cap. case (eval (set.mem a) e). apply negneg. apply negtop. apply cup_var_atom. Qed. (** decomposition of arbitrary expressions *) Theorem decomp_expr e: e ==_[BL] \sup_(i \in filter (fun f => eval (set.mem f) e) (seq (pow2 n))) atom i. Proof. rewrite <-capxt. setoid_rewrite decomp_top. setoid_rewrite capxsup. setoid_rewrite eval_mem_cap. induction (seq (pow2 n)). reflexivity. simpl filter. simpl (sup _ _). case (eval (set.mem a) e). now apply cup_weq. now rewrite cupbx. Qed. (** * Atoms are pairwise disjoint *) Lemma eval_atom (a b: Atom): eval (set.mem a) (atom b) -> a=b. Proof. intro. apply set.ext. intro i. unfold atom in H. setoid_rewrite eval_inf with (g := set.mem a) in H. rewrite is_true_inf in H; cbv beta in H. specialize (H i (in_seq i)). destruct (set.mem b i). assumption. apply Bool.negb_true_iff. assumption. Qed. Lemma empty_atom_cap (a b: Atom): a<>b -> atom a ⊓ atom b ≡ bot. Proof. intro E. rewrite eval_mem_cap. generalize (eval_atom b a). case (eval (set.mem b) (atom a)). 2: reflexivity. intro. elim E. symmetry; auto. Qed. End atom_props. relation-algebra-v.1.7.9/theories/bmx.v000066400000000000000000000073301440504774100200140ustar00rootroot00000000000000(** * bmx: Boolean matrices, characterisation of reflexive transitive closure *) Require Import kleene boolean sups matrix. Set Implicit Arguments. Notation bmx := (mx_ops bool_ops bool_tt). (** intermediate alternative definition of the star of a Boolean matrix *) Fixpoint bmx_str n: bmx n n -> bmx n n := match n with | O => fun M => M | S n => fun M => let b := sub01_mx (n1:=1) (m1:=1) M in let c := sub10_mx (n1:=1) (m1:=1) M in let d := bmx_str (sub11_mx (n1:=1) (m1:=1) M) in blk_mx 1 (b⋅d) (d⋅c) (d+d⋅c⋅(b⋅d)) end. Lemma bmx_top_1: top ≡ (1: bmx 1%nat 1%nat). Proof. intros i j. now setoid_rewrite ord0_unique. Qed. Lemma bmx_str_str n (M: bmx n n): M^* ≡ bmx_str M. Proof. induction n as [|n IHn]. intro i. elim (ord_0_empty i). change (M^*) with (mx_str _ _ _ M). simpl mx_str. simpl bmx_str. unfold mx_str_build. ra_fold (mx_ops bool_ops bool_tt). rewrite bmx_top_1. now rewrite IHn, dot1x, dotx1. Qed. (** reflexive transitive closure as an inductive predicate *) Inductive rt_clot n (M: bmx n n): ord n -> ord n -> Prop := | clot_nil: forall i, rt_clot M i i | clot_cons: forall i j k, M i j -> rt_clot M j k -> rt_clot M i k. 
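(* For intuition, [rt_clot M i j] is plain graph reachability along [M]-edges. A
   computational counterpart, as an OCaml sketch (not part of the development),
   computes the same relation Warshall-style on a boolean matrix [m] of size [n]:

     let star n (m : bool array array) =
       let r = Array.init n (fun i -> Array.init n (fun j -> i = j || m.(i).(j))) in
       for k = 0 to n - 1 do
         for i = 0 to n - 1 do
           for j = 0 to n - 1 do
             if r.(i).(k) && r.(k).(j) then r.(i).(j) <- true
           done
         done
       done;
       r

   [bmx_str_clot] below relates this relational reading to the algebraic star [M^*]. *)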
Lemma clot_app n (M: bmx n n): forall i j k, rt_clot M i j -> rt_clot M j k -> rt_clot M i k. Proof. induction 1; eauto using clot_cons. Qed. Lemma clot_snoc n (M: bmx n n): forall i j k, rt_clot M i j -> M j k -> rt_clot M i k. Proof. intros. eapply clot_app. eassumption. eapply clot_cons. eassumption. constructor. Qed. Lemma rt_clot_S_S n (M: bmx (1+n)%nat (1+n)%nat): forall i j, rt_clot (sub11_mx M) i j -> rt_clot M (rshift i) (rshift j). Proof. induction 1. constructor. eapply clot_cons; eassumption. Qed. (** characterisation theorem *) Theorem bmx_str_clot n (M: bmx n n) i j: M^* i j <-> rt_clot M i j. Proof. split. - assert (M^* i j ≡ bmx_str M i j). (*MS: FIXME, why is it needed now ? *) apply bmx_str_str. rewrite H. clear H. revert i j. induction n as [|n IH]; intros i' j'. simpl. intro. eapply clot_cons. eassumption. constructor. unfold bmx_str; fold (@bmx_str n). set (M' := sub11_mx (n1:=1) (m1:=1) M). specialize (IH M'). unfold blk_mx, row_mx, col_mx. case ordinal.split_spec; intros i ->; case ordinal.split_spec; intros j -> Hij. + setoid_rewrite ord0_unique. constructor. + setoid_rewrite is_true_sup in Hij. destruct Hij as [k [_ Hk]]. apply Bool.andb_true_iff in Hk as [Hik Hkj]. apply IH in Hkj. unfold M' in Hkj. eapply clot_cons. eassumption. now apply rt_clot_S_S. + setoid_rewrite is_true_sup in Hij. destruct Hij as [k [_ Hk]]. apply Bool.andb_true_iff in Hk as [Hik Hkj]. apply IH in Hik. unfold M' in Hik. eapply clot_snoc. apply rt_clot_S_S; eassumption. assumption. + setoid_rewrite Bool.orb_true_iff in Hij. destruct Hij as [Hij|Hij]. apply IH in Hij. now apply rt_clot_S_S. setoid_rewrite is_true_sup in Hij. destruct Hij as [k [_ Hk]]. apply Bool.andb_true_iff in Hk as [Hik Hkj]. setoid_rewrite is_true_sup in Hik. destruct Hik as [i' [_ Hi']]. apply Bool.andb_true_iff in Hi' as [Hii' Hi'k]. setoid_rewrite is_true_sup in Hkj. destruct Hkj as [j' [_ Hj']]. apply Bool.andb_true_iff in Hj' as [Hkj' Hj'j]. apply IH in Hii'. apply IH in Hj'j. eapply clot_app. apply rt_clot_S_S, Hii'. eapply clot_cons. apply Hi'k. eapply clot_cons. apply Hkj'. apply rt_clot_S_S, Hj'j. - induction 1 as [i|i j k Hij Hjk IH]. + pose proof (str_refl (X:=bmx) M i i). simpl in H. apply H. unfold mx_one. now rewrite eqb_refl. + pose proof (str_cons (X:=bmx) M i k). simpl in H. apply H. clear H. unfold mx_dot. rewrite is_true_sup. eexists. split. apply in_seq. apply Bool.andb_true_iff. split; eassumption. Qed. relation-algebra-v.1.7.9/theories/boolean.v000066400000000000000000000074621440504774100206530ustar00rootroot00000000000000(** * boolean: Booleans as a lattice, and as a monoid *) Require Import monoid prop sups. (** * Booleans as a lattice *) Canonical Structure bool_lattice_ops: lattice.ops := {| leq := le_bool; weq := eq; cup := orb; cap := andb; neg := negb; bot := false; top := true |}. (** [is_true] is a bounded distributive lattice homomorphism from [bool] to [Prop]. (Actually a Boolean lattice homomorphism, but we don't need it here.) *) #[export] Instance mm_bool_Prop: morphism BDL is_true. Proof. constructor; simpl. now auto. intros ? ?. now rewrite eq_bool_iff. intros _ [|] [|]; firstorder. intros _ [|] [|]; firstorder. intros _. easy. tauto. intros _ [|]. firstorder auto with bool. easy. Qed. (* #[export] Instance mm_negb l: morphism l bool_lops (dual_ops bool_ops) negb. *) (** we get most lattice laws by the faithful embedding into [Prop] *) #[export] Instance bool_lattice_laws: lattice.laws (BL+STR+CNV+DIV) bool_lattice_ops. Proof. assert(H: lattice.laws BDL bool_lattice_ops). 
apply (laws_of_injective_morphism is_true mm_bool_Prop). auto. intros x y. apply eq_bool_iff. constructor; try apply H; (try now left); intros _ [|]; reflexivity. Qed. (** simple characterisation of finite sups and infs in [bool] *) Lemma is_true_sup I J (f: I -> bool): \sup_(i\in J) f i <-> (exists i, List.In i J /\ f i). Proof. unfold is_true. induction J; simpl. firstorder; discriminate. rewrite Bool.orb_true_iff. firstorder congruence. Qed. Lemma is_true_inf I J (f: I -> bool): \inf_(i\in J) f i <-> (forall i, List.In i J -> f i). Proof. unfold is_true. induction J; simpl. firstorder. rewrite Bool.andb_true_iff. firstorder congruence. Qed. (** * Boolean as a (flat) monoid this is useful: - to construct boolean matrices, - to consider regex.epsilon as a functor) *) (** this monoid is flat: this is a one object category. We use the following singleton type to avoid confusion with the singleton types of other flat structures *) CoInductive bool_unit := bool_tt. (** note that the trivial type information is simply ignored *) Canonical Structure bool_ops: monoid.ops := {| ob := bool_unit; mor n m := bool_lattice_ops; dot n m p := andb; one n := true; itr n x := x; str n x := true; cnv n m x := x; ldv n m p x y := !x ⊔ y; rdv n m p x y := !x ⊔ y |}. (** shorthand for [bool], when a morphism is expected *) Notation bool' := (bool_ops bool_tt bool_tt). (** we actually have all laws on [bool] *) #[export] Instance bool_laws: laws (BL+STR+CNV+DIV) bool_ops. Proof. constructor; (try now left);repeat right; intros. apply bool_lattice_laws. apply capA. apply captx. apply weq_leq. simpl. apply capC. reflexivity. now intros ? ? ?. reflexivity. all: try setoid_rewrite <- le_bool_spec. all: try case x; try case y; try case z; reflexivity. Qed. (** * properties of the [ofbool] injection *) Section ofbool. Open Scope bool_scope. Implicit Types a b c: bool. Context {X: ops} {l} {L: laws l X} {n: ob X}. Notation ofbool := (@ofbool X n). Lemma andb_dot `{BOT ≪ l} a b: ofbool (a&&b) ≡ ofbool a ⋅ ofbool b. Proof. symmetry. case a. apply dot1x. apply antisym. now apply weq_leq, dot0x. apply leq_bx. Qed. Lemma orb_pls `{CUP+BOT ≪ l} a b: ofbool (a||b) ≡ ofbool a + ofbool b. Proof. symmetry. case a; simpl. case b; simpl; lattice. lattice. Qed. #[export] Instance ofbool_leq `{BOT ≪ l}: Proper (leq ==> leq) ofbool. Proof. intros [|] b E; simpl. now rewrite E. apply leq_bx. Qed. Lemma dot_ofboolx `{BOT ≪ l} b (x: X n n): ofbool b⋅x ≡ x⋅ofbool b. Proof. case b; simpl. now rewrite dot1x, dotx1. now rewrite dot0x, dotx0. Qed. End ofbool. (** [is_true] is also monotone *) #[export] Instance is_true_leq: Proper (leq ==> leq) is_true. Proof. intros [|] b E; simpl. now rewrite E. discriminate. Qed. relation-algebra-v.1.7.9/theories/common.v000066400000000000000000000067141440504774100205230ustar00rootroot00000000000000(** * common: basic modules, utilities, and tactics *) Require Export Setoid Morphisms. Require Import BinNums. Set Implicit Arguments. Bind Scope nat_scope with nat. (** this lemma is useful when applied in hypotheses ([apply apply in H] makes it possible to specialize an hypothesis [H] by generating the corresponding subgoals) *) Definition apply X x Y (f: X -> Y) := f x. (** for debugging *) Ltac print_goal := match goal with |- ?x => idtac x end. (** closing tactic *) Ltac Now tac := tac; solve [auto]. (** coercion from Booleans to propositions *) Coercion is_true: bool >-> Sortclass. (* TOTHINK: do we really want to pollute the core database? *) #[export] Hint Unfold is_true : core. 
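(** Tiny usage sketch (ours, not in the original file): thanks to the coercion
    above, a Boolean computation can be stated directly as a goal. *)
Example is_true_coercion_example: andb true true.
Proof. reflexivity. Qed.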
(** lazy Boolean connectives *) Notation "a <<< b" := (if (a:bool) then (b:bool) else true) (at level 49). Notation "a &&& b" := (if (a:bool) then (b:bool) else false) (right associativity, at level 59). Notation "a ||| b" := (if (a:bool) then true else (b:bool)) (right associativity, at level 60). (** inclusion of Booleans *) Definition le_bool (a b : bool) := a -> b. #[export] Hint Unfold le_bool : core. (** specification in Prop of the above operations *) Lemma le_bool_spec a b: is_true (a<<<b) <-> le_bool a b. Proof. case a; intuition. discriminate. Qed. Lemma landb_spec a b: is_true (a&&&b) <-> a /\ b. Proof. case a; intuition. discriminate. Qed. Lemma lorb_spec a b: is_true (a|||b) <-> a \/ b. Proof. case a; intuition. discriminate. Qed. Lemma negb_spec a: is_true (negb a) <-> a = false. Proof. case a; intuition. Qed. Lemma eq_bool_iff (a b: bool): (a<->b) <-> a=b. Proof. split. unfold is_true. 2: now intros <-. case a; case b; intuition discriminate. Qed. (** coercion from sums to Booleans *) Definition bool_of_sumbool A B (c: sumbool A B): bool := if c then true else false. Coercion bool_of_sumbool: sumbool >-> bool. Lemma sumbool_true A (c: sumbool A (~A)): A -> c. Proof. intro HA. case c; intro F. reflexivity. elim (F HA). Qed. Lemma is_true_sumbool A (c: {A}+{~A}): is_true c <-> A. Proof. case c; simpl; intuition; discriminate. Qed. Lemma sumbool_iff A B: (A<->B) -> {A}+{~A} -> {B}+{~B}. Proof. tauto. Qed. (** simplification hints *) Arguments Basics.flip {_ _ _} _ _ _/. Arguments Basics.impl _ _ /. Arguments Proper {_} _ _ /. Arguments respectful {_ _} _ _ / _ _. Arguments pointwise_relation _ {_} _ / _ _. Arguments Transitive {_} _ /. Arguments Symmetric {_} _ /. Arguments Reflexive {_} _ /. Notation flip := Basics.flip. Notation impl := Basics.impl. Notation pwr := (pointwise_relation _). (** opaque identity, used to document impossible cases *) Definition assert_false {A} (x: A): A. Proof. assumption. Qed. (** 2^n (defined through the [double] function to ease the definition of finite sets as ordinals) *) Fixpoint double n := match n with 0 => 0 | S n => S (S (double n)) end. Fixpoint pow2 n := match n with 0 => 1 | S n => double (pow2 n) end. (** closing tactics by reflection, without re-computing at Qed-time *) Ltac close_by_reflection val := (abstract (vm_cast_no_check (eq_refl val))). (** introduce non-propositional variables *) Ltac intro_vars := match goal with | |- ?H -> _ => match type of H with | Prop => let H' := fresh in intro H'; intro_vars; revert H' | _ => intro; intro_vars end | |- _ => idtac end. (** revert all propositional variables *) Ltac revert_prop := match goal with | H:?t |- _ => match type of t with Prop => revert H; revert_prop end | _ => idtac end. relation-algebra-v.1.7.9/theories/comparisons.v000066400000000000000000000166441440504774100215710ustar00rootroot00000000000000(** * comparisons: types equipped with a comparison function *) Require Import List. Require Import Eqdep Eqdep_dec. Import ListNotations. Set Implicit Arguments. (** * Specifying Booleans *) Inductive reflect (P: Prop): bool -> Set := | reflect_t : P -> reflect P true | reflect_f : ~ P -> reflect P false. (** * Specifying ternary comparisons *) (** note that [Lt] and [Gt] have the same meaning, i.e., not [Eq] *) Inductive compare_spec (P: Prop): comparison -> Set := | compare_eq: P -> compare_spec P Eq | compare_lt: ~P -> compare_spec P Lt | compare_gt: ~P -> compare_spec P Gt.
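(** Small illustration (ours, not part of the original file): a [reflect]
    statement for the standard [negb] test, proved with the two constructors above. *)
Lemma reflect_negb b: reflect (b = false) (negb b).
Proof. destruct b; constructor; congruence. Qed.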
(** turning a comparison function into a Boolean test *) Definition eqb_of_compare A (f: A -> A -> comparison): A -> A -> bool := fun x y => match f x y with Eq => true | _ => false end. Lemma eqb_of_compare_spec A f: (forall a b: A, compare_spec (a=b) (f a b)) -> forall a b, reflect (a=b) (eqb_of_compare f a b). Proof. unfold eqb_of_compare. intros H a b. now case H; constructor. Qed. (** lexicographic ternary comparison *) Notation lex a b := match a with Eq => b | Lt => Lt | Gt => Gt end. Lemma lex_spec P Q R a b (H: R<->P/\Q): compare_spec P a -> compare_spec Q b -> compare_spec R (lex a b). Proof. destruct 1; try (constructor; tauto). destruct 1; constructor; tauto. Qed. Lemma compare_lex_eq a b: lex a b = Eq <-> a = Eq /\ b = Eq. Proof. destruct a; intuition discriminate. Qed. (** * Structure for types equiped with a Boolean equality and a comparison function. Note that the specification of [cmp] is weak: we only have [cmp a b = Eq <-> a=b]. As a consequence, the difference betwen [Lt] and [Gt] can only be used as a heuristic. *) Structure cmpType := mk_cmp { carr:> Set; eqb: carr -> carr -> bool; _: forall x y, reflect (x=y) (eqb x y); cmp: carr -> carr -> comparison; _: forall x y, compare_spec (x=y) (cmp x y) }. Arguments cmp {c} x y. Arguments eqb {c} x y. Lemma eqb_spec (A: cmpType): forall x y: A, reflect (x=y) (eqb x y). Proof. now case A. Qed. Lemma cmp_spec (A: cmpType): forall x y: A, compare_spec (x=y) (cmp x y). Proof. now case A. Qed. (** building comparison types without providing an equality function *) Definition mk_simple_cmp A cmp cmp_spec := @mk_cmp A _ (eqb_of_compare_spec _ cmp_spec) cmp cmp_spec. (** phantom identity to generate cmpTypes by unification (see ssreflect) *) Definition cmp_id (A: cmpType) (X: Set) (_:X -> carr A): cmpType := A. Notation "[ X :cmp]" := (@cmp_id _ X%type (fun x => x)) (at level 0). (** basic properties *) Lemma cmp_eq (A: cmpType) (x y: A): cmp x y = Eq -> x=y. Proof. case cmp_spec; congruence. Qed. Lemma cmp_refl (A: cmpType) (x: A): cmp x x = Eq. Proof. case cmp_spec; congruence. Qed. Lemma eqb_eq (A: cmpType) (x y: A): eqb x y = true -> x = y. Proof. case eqb_spec; congruence. Qed. Lemma eqb_refl (A: cmpType) (x: A): eqb x x = true. Proof. case eqb_spec; congruence. Qed. Lemma eqb_sym (A: cmpType) (x y: A): eqb x y = eqb y x. Proof. case eqb_spec; case eqb_spec; congruence. Qed. Lemma cmp_dec (A: cmpType) (x y: A): {x=y}+{x<>y}. Proof. case (eqb_spec A x y); tauto. Qed. (** equality on cmpTypes being decidable, we get uniqueness of identity proofs and elimination of dependent equality *) Lemma cmp_eq_dep_eq (A: cmpType) (P: A -> Type): forall p (x y: P p), eq_dep A P p x p y -> x = y. Proof. apply eq_dep_eq_dec, cmp_dec. Qed. Lemma cmp_eq_rect_eq (A: cmpType): forall (p: A) Q (x: Q p) (h: p = p), eq_rect p Q x p h = x. Proof. symmetry. apply eq_rect_eq_dec, cmp_dec. Qed. Lemma UIP_cmp (A: cmpType) (p q: A) (x y: p=q): x = y. Proof. apply UIP_dec, cmp_dec. Qed. (** * Natural numbers as a [cmpType] *) Fixpoint eqb_nat i j := match i,j with | O,O => true | S i,S j=> eqb_nat i j | _,_ => false end. Lemma eqb_nat_spec: forall i j, reflect (i=j) (eqb_nat i j). Proof. induction i; intros [|j]; try (constructor; congruence). simpl. case IHi; constructor; congruence. Qed. Fixpoint nat_compare i j := match i,j with | O,O => Eq | S i,S j=> nat_compare i j | O,_ => Lt | _,O => Gt end. Lemma nat_compare_spec: forall i j, compare_spec (i=j) (nat_compare i j). Proof. induction i; intros [|j]; try (constructor; congruence). simpl. 
case IHi; constructor; congruence. Qed. Canonical Structure cmp_nat := mk_cmp _ eqb_nat_spec _ nat_compare_spec. (** * Booleans as a [cmpType] *) Definition eqb_bool i j := match i,j with | false,false | true,true => true | _,_ => false end. Arguments eqb_bool !i !j/. Lemma eqb_bool_spec: forall i j, reflect (i=j) (eqb_bool i j). Proof. destruct i; destruct j; constructor; congruence. Qed. Definition bool_compare i j := match i,j with | false,false | true,true => Eq | false,true => Lt | true,false => Gt end. Arguments bool_compare !i !j/. Lemma bool_compare_spec: forall i j, compare_spec (i=j) (bool_compare i j). Proof. destruct i; destruct j; constructor; congruence. Qed. Canonical Structure cmp_bool := mk_cmp _ eqb_bool_spec _ bool_compare_spec. (** * Pairs of [cmpType]s *) Section p. Variables A B: cmpType. Definition eqb_pair (x y: A*B) := let '(x1,x2) := x in let '(y1,y2) := y in if (eqb x1 y1) then (eqb x2 y2) else false. Lemma eqb_pair_spec: forall x y, reflect (x=y) (eqb_pair x y). Proof. unfold eqb_pair. intros [? ?] [? ?]; simpl; repeat case eqb_spec; constructor; congruence. Qed. Definition pair_compare (x y: A*B) := let '(x1,x2) := x in let '(y1,y2) := y in lex (cmp x1 y1) (cmp x2 y2). Lemma pair_compare_spec: forall x y, compare_spec (x=y) (pair_compare x y). Proof. unfold pair_compare. intros [? ?] [? ?]; simpl; repeat case cmp_spec; constructor; congruence. Qed. Canonical Structure cmp_pair := mk_cmp _ eqb_pair_spec _ pair_compare_spec. End p. (** * Sums of [cmpType]s *) Section s. Variables A B: cmpType. Definition eqb_sum (x y: A+B) := match x,y with | inl x,inl y | inr x,inr y => eqb x y | _,_ => false end. Lemma eqb_sum_spec: forall x y, reflect (x=y) (eqb_sum x y). Proof. unfold eqb_sum. intros [?|?] [?|?]; simpl; try case eqb_spec; constructor; congruence. Qed. Definition sum_compare (x y: A+B) := match x,y with | inl x,inl y | inr x,inr y => cmp x y | inl _,inr _ => Lt | inr _,inl _ => Gt end. Lemma sum_compare_spec: forall x y, compare_spec (x=y) (sum_compare x y). Proof. unfold sum_compare. intros [?|?] [?|?]; simpl; try case cmp_spec; constructor; congruence. Qed. Canonical Structure cmp_sum := mk_cmp _ eqb_sum_spec _ sum_compare_spec. End s. (** * Lists of a [cmpType] *) Section l. Variables A: cmpType. Fixpoint eqb_list (h k: list A) := match h,k with | nil, nil => true | u::h, v::k => if eqb u v then eqb_list h k else false | _, _ => false end. Fixpoint list_compare (h k: list A) := match h,k with | nil, nil => Eq | nil, _ => Lt | _, nil => Gt | u::h, v::k => lex (cmp u v) (list_compare h k) end. Lemma eqb_list_spec: forall h k, reflect (h=k) (eqb_list h k). Proof. induction h as [|x h IH]; destruct k; simpl; try case eqb_spec; try case IH; constructor; congruence. Qed. Lemma list_compare_spec: forall h k, compare_spec (h=k) (list_compare h k). Proof. induction h as [|x h IH]; destruct k; simpl; try case cmp_spec; try case IH; constructor; congruence. Qed. Canonical Structure cmp_list := mk_cmp _ eqb_list_spec _ list_compare_spec. End l. relation-algebra-v.1.7.9/theories/denum.v000066400000000000000000000037111440504774100203350ustar00rootroot00000000000000(** * denum: retracting various countable types into positives *) Require Import common positives ordinal. Set Implicit Arguments. (** * Sums *) Definition mk_sum (x: positive+positive) := match x with | inl p => xO p | inr p => xI p end. Definition get_sum x := match x with | xO p => inl p | xI p => inr p | _ => assert_false (inl xH) end. Lemma get_mk_sum x: get_sum (mk_sum x) = x. Proof. now destruct x. Qed. 
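(** Concrete sanity check (ours, not in the original file): the sum retraction
    computes as expected on a sample value. *)
Example get_mk_sum_inr: get_sum (mk_sum (inr xH)) = inr xH.
Proof. reflexivity. Qed.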
(** * Pairs *) Fixpoint xpair y x := match x with | xH => xI (xO y) | xO x => xO (xO (xpair y x)) | xI x => xI (xI (xpair y x)) end. Definition mk_pair (x: positive*positive) := xpair (snd x) (fst x). Fixpoint get_pair x := match x with | xI (xO p) => (xH,p) | xO (xO x) => let '(x,y) := get_pair x in (xO x,y) | xI (xI x) => let '(x,y) := get_pair x in (xI x,y) | _ => assert_false (xH,xH) end. Lemma get_mk_pair x: get_pair (mk_pair x) = x. Proof. destruct x as [x y]. unfold mk_pair. simpl. induction x; simpl; now rewrite ?IHx. Qed. (** * Natural numbers *) (** we use a much simpler function than the standard bijection, since we only need a retract *) Definition mk_nat := nat_rec (fun _=>positive) xH (fun _ => xO). Fixpoint get_nat x := match x with | xH => O | xO x => S (get_nat x) | _ => assert_false O end. Lemma get_mk_nat x: get_nat (mk_nat x) = x. Proof. induction x; simpl; now rewrite ?IHx. Qed. (** * Ordinals *) Definition mk_ord n (x: ord n) := mk_nat x. (** get_ord returns an option since [n] could be 0, this is not problematic in practice *) Definition get_ord n (x: positive): option (ord n). set (y:=get_nat x). case (lt_ge_dec y n). intro Hy. exact (Some (Ord y Hy)). intros _. exact None. Defined. Lemma get_mk_ord n x: get_ord n (mk_ord x) = Some x. Proof. unfold mk_ord, get_ord. destruct x as [i Hi]; simpl. rewrite get_mk_nat. case lt_ge_dec. intro. f_equal. now apply eq_ord. rewrite Hi at 1. discriminate. Qed. relation-algebra-v.1.7.9/theories/dfa.v000066400000000000000000000137051440504774100177630ustar00rootroot00000000000000(** * dfa: Deterministic Finite Automata, decidability of language inclusion *) Require Import comparisons positives ordinal pair lset. Require Import monoid boolean prop sups bmx. Set Implicit Arguments. Unset Printing Implicit Defensive. (** * DFA and associated language *) (** A DFA is given by its number of states, a deterministic transition function, an acceptance condition, and a finite subset of the alphabet. States are represented by ordinals of the appropriate size. Making the finite subset of the alphabet explicit avoids us to use ordinals for the alphabet. *) Record t := mk { n: nat; u: ord n; M: ord n -> positive -> ord n; v: ord n -> bool; vars: list positive }. Notation "x ^u" := (u x) (at level 2, left associativity, format "x ^u"). Notation "x ^M" := (M x) (at level 2, left associativity, format "x ^M"). Notation "x ^v" := (v x) (at level 2, left associativity, format "x ^v"). (** changing the initial state *) Definition reroot A i := mk i A^M A^v (vars A). Lemma reroot_id A: A = reroot A (A^u). Proof. destruct A; reflexivity. Qed. (** language of a DFA [A], starting from state [i] *) Fixpoint lang A i w := match w with | nil => is_true (A^v i) | cons a w => In a (vars A) /\ lang A (A^M i a) w end. (** * Reduction of DFA language inclusion to DFA language emptiness *) Section diff. Variables A B: t. (** automaton for [A\B] *) Definition diff := mk (pair.mk (u A) (u B)) (fun p a => pair.mk (M A (pair.pi1 p) a) (M B (pair.pi2 p) a)) (fun p => v A (pair.pi1 p) ⊓ ! v B (pair.pi2 p)) (vars A). (** specification of its language *) Lemma diff_spec: vars A ≦ vars B -> forall i j, lang A i ≦ lang B j <-> lang diff (pair.mk i j) ≦ bot. Proof. intro H. cut (forall w i j, lang A i w ≦ lang B j w <-> ~ lang diff (pair.mk i j) w). intros G i j. split. intros Hij w Hw. apply G in Hw as []. apply Hij. intros Hij w. apply G. intro Hw. elim (Hij _ Hw). induction w; intros i j; simpl lang; rewrite pair.pi1mk, pair.pi2mk. 
case (v A i); case (v B j); firstorder discriminate. split. intros Hij [HaB Hw]. apply IHw in Hw as []. intro Aw. apply Hij. now split. intros Hw [Ha Aw]. split. apply H, Ha. eapply IHw. 2: eassumption. tauto. Qed. End diff. (** * Decidability of DFA language emptiness We proceed as follows: 1. we forget all transition labels to get a directed graph whose nodes have an accepting status. 2. we compute the reflexive and transitive closure of this graph 3. we deduce the set of all states reachable from the initial state. 4. the DFA is empty iff this set does not contain any accepting states. All these computations are straightforward, except for 2, for which we exploit Kleene star on Boolean matrices. The resulting algorithm is not efficient at all. We don't care because this is not the one we execute in the end: this one is just used to establish KA completeness. *) Section empty_dec. Variables A: t. (** erased transition graph, represented as a Boolean matrix *) Definition step: bmx (n A) (n A) := fun i j => \sup_(a\in vars A) eqb_ord (M A i a) j. (** reflexive transitive closure of this graph *) Definition steps := (@str bmx _ step). Variable i: ord (n A). (** basic properties of this closed graph *) Lemma steps_refl: steps i i. Proof. apply bmx_str_clot. constructor. Qed. Lemma steps_snoc: forall j a, steps i j -> In a (vars A) -> steps i (M A j a). Proof. setoid_rewrite bmx_str_clot. intros. eapply clot_snoc. eassumption. setoid_rewrite is_true_sup. eexists. split. eassumption. apply eqb_refl. Qed. (** state reached from [i] by following a word [w] in the DFA *) Fixpoint Ms i w := match w with nil => i | cons a w => Ms (M A i a) w end. (** each unlabelled path in the erased graph corresponds to a labelled path (word) in the DFA *) Lemma steps_least: forall j, steps i j -> exists w, w ≦ vars A /\ j = Ms i w. Proof. intros j H. apply bmx_str_clot in H. induction H as [i|i j k Hij _ [w [Hw ->]]]. exists nil. split. lattice. reflexivity. setoid_rewrite is_true_sup in Hij. destruct Hij as [a [Ha Hij]]. exists (a::w). split. intros b [<-|Hb]. assumption. now apply Hw. revert Hij. case eqb_ord_spec. 2: discriminate. now intros <-. Qed. (** can we reach an accepting state from [i] *) Definition empty := \inf_(j<_) (steps i j <<< !v A j). (* TODO: les deux lemmes suivants sont certainement simplifiables en prouvant directement l'équivalence *) (** if not, all states reachable from [i] map to the empty language *) Lemma empty_lang1 j: steps i j -> empty -> lang A j ≦ bot. Proof. intros Hj He. setoid_rewrite is_true_inf in He. setoid_rewrite le_bool_spec in He. pose proof (fun i => He i (ordinal.in_seq _)) as H. clear He. intro w. revert j Hj. induction w as [|a w IH]; simpl lang; intros j Hj. apply (H j), negb_spec in Hj. rewrite Hj. discriminate. intros [Ha Hj']. apply IH in Hj' as []. now apply steps_snoc. Qed. (** conversely, if [i] maps to them empty language, then there is no reachable accepting state *) Lemma empty_lang2: lang A i ≦ bot -> empty. Proof. intro H. setoid_rewrite is_true_inf. intros j _. rewrite le_bool_spec. intro Hj. apply steps_least in Hj as [w [Hw ->]]. generalize i (H w) Hw. clear. induction w; intros i Hi Hw. simpl in *. destruct (v A i). now elim Hi. reflexivity. apply IHw. intro H. elim Hi. split. apply Hw. now left. assumption. intros ? ?. apply Hw. now right. Qed. (** decidability of language emptiness follows *) Theorem empty_dec: {lang A i ≦ bot} + {~ (lang A i ≦ bot)}. Proof. case_eq empty; [left|right]. apply (empty_lang1 _ steps_refl H). intro E. 
apply empty_lang2 in E. rewrite H in E. discriminate. Qed. End empty_dec. (** * Decidability of DFA language inclusion *) Corollary lang_incl_dec A B: vars A ≦ vars B -> forall i j, {lang A i ≦ lang B j} + {~(lang A i ≦ lang B j)}. Proof. intros. eapply sumbool_iff. symmetry. now apply diff_spec. apply empty_dec. Qed. relation-algebra-v.1.7.9/theories/factors.v000066400000000000000000000052001440504774100206610ustar00rootroot00000000000000(** * factors: additional properties of left and right factors *) Require Import kleene. Set Implicit Arguments. Lemma ldv_dotx `{laws} `{DIV ≪ l} n m p q (x: X n m) (y: X m p) (z: X n q): x⋅y -o z ≡ y -o x -o z. Proof. apply from_below. intro t. now rewrite 3ldv_spec, dotA. Qed. Lemma ldv_xdot `{laws} `{DIV ≪ l} n m p (x: X n m) (y: X m p): y ≦ x -o (x⋅y). Proof. now rewrite ldv_spec. Qed. Lemma ldv_1x `{laws} `{DIV ≪ l} n m (x: X n m): 1 -o x ≡ x. Proof. apply from_below. intro y. now rewrite ldv_spec, dot1x. Qed. Lemma ldv_0x `{laws} `{DIV+BOT+TOP ≪ l} n m p (x: X n m): 0 -o x ≡ top' p m. Proof. apply from_below. intro y. rewrite ldv_spec, dot0x. split; intros _; lattice. Qed. Lemma ldv_xt `{laws} `{DIV+TOP ≪ l} n m p (x: X n m): x -o top ≡ top' m p. Proof. apply from_below. intro y. rewrite ldv_spec. split; intros _; lattice. Qed. Lemma str_ldv `{laws} `{STR+DIV ≪ l} n m (x: X n m): (x -o x)^* ≡ x -o x. Proof. apply antisym. apply str_ldv_. apply str_ext. Qed. Lemma ldv_rdv `{laws} `{DIV ≪ l} n m p q (x: X n m) y (z: X p q): x -o (y o- z) ≡ (x -o y) o- z. Proof. apply from_below. intro. now rewrite ldv_spec, 2rdv_spec, ldv_spec, dotA. Qed. Lemma ldv_unfold `{laws} `{BL+DIV+CNV ≪ l} n m p (x: X n m) (y: X n p): x -o y ≡ !(x° ⋅ !y). Proof. apply from_below. intro. now rewrite ldv_spec, neg_leq_iff', <-Schroeder_l. Qed. (** dual properties for right factors *) Lemma rdv_cancel `{laws} `{DIV ≪ l} n m p (x: X m n) (y: X p n): (y o- x)⋅x ≦ y. Proof. dual @ldv_cancel. Qed. Lemma rdv_dotx `{laws} `{DIV ≪ l} n m p q (x: X m n) (y: X p m) (z: X q n): z o- y⋅x ≡ z o- x o- y. Proof. dual @ldv_dotx. Qed. Lemma rdv_xdot `{laws} `{DIV ≪ l} n m p (x: X m n) (y: X p m): y ≦ y⋅x o- x. Proof. dual @ldv_xdot. Qed. Lemma leq_rdv `{laws} `{DIV ≪ l} n m (x y: X m n): x ≦ y <-> 1 ≦ y o- x. Proof. dual @leq_ldv. Qed. Lemma rdv_xx `{laws} `{DIV ≪ l} n m (x: X m n): 1 ≦ x o- x. Proof. dual @ldv_xx. Qed. Lemma rdv_1x `{laws} `{DIV ≪ l} n m (x: X m n): x o- 1 ≡ x. Proof. dual @ldv_1x. Qed. Lemma rdv_0x `{laws} `{DIV+BOT+TOP ≪ l} n m p (x: X m n): x o- 0 ≡ top' m p. Proof. dual @ldv_0x. Qed. Lemma rdv_xt `{laws} `{DIV+TOP ≪ l} n m p (x: X m n): top o- x ≡ top' p m. Proof. dual @ldv_xt. Qed. Lemma rdv_trans `{laws} `{DIV ≪ l} n m p q (x: X m n) (y: X p n) (z: X q n): (z o- y)⋅(y o- x) ≦ z o- x. Proof. dual @ldv_trans. Qed. Lemma str_rdv `{laws} `{STR+DIV ≪ l} n m (x: X m n): (x o- x)^* ≡ x o- x. Proof. dual @str_ldv. Qed. Lemma rdv_unfold `{laws} `{BL+DIV+CNV ≪ l} n m p (x: X m n) (y: X p n): y o- x ≡ !(!y⋅x°). Proof. dual @ldv_unfold. Qed. relation-algebra-v.1.7.9/theories/fhrel.v000066400000000000000000000345771440504774100203430ustar00rootroot00000000000000Require Import Setoid Morphisms. Require Import kat normalisation rewriting kat_tac comparisons rel relalg boolean. From mathcomp Require Import all_ssreflect. Set Implicit Arguments. Unset Printing Implicit Defensive. Unset Strict Implicit. Set Bullet Behavior "Strict Subproofs". (** * fhrel: The model of finite heterogeneous relations *) (** Tactic for eliminating boolean logical connectives. Usually followed by [firstoder]. 
This needs explicit calls to setoid_rewrite do work under binders *) #[export] Instance: Proper (iff ==> iff ==> iff ==> iff) and3. firstorder. Qed. #[export] Instance: Proper (iff ==> iff ==> iff ==> iff ==> iff) and4. firstorder. Qed. #[export] Instance: Proper (iff ==> iff ==> iff ==> iff ==> iff ==> iff) and5. firstorder. Qed. #[export] Instance: Proper (iff ==> iff ==> iff ==> iff) or3. firstorder. Qed. #[export] Instance: Proper (iff ==> iff ==> iff ==> iff ==> iff) or4. firstorder. Qed. Ltac to_prop := repeat first [ setoid_rewrite <- (rwP andP) | setoid_rewrite <- (rwP orP) | setoid_rewrite <- (rwP and3P) | setoid_rewrite <- (rwP existsP) | setoid_rewrite <- (rwP forallP) | setoid_rewrite <- (rwP eqP) | (* last because implyP also matches [exists _,_] *) setoid_rewrite <- (rwP implyP)]. (** Use phantom types to trigger inference of finType structures *) Section FHRelType. Variables A B : finType. Definition fhrel_type := A -> B -> bool. Definition fhrel_of of phant A & phant B := fhrel_type. End FHRelType. Notation "{ 'fhrel' A & B }" := (fhrel_of (Phant A) (Phant B)) (at level 0, format "{ 'fhrel' A & B }") : type_scope. (** Heterogeneous binary relations [hrel X Y] (i.e., [X -> Y -> Prop]) are already declared as a model of relation algebra in the module [rel]. Thus, we only need to provide the model for finite and decidable relations between finite types [{fhrel A & B}] *) Section FHRel. Implicit Types A B C : finType. (** ** Lattice Operations *) (** The relations [e1 ≦ e2] and [e1 ≡ e2] and the lattice operations are not declared but derived (up to conversion) via the powerset construction on the lattice of booleans << Section LatticeOps. Variables (A B : finType) (e1 e2 : {fhrel A & B}). Implicit Types (x : A) (y : B). Definition fhrel_weq := e1 =2 e2. Definition fhrel_leq := forall x y, e1 x y -> e2 x y. Definition fhrel_cup := fun x y => e1 x y || e2 x y. Definition fhrel_cap := fun x y => e1 x y && e2 x y. Definition fhrel_neg := fun x y => ~~ (e1 x y). Definition fhrel_top := fun x y => true. Definition fhrel_bot := fun x y => false. End LatticeOps. >> *) Canonical Structure fhrel_lattice_ops A B := lattice.mk_ops {fhrel A & B} lattice.leq weq cup cap neg bot top. Arguments lattice.leq : simpl never. Arguments lattice.weq : simpl never. (** Obtain the lattice laws using the standard powerset construction *) #[export] Instance fhrel_lattice_laws A B: lattice.laws (BL+STR+CNV+DIV) (fhrel_lattice_ops A B) := pw_laws _. Lemma fhrel_leq_dec A B (e1 e2 : {fhrel A & B}) : decidable (e1 ≦ e2). Proof. apply: (@decP _ [forall x, forall y, e1 x y ==> e2 x y]). apply: (equivP idP); to_prop; reflexivity. Qed. Lemma fhrel_weq_dec A B (e1 e2 : {fhrel A & B}) : decidable (e1 ≡ e2). Proof. apply: (@decP _ [forall x, forall y, e1 x y == e2 x y]). apply: (equivP idP); to_prop; reflexivity. Qed. (** Enable rewriting of [lattice.leq] and [weq] using Ssreflect's rewrite tactic *) #[export] Instance leq_rewrite_relation ops : RewriteRelation (@lattice.leq ops). Qed. #[export] Instance weq_rewrite_relation ops : RewriteRelation (@weq ops). Qed. (** ** Monoid Operations We implement the monoid operations using Ssreflect's boolean quantifiers and the transitive closure operation provided by the path library *) Section MonoidOps. Variables (A B C : finType). Definition fhrel_dot (e1 : {fhrel A & B}) (e2 : {fhrel B & C}) : {fhrel A & C} := fun x y => [exists z, e1 x z && e2 z y]. Definition fhrel_cnv (e : {fhrel A & B}) : {fhrel B & A} := fun x y => e y x. 
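(** Unfolding sketch (ours, not in the original file): converse merely swaps
    the two arguments, by computation. *)
Lemma fhrel_cnvE (e : {fhrel A & B}) x y : fhrel_cnv e x y = e y x.
Proof. reflexivity. Qed.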
Definition fhrel_str (e : {fhrel A & A}) : {fhrel A & A} := connect e. Definition fhrel_one : {fhrel A & A} := @eq_op A. Definition fhrel_rdv (e1 : {fhrel B & A}) (e2 : {fhrel C & A}) : {fhrel C & B} := fun j i => [forall k, e1 i k ==> e2 j k]. Definition fhrel_ldv (e1 : {fhrel A & B}) (e2 : {fhrel A & C}) : {fhrel B & C} := fun i j => [forall k, e1 k i ==> e2 k j]. Definition fhrel_inj (p : pred A) := fun x y => (x == y) && p x. End MonoidOps. Definition fhrel_itr A (e : {fhrel A & A}) : {fhrel A & A} := fhrel_dot e (connect e). Canonical fhrel_monoid_ops := monoid.mk_ops finType fhrel_lattice_ops fhrel_dot fhrel_one fhrel_itr fhrel_str fhrel_cnv fhrel_ldv fhrel_rdv. (** Ensure that the [fhrel_*] definitions simplify, given enough arguments. *) Arguments fhrel_dot {_ _ _} _ _ /. Arguments fhrel_cnv {_ _} _ _ /. Arguments fhrel_one {_} _ _ /. Arguments fhrel_str {_} _ _ /. Arguments fhrel_ldv {_} _ _ /. Arguments fhrel_rdv {_} _ _ /. Arguments fhrel_inj {_} _ _ _ /. Lemma connect_iter (A : ob fhrel_monoid_ops) (e : fhrel_monoid_ops A A) (x y : A): connect e x y <-> (exists u : nat, rel.iter A e u x y). Proof. split. - case/connectP => p pth_p lst_p. exists (size p). elim: p x pth_p lst_p => /= [x _ -> // | z p IHp x /andP [xz pth_p] lst_p]. exists z => //. exact: IHp. - case => n. elim: n x => /= [x <- // |n IHn x [z xz H]]. exact: connect_trans (connect1 xz) (IHn _ _). Qed. (** We obtain the monoid laws using a faithful functor to the hrel model *) Definition hrel_of (A B : finType) (e : {fhrel A & B}) : hrel A B := fun x y => e x y. Ltac hrel_prop := do ! move => ?; rewrite /hrel_of /=; to_prop; by firstorder. Lemma hrel_of_morphism (A B : finType) : morphism (BDL+STR+CNV+DIV) (@hrel_of A B). Proof. split; try done; try hrel_prop. move => e1 e2 H x y. apply/eq_bool_iff. exact: H. Qed. Lemma hrel_of_functor : functor (BDL+STR+CNV+DIV) hrel_of. Proof. apply (@Build_functor (BDL+STR+CNV+DIV) fhrel_monoid_ops hrel_monoid_ops id hrel_of). all: try done. all: try hrel_prop. - apply: hrel_of_morphism. - move => _ A e x y. rewrite /hrel_of/= /fhrel_itr /hrel_itr /=. to_prop. setoid_rewrite connect_iter. by firstorder. - move => _ A e x y. exact: connect_iter. Qed. Lemma fhrel_monoid_laws_BDL: monoid.laws (BDL+STR+CNV+DIV) fhrel_monoid_ops. Proof. eapply (laws_of_faithful_functor hrel_of_functor) => //. move => A B e1 e2 H x y. apply/eq_bool_iff. exact: H. Qed. #[export] Instance hrel_monoid_laws: monoid.laws (BL+STR+CNV+DIV) fhrel_monoid_ops. Proof. case fhrel_monoid_laws_BDL => *. split; try assumption. exact: fhrel_lattice_laws. Qed. (** In ssreflect /= is used pervasively, so we block the simplification of relation operators. Unfold selectively by using [rewrite /dot/=] *) Arguments dot : simpl never. Arguments cnv : simpl never. Arguments str : simpl never. Arguments itr : simpl never. Arguments lattice.leq : simpl never. Arguments lattice.weq : simpl never. Arguments monoid.one : simpl never. Hint Unfold monoid.one : core. (** Lemmas to eliminate [1 n m] *) Lemma hrel_oneE (n : Set) a b : (1 : hrel n n) a b <-> a = b. Proof. reflexivity. Qed. Lemma fhrel_oneE A a b : (1 : {fhrel A & A}) a b = (a == b). Proof. reflexivity. Qed. Definition oneE := (hrel_oneE,fhrel_oneE). (** The following is required since [top' x y] tries to infer an [hrel] instance and then fails with a universe inconsistency as finite types to not necessarily live in [Set] *) Definition ftop_def (A B : finType) of phant A & phant B := (@top (@mor fhrel_monoid_ops A B)). Notation ftop A B := (ftop_def (Phant A) (Phant B)). 
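(** Pointwise sketch (ours, not in the original file): the full relation relates
    any two elements, by computation through the powerset construction. *)
Example ftop_full (aT rT : finType) (x : aT) (y : rT) : ftop aT rT x y.
Proof. reflexivity. Qed.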
Definition fzero_def (A B : finType) of phant A & phant B := (@bot (@mor fhrel_monoid_ops A B)). Notation fzero A B := (fzero_def (Phant A) (Phant B)). Definition fone_def (A : finType) of phant A := (@one fhrel_monoid_ops A). Notation fone A := (fone_def (Phant A)). Arguments ftop_def A B /. Arguments fzero_def A B /. Arguments fone_def A /. (** ** KAT Operations *) (** "decidable" sets or predicates: Boolean functions *) Definition fdset: ob fhrel_monoid_ops -> lattice.ops := pw_ops bool_lattice_ops. Canonical Structure fhrel_kat_ops := kat.mk_ops fhrel_monoid_ops fdset (@fhrel_inj). Global Instance fhrel_kat_laws: kat.laws fhrel_kat_ops. Proof. split. - by eapply lower_laws. - move => A. by eapply lower_lattice_laws. - move => A. have H : Proper (lattice.leq ==> lattice.leq) (@fhrel_inj A). { move => e1 e2 H x y /=. by case: (_ == _) => //=. } split => //=. + move => x y. rewrite !weq_spec. by intuition. + move => _ f g x y /=. by case: (_ == _); case (f x); case (g x). + move => _ x y /=. by rewrite andbF. - move => A x y /=. by rewrite andbT. - move => A p q x y /=. rewrite /dot/=. apply/eq_bool_iff. to_prop. by firstorder;subst. Qed. (** ** Cardinality For finite relations we obtain cardinality operation by coercing heterogeneous relations to prediactes on pairs *) Definition fhrel_pred (aT rT: finType) (e : {fhrel aT & rT}) := [pred x : aT * rT | e x.1 x.2]. Canonical fhrelPredType (aT rT: finType) := @PredType (aT * rT) ({fhrel aT & rT}) (@fhrel_pred _ _). Coercion fhrel_pred : fhrel_of >-> simpl_pred. (* TOTHINK: Instanciate the cardinality development with this? *) (* Definition fhrel_card (aT rT: finType) (e : {fhrel aT & rT}) := #|e|. *) Section CardinalityBase. Variables (aT rT : finType). Implicit Types (e : {fhrel aT & rT}). Lemma leq_card e e': e ≦ e' -> #|e| <= #|e'|. Proof. move => A. apply: subset_leq_card. apply/subsetP => x. exact: A. Qed. Lemma weq_card e e' : e ≡ e' -> #|e| = #|e'|. Proof. rewrite weq_spec (rwP eqP) eqn_leq => [[A B]]. by rewrite !leq_card. Qed. Lemma fhrel_card0 : #|fzero aT rT| = 0%N. Proof. exact: card0. Qed. Lemma fhrel_cardT : #|ftop aT rT| = #|aT| * #|rT|. Proof. exact: eq_card_prod. Qed. Lemma fhrel_card1 : #|fone aT| = #|aT|. Proof. rewrite -(on_card_preimset (f := (fun x => (x,x)))). - rewrite eq_cardT ?cardT //= => x. by rewrite !inE oneE eqxx. - exists fst => //= [[x y]]. by rewrite inE oneE /= => /eqP->. Qed. Lemma card_cnv e : #|e°| = #|e|. Proof. rewrite -(on_card_preimset (f := (fun x => (x.2,x.1)))). - apply: eq_card => [[x y]]. by rewrite !inE. - apply: onW_bij. apply: (Bijective (g := fun x => (x.2,x.1))); by case. Qed. Lemma card_cup e1 e2 : #|e1 + e2| = #|e1| + #|e2| - #|e1 ∩ e2|. Proof. by rewrite (rwP eqP) -cardUI -addnBA // subnn addn0. Qed. Lemma card_capL e1 e2 : #|e1 ∩ e2| <= #|e1|. Proof. apply: leq_card. by lattice. Qed. Lemma card_capR e1 e2 : #|e1 ∩ e2| <= #|e2|. Proof. apply: leq_card. by lattice. Qed. Definition dom e := [pred x | [exists y, e x y]]. Definition ran e := [pred y | [exists x, e x y]]. Lemma card_dom e : #|dom e| = #|e ⋅ ftop rT unit|. Proof. rewrite -[in RHS](on_card_preimset (f := (fun x : aT => (x,tt)))). - apply: eq_card => x. rewrite !inE /dot /=. by apply/eq_bool_iff;to_prop; firstorder. - apply: onW_bij. by apply: (Bijective (g := fst)) => [|[? []]]. Qed. End CardinalityBase. Lemma ran_dom {A B} {e : {fhrel A & B}} : ran e =i dom e°. Proof. move => x. by rewrite !inE. Qed. Lemma card_ran A B (e : {fhrel A & B}) : #|ran e| = #|ftop unit _⋅e|. Proof. rewrite (eq_card ran_dom) card_dom -card_cnv. 
apply: weq_card. rewrite /ftop_def. (* Why does ra. fail here? -- se below *) by rewrite cnvdot cnvtop cnv_invol. Qed. (* TOTHINK: This is actually a theorem of hrel, but there is no easy way to transfer. The same is true for [fhrel_injective] and [fhrel_univalent] *) Lemma fhrel_surjective A B (e : {fhrel A & B}) : is_surjective e <-> forall y, exists x, e x y. Proof. split => [H y|H y ?]. - case/exists_inP : (H y y (eqxx _)) => x. by exists x. - move/eqP<-. apply/exists_inP. case: (H y) => x exy. by exists x. Qed. Lemma surjective_card A B (e : {fhrel A & B}) : is_surjective e -> #|B| <= #|e|. Proof. move/fhrel_surjective => E. pose f y := (xchoose (E y),y). have inj_f : injective f by move => x y []. rewrite -(card_codom inj_f). apply: subset_leq_card. apply/subsetP => [[x y]] /codomP => [[y'] [-> <-]]. rewrite inE /=. exact: (xchooseP (E y)). Qed. Lemma fhrel_injective A B (e : {fhrel A & B}) : is_injective e <-> forall x x' y, e x y -> e x' y -> x = x'. Proof. rewrite /is_injective; split => [H x x' y E1 E2|H x x' E]. - apply/eqP. apply: (H x x'). by apply/exists_inP;exists y. - apply/eqP. case/exists_inP : E. exact: H. Qed. Lemma injective_card A B (e : {fhrel A & B}) : is_injective e -> #|e| <= #|B|. Proof. move/fhrel_injective => inj_e. rewrite -(card_in_imset (f := @snd A B)) ?max_card //. move => [x y] [x' y']. rewrite !inE /= => E1 E2 ?; subst. by rewrite (inj_e _ _ _ E1 E2). Qed. Lemma fhrel_univalent A B (e : {fhrel A & B}) : is_univalent e <-> forall x y y', e x y -> e x y' -> y = y'. Proof. split => [/@injective_cnv/fhrel_injective|H]. - rewrite /cnv/= => H ? ? ?; exact: H. - rewrite /is_univalent. cnv_switch. rewrite cnv1 cnvdot. rewrite -/(is_injective _) fhrel_injective => ? ? ?. exact: H. Qed. Lemma univalent_card A B (e : {fhrel A & B}) : is_univalent e -> #|e| <= #|A|. Proof. move => ?. rewrite -card_cnv. exact: injective_card. Qed. Lemma total_card A B (e : {fhrel A & B}) : is_total e -> #|A| <= #|e|. Proof. move => ?. rewrite -card_cnv. exact: surjective_card. Qed. End FHRel. Notation ftop A B := (ftop_def (Phant A) (Phant B)). Notation fzero A B := (fzero_def (Phant A) (Phant B)). Notation fone A := (fone_def (Phant A)). Ltac fold_fhrel := ra_fold fhrel_monoid_ops. Tactic Notation "fold_fhrel" "in" hyp_list(H) := ra_fold fhrel_monoid_ops in H. Tactic Notation "fold_fhrel" "in" "*" := ra_fold fhrel_monoid_ops in *. (** Note that, [subrel e f] (which is convertible to [e ≦ f] if [e] and [f] are homogeneous relations over some finite type) and [e =2 f] (which is convertible to [e ≡ f] even in the heterogeneous case) are used pervasively in MathComp. Consequently, [fold_fhrel] can be used to convert their statements into relation algebra statements. See below for an examle: *) (** This is [connect_sub] *) Goal forall (T : finType) (e e' : rel T), subrel e (connect e') -> subrel (connect e) (connect e'). move => T e e'. fold_fhrel. move => H. rewrite H. ka. Abort. (** This is [connect_eq] *) Goal forall (T : finType) (e e' : rel T), e =2 e' -> connect e =2 connect e'. move => T e e'. fold_fhrel. move => H. by rewrite H. Abort. relation-algebra-v.1.7.9/theories/glang.v000066400000000000000000000100261440504774100203120ustar00rootroot00000000000000(** * glang: the KAT model of guarded string languages *) (** The model of guarded string languages is the model of traces, when states are the atoms of a Boolean lattice, we prove here that this it is a model of Kleene algebra with tests (KAT), where the Boolean subalgebra is just the free one: the set of Boolean expressions. 
Like for traces, we provide both untyped and typed models. *) Require Export traces. Require Import kat lsyntax ordinal comparisons boolean. Set Implicit Arguments. Section s. (** * Untyped model *) (** We consider the free Boolean lattice over a set of [pred] predicates, whose atoms are just functions [a: ord pred -> bool] assigning a truth value to each variable. *) Variable pred: nat. Notation Sigma := positive. (** to avoid extensionality problems, we call "atom" an element of [ord (pow2 pred)], relying on the bijection between [ord pred -> bool] and that set when needed *) Notation Atom := (ord (pow2 pred)). (** Boolean expressions over [pred] variables are injected into traces as follows: take all traces reduced to a single atom (i.e., state) such that the Boolean expression evaluates to [true] under the corresponding assignation of variables *) Definition glang_inj (n: traces_unit) (x: expr_ops (ord pred) BL): traces Atom := fun w => match w with | tnil a => is_true (eval (set.mem a) x) | _ => False end. (** packing this injection together with the Kleene algebra of traces and the Boolean algebra of expressions *) Canonical Structure glang_kat_ops := kat.mk_ops _ _ glang_inj. (** This model satisfies KAT laws *) Global Instance glang_kat_laws: kat.laws glang_kat_ops. Proof. constructor. apply lower_laws. intro. apply expr_laws. assert (inj_leq: forall n, Proper (leq ==> leq) (@glang_inj n)). intros n e f H [a|]. 2: reflexivity. apply (fn_leq _ _ (H _ lower_lattice_laws _)). constructor; try discriminate. apply inj_leq. apply op_leq_weq_1. intros _ x y [a|]. 2: compute; tauto. simpl. setoid_rewrite Bool.orb_true_iff. reflexivity. intros _ [a|]. 2: reflexivity. simpl. intuition discriminate. intros ? [a|]. 2: reflexivity. simpl. now intuition. intros ? x y [a|]. simpl. setoid_rewrite Bool.andb_true_iff. split. intros (Hx&Hy). repeat exists (tnil a); try split; trivial. constructor. intros [[b|] Hu [[c|] Hv H]]; try elim Hu; try elim Hv. inversion H. intuition congruence. intros. simpl. split. intros []. intros [[b|] Hu [[c|] Hv H]]; try elim Hu; try elim Hv. inversion H. Qed. (** * Typed model *) (** the typed model is obtained in a straighforward way from the typed traces model: Boolean expressions can be injected as in the untyped case since there are no typing constraints on the generated traces (they are reduced to a single state). *) Variables src tgt: Sigma -> positive. Program Definition tglang_inj n (x: expr_ops (ord pred) BL): ttraces Atom src tgt n n := glang_inj traces_tt x. Next Obligation. intros [a|???] []. constructor. Qed. Canonical Structure tglang_kat_ops := kat.mk_ops _ _ tglang_inj. (* TODO: comment factoriser les deux preuves? *) Global Instance tglang_kat_laws: kat.laws tglang_kat_ops. Proof. constructor. apply lower_laws. intro. apply expr_laws. assert (inj_leq: forall n, Proper (leq ==> leq) (@tglang_inj n)). intros n e f H [a|]. 2: reflexivity. apply (fn_leq _ _ (H _ lower_lattice_laws _)). constructor; try discriminate. apply inj_leq. apply op_leq_weq_1. intros _ x y [a|]. 2: compute; tauto. simpl. setoid_rewrite Bool.orb_true_iff. tauto. intros _ [a|]. 2: reflexivity. simpl. intuition discriminate. intros ? [a|]. 2: reflexivity. simpl. now intuition. intros ? x y [a|]. simpl. setoid_rewrite Bool.andb_true_iff. split. intros (Hx&Hy). repeat exists (tnil a); try split; trivial. constructor. intros [[b|] Hu [[c|] Hv H]]; try elim Hu; try elim Hv. inversion H. intuition congruence. intros. simpl. split. intros []. 
intros [[b|] Hu [[c|] Hv H]]; try elim Hu; try elim Hv. inversion H. Qed. End s. relation-algebra-v.1.7.9/theories/gregex.v000066400000000000000000000175601440504774100205150ustar00rootroot00000000000000(** * gregex: generalised typed regular expressions, for KAT *) (** we define a typed syntax for KAT expressions: - typed because we have to prove KAT completeness at the typed level - generalised w.r.t. regular expressions since it has to embeds Boolean expressions, for tests *) Require Import lsyntax glang kat boolean atoms sups. Set Implicit Arguments. (** * Generalised regular expressions *) Section s. (** [I] is the set of objects of the category (or types, or indices) *) Notation I := positive. (** [Sigma] is the set of (Kleene) variables, those interpreted as relations, for instance *) Notation Sigma := positive. (** [pred] is the number of predicate variables (elementary tests), so that tests are just expressions with variables in [ord pred] *) Variable pred: nat. (** [src] and [tgt] assign a type to each Kleene variable *) Variables src tgt: Sigma -> I. (** Note that we do not type elementary tests: we shall actually prove a specific untyping theorem about KAT which makes this possible *) (** generalised regular expressions are just typed regular expressions, with an additional constructor, [g_prd], for embedding Boolean expressions *) Inductive gregex: I -> I -> Set := | g_zer: forall {n m}, gregex n m | g_prd: forall {n} (a: expr (ord pred)), gregex n n | g_var: forall (i: Sigma), gregex (src i) (tgt i) | g_pls: forall n m (e f: gregex n m), gregex n m | g_dot: forall n m p (e: gregex n m) (f: gregex m p), gregex n p | g_itr: forall n (e: gregex n n), gregex n n. (** [1] is derived, as the injection of the [top] expression *) Definition g_one {n} := @g_prd n e_top. (** Also note that [g_itr] is chosen as primitive, rather than [g_str] *) Definition g_str n (e: gregex n n) := g_pls g_one (g_itr e). (** * Interpretation into an arbitrary Kleene algebra with tests *) Section e. (** to interpret an expressions, we need: - a KAT [X], - an interpretation [fo] of syntactic types ([I]) - a properly typed interpretation [fs] of each Kleene variable - an interpretation [fp] of each predicate variable into the tests of X, at each type n (consider for instance, the expression [p]⋅a*[q], with [a: X n m, p: tst n, q: tst m], which can be represented by the term [@g_prd 1 (e_var 1) ⋅ g_var 1 ⋅ @g_prd 2 (e_var 1)], with the environments [fo(1)=n], [fo(2)=m], [fs(1)=1], [fp(1)(1)=p], [fp(2)(1)=q]). *) Context {X: kat.ops}. Variable fo: I -> ob X. Variable fp: forall n, ord pred -> tst (fo n). Variable fs: forall i, X (fo (src i)) (fo (tgt i)). Fixpoint eval n m (e: gregex n m): X (fo n) (fo m) := match e with | g_zer => 0 | @g_prd n p => [lsyntax.eval (fp n) p] | g_pls e f => eval e + eval f | g_dot e f => eval e ⋅ eval f | g_itr e => (eval e)^+ | g_var i => fs i end. End e. (** * generalised regular expressions form a model of KAT *) (** (in)equalitiy on [gregex] is defined as a smallest fixed-point, impredicatevely *) Definition g_leq n m (x y: gregex n m) := forall X (L: kat.laws X) fo fp fa, @eval X fo fp fa n m x ≦ @eval X fo fp fa n m y. Definition g_weq n m (x y: gregex n m) := forall X (L: kat.laws X) fo fp fa, @eval X fo fp fa n m x ≡ @eval X fo fp fa n m y. 
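(** Proof-of-concept sketch (ours, not in the original development): with this
    impredicative definition, a simple inequation is established by quantifying
    over all KAT models and delegating to the [lattice] tactic. *)
Lemma g_leq_pls_left n m (e f: gregex n m): g_leq e (g_pls e f).
Proof. intros X L fo fp fs. simpl. lattice. Qed.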
(** packing all operations using canonical structures *) Canonical Structure gregex_lattice_ops n m := {| car := gregex n m; leq := @g_leq n m; weq := @g_weq n m; cup := @g_pls n m; bot := @g_zer n m; cap := assert_false (@g_pls n m); top := assert_false (@g_zer n m); neg := assert_false id |}. Canonical Structure gregex_monoid_ops := {| ob := I; mor := gregex_lattice_ops; dot := g_dot; one := @g_one; itr := g_itr; str := g_str; cnv n m := assert_false (fun _ => bot); ldv n m p := assert_false (fun _ _ => bot); rdv n m p := assert_false (fun _ _ => bot) |}. Canonical Structure gregex_kat_ops := kat.mk_ops gregex_monoid_ops (fun n => expr_ops _ BL) (@g_prd). (** lattice laws *) Global Instance gregex_lattice_laws n m: lattice.laws BKA (gregex_lattice_ops n m). Proof. constructor; try right; try discriminate. constructor. intros x X L fo fa fs. reflexivity. intros x y z H H' X L fo fa fs. transitivity (eval fo fa fs y); auto. intros x y. split. intro H. split; intros X L fo fa fs. now apply weq_leq, H. now apply weq_geq, H. intros [H H'] X L fo fa fs. apply antisym; auto. intros Hl x y z. split. intro H. split; intros X L fo fa fs; specialize (H X L fo fa fs); simpl in H; hlattice. intros [H H'] X L fo fa fs. simpl. apply cup_spec; auto. intros x X L fo fa fs. apply leq_bx. Qed. (** kleene algebra laws *) Global Instance gregex_monoid_laws: monoid.laws BKA gregex_monoid_ops. Proof. constructor; (try discriminate); repeat right; repeat intro; simpl. apply gregex_lattice_laws. apply dotA. rewrite inj_top. apply dot1x. rewrite inj_top. apply dotx1. apply dot_leq; auto. now rewrite dotplsx. now rewrite dotxpls. now rewrite dot0x. now rewrite dotx0. lattice. rewrite inj_top, <-str_itr. apply str_cons. rewrite inj_top, <-str_itr. apply str_ind_l. now refine (H0 _ _ _ _ _). rewrite inj_top, <-str_itr. apply str_ind_r. now refine (H0 _ _ _ _ _). rewrite inj_top, <-str_itr. apply itr_str_l. Qed. (** KAT laws *) Global Instance gregex_kat_laws: kat.laws gregex_kat_ops. Proof. constructor. apply gregex_monoid_laws. intro. apply lower_lattice_laws. constructor; try discriminate; repeat intro. apply inj_leq, H, tst_BL. apply inj_weq, H, tst_BL. apply inj_cup. apply inj_bot. reflexivity. repeat intro. apply inj_cap. Qed. (** additional properties of the injection ([g_prd]) *) Definition inj_cup := @inj_cup _ gregex_kat_laws. Definition inj_bot := @inj_bot _ gregex_kat_laws. Definition inj_cap := @inj_cap _ gregex_kat_laws. Definition inj_top := @inj_top _ gregex_kat_laws. Definition inj_weq := @inj_weq _ gregex_kat_laws. Definition inj_leq := @inj_leq _ gregex_kat_laws. Lemma inj_sup n I J (f: I -> expr_ BL): @g_prd n (sup f J) ≡ \sup_(i\in J) g_prd (f i). Proof. apply f_sup_weq. apply inj_bot. apply inj_cup. Qed. (** * Interpretation in the guarded strings model *) (** atoms are functions from predicate variables to [bool] *) Notation Atom := (ord (pow2 pred)). (** guarded string languages, typed according to the [src] and [tgt] functions *) Notation glang n m := (tglang_kat_ops pred src tgt n m). (** injection of atoms *) Notation g_atom n a := (@g_prd n (atom a)). (** (guarded string) language of a generalised regular expression. unlike for regular expressions, we define it inductively *) Definition lang: forall n m, gregex_kat_ops n m -> glang n m := eval id (fun _ => @e_var _) ttsingle. (** this interpretation function is a KA morphism, by definition *) Global Instance lang_leq n m: Proper (leq ==> leq) (@lang n m). Proof. intros e f H. apply H. apply tglang_kat_laws. Qed. 
Global Instance lang_weq n m: Proper (weq ==> weq) (@lang n m) := op_leq_weq_1. Lemma lang_0 n m: @lang n m 0 = 0. Proof. reflexivity. Qed. Lemma lang_1 n: @lang n n 1 ≡ 1. Proof. intros [|]; simpl; intuition. Qed. Lemma lang_pls n m (e f: gregex n m): lang (e+f) = lang e + lang f. Proof. reflexivity. Qed. Lemma lang_dot n m p (e: gregex n m) (f: gregex m p): lang (e⋅f) = lang e ⋅ lang f. Proof. reflexivity. Qed. Lemma lang_itr n (e: gregex n n): lang (e^+) = (lang e)^+. Proof. reflexivity. Qed. Lemma lang_sup n m I J (f: I -> _): @lang n m (sup f J) = \sup_(i\in J) lang (f i). Proof. apply f_sup_eq; now f_equal. Qed. (** languages of atoms *) Lemma lang_atom n a: lang (g_atom n a) ≡ tatom n a. Proof. (* TODO: voir si on peut faire mieux *) intros [b|]. 2: intros; compute; intuition discriminate. simpl. setoid_rewrite eval_var. split. intros H. now apply eval_atom in H as <-. intro E. injection E. clear E. intros <-. unfold atom. rewrite eval_inf. rewrite is_true_inf. intros i _. case_eq (set.mem a i); simpl; intros ->; reflexivity. Qed. End s. Arguments g_var {pred src tgt} i. relation-algebra-v.1.7.9/theories/ka_completeness.v000066400000000000000000000377441440504774100224160ustar00rootroot00000000000000(** * ka_completeness: completeness of Kleene algebra *) (** We mostly follow Dexter Kozen's proof: A completeness theorem for Kleene algebras and the algebra of regular events. Information and Computation, 110(2):366-390, May 1994. We proceed as follows: 1. convert regular expressions into matricial NFA with epsilon transitions, using a simple and compositional construction (Thompson). 2. remove epsilon transitions (algebraically, using Kleene star on matrices to compute their reflexive transitive closure) 3. determinise matricial NFA using the powerset construction (without performing the "accessible" subsets optimisation, for the sake of simplicity) 4. exploit decidability of DFA language inclusion to conclude (this is where our proof slightly differs from that of Kozen: he first minimises the DFA, and then compare them up to isomorphism) Again, the implicit underlying algorithm is not efficient ; we don't care since this is not the one we execute in the end. *) Require Import kleene sums matrix_ext lset boolean lang nfa regex bmx rmx normalisation. Require dfa. Set Implicit Arguments. Unset Printing Implicit Defensive. (** locally prevent [simpl] from unfolding relation algebra projections *) Local Arguments lattice.car {_}: simpl never. Local Arguments lattice.weq {_} _ _: simpl never. Local Arguments lattice.leq {_} _ _: simpl never. Local Arguments lattice.cup {_} _ _: simpl never. Local Arguments lattice.bot {_}: simpl never. Local Arguments monoid.mor {_} _ _: simpl never. Local Arguments monoid.one {_} _: simpl never. Local Arguments monoid.dot {_} _ _ _ _ _: simpl never. Local Arguments monoid.str {_} _ _: simpl never. (** * 1. constructing eNFA out of regular expressions (Thompson) *) (** Thompson construction is straightfoward once we have block matrix operations *) (** ** construction *) Module Thompson. (** empty automaton for [0] *) Definition zer := @mk 0 0 0 0. (** one state automaton for [1] *) Definition one := @mk 1 1 0 1. (** two states automata for variables *) Definition cst e := @mk (1+1) (row_mx 1 0) (blk_mx 1 (scal_mx e) 0 1) (col_mx 0 1). (** for [pls], we take the disjoint union of the two automata *) Definition pls e f := mk (row_mx e^u f^u) (blk_mx e^M 0 0 f^M) (col_mx e^v f^v). 
(** for [dot], we take the disjoint union, and we add epsilon transitions from accepting states of the first automaton to starting states of the second one *) Definition dot e f := mk (row_mx e^u 0) (blk_mx e^M (e^v⋅f^u) 0 f^M) (col_mx 0 f^v). (** strict iteration is obtained simply by adding epsilon transitions from accepting states to initial ones *) Definition itr e := mk e^u (e^M+e^v⋅e^u) e^v. (** Kleene star is derived from the other operations *) Definition str e := pls one (itr e). (** summing up all together, we get the final construction *) Fixpoint enfa (e: regex'): t := match e with | r_zer => zer | r_one => one | r_var _ => cst e | r_pls e f => pls (enfa e) (enfa f) | r_dot e f => dot (enfa e) (enfa f) | r_str e => str (enfa e) end. (** ** correctness *) (** algebraic correcteness of each construction *) Lemma eval_zer: eval zer ≡ 0. Proof. reflexivity. Qed. Lemma eval_cst e: eval (cst e) ≡ e. Proof. set (o:=eval (cst e)). vm_compute in o; subst o. ra_normalise. rewrite str1. ra. Qed. Lemma eval_one: eval one ≡ 1. Proof. set (o:=eval one). vm_compute in o; subst o. ra. Qed. Lemma eval_pls e f: eval (pls e f) ≡ eval e + eval f. Proof. destruct e as [n u M v]. destruct f as [m s N t]. change (mx_scal (row_mx u s ⋅ (blk_mx M 0 0 N)^* ⋅ col_mx v t) ≡ mx_scal (u ⋅ M ^* ⋅ v) + mx_scal (s ⋅ N ^* ⋅ t)). rewrite <-mx_scal_pls. apply mx_scal_weq. rewrite mx_str_diagonal. setoid_rewrite mx_dot_rowcol. rewrite dotplsx. rewrite <-2dotA, 2mx_dot_rowcol. ra. Qed. Lemma eval_dot e f: eval (dot e f) ≡ eval e ⋅ eval f. Proof. destruct e as [n u M v]. destruct f as [m s N t]. change (mx_scal (row_mx u 0 ⋅ (blk_mx M (v ⋅ s) 0 N)^* ⋅ col_mx 0 t) ≡ mx_scal (u ⋅ M ^* ⋅ v) ⋅ mx_scal (s ⋅ N ^* ⋅ t)). rewrite <-mx_scal_dot. apply mx_scal_weq. rewrite mx_str_trigonal. setoid_rewrite mx_dot_rowcol. rewrite dotplsx. rewrite <-dotA, mx_dot_rowcol. ra. Qed. Lemma eval_itr e: eval (itr e) ≡ (eval e)^+. Proof. rewrite itr_str_l. destruct e as [n u M v]. change (mx_scal (u ⋅ (M + v ⋅ u) ^* ⋅ v) ≡ mx_scal (u ⋅ M ^* ⋅ v) ⋅ (mx_scal (u ⋅ M ^* ⋅ v))^*). rewrite <-mx_scal_str, <-mx_scal_dot. apply mx_scal_weq. rewrite str_pls. rewrite <-3dotA, <-str_dot. ra. Qed. Lemma eval_str e: eval (str e) ≡ (eval e)^*. Proof. unfold str. now rewrite eval_pls, eval_one, eval_itr, str_itr. Qed. (** algebraic correcteness of the global construction *) Theorem correct e: eval (enfa e) ≡ e. Proof. induction e; simpl enfa. apply eval_zer. apply eval_one. rewrite eval_pls. now apply cup_weq. rewrite eval_dot. now apply dot_weq. rewrite eval_str. now apply str_weq. apply eval_cst. Qed. (** the produced matricial automaton actually is an NFA with epsilon transitions (i.e., the transition matrix is simple, and starting and accepting vectors are 01) *) Lemma is_enfa e: is_enfa (enfa e). Proof. Opaque Peano.plus. unfold is_enfa. induction e; cbn; intuition auto with mx_predicates. Transparent Peano.plus. Qed. End Thompson. (** * 2. removing espilon-transitions *) (** again, this is straightfoward once we have the Kleene algebra of matrices *) (** ** construction *) Definition nfa e := let e := Thompson.enfa e in let J := (epsilon_mx e^M)^* in mk e^u (J ⋅ pure_part_mx e^M) (J ⋅ e^v). (** ** correctness *) Theorem eval_nfa e: eval (nfa e) ≡ e. Proof. rewrite <- (Thompson.correct e) at 2. unfold nfa. change (mx_scal ((Thompson.enfa e)^u ⋅ ((epsilon_mx (Thompson.enfa e)^M)^* ⋅ pure_part_mx (Thompson.enfa e) ^M) ^* ⋅ ((epsilon_mx (Thompson.enfa e)^M)^* ⋅ (Thompson.enfa e) ^v)) ≡ eval (Thompson.enfa e)). set (f := Thompson.enfa e). 
set (J := epsilon_mx f^M). apply mx_scal_weq. rewrite (@expand_simple_mx _ _ f^M) at 2 by apply Thompson.is_enfa. rewrite str_pls. rewrite <-dotA, (dotA _ (J^*)). rewrite <-str_dot. apply dotA. Qed. (** the produced matricial automaton is actually an NFA (i.e., the transition matrix is pure, and starting and accepting vectors are 01) *) Lemma is_nfa_nfa e: is_nfa (nfa e). Proof. generalize (Thompson.is_enfa e). unfold nfa, is_nfa, is_enfa. cbn. intuition auto with mx_predicates. Qed. (** * 3. determinisation *) (** we use the standard powerset construction, and we exploit the bijection we defined in [ordinals] between the powerset on [ord n] ([ord n -> bool]) and [ord (pow2 n)]. This allows us to index states of the determinised automaton in a seamless way. *) Section det. (** ** construction *) (** we slightly generalise the construction, by specifying a finite subset of letters which have to be considered whether or not the NFA mentions them. This allows us in the sequel to easily handle the case where the compared automata have different alphabets. *) (** list of letters to include *) Variable vars': list sigma. (** NFA to determinise *) Variable nfa: nfa.t. Hypothesis Hnfa: is_nfa nfa. Notation n := (n nfa). Notation u := nfa^u. Notation M := nfa^M. Notation v := nfa^v. (** alphabet of the generated DFA: those appearing in [M], plus the imposed ones *) Notation vars := (mx_vars M ⊔ vars'). (** (unlabelled) transition matrix of the NFA, restricted to [a] *) Let T_ a := epsilon_mx (deriv_mx a M). (** transition matrix of the NFA restricted to [a] *) Let M_ a: rmx _ _ := fun i j => var a ⋅ T_(a) i j. (** decoding matrix, establishing the link between the states of the determinised automaton (i.e., sets of states) and those of the starting one. We exploit for this the aforementioned bijection; [X_(x,i)] holds iff [i\in x]. *) Let X: rmx (pow2 n) n := fun x i => ofbool (set.mem x i). (** determinised automaton: - the initial state is the set of initial states - the transition function is obtained by reading the transition matrix in a "parallel" way: along [a], the set [x] maps to the set [{j / i -a-> j for some i\in x}]; this computation is performed matricially. - the accepting states are those containing at least one accepting state; again, this computation is done matricially, using the decoding matrix [X]. *) Definition det := dfa.mk (of_row u) (fun x a => of_row (to_row x ⋅ T_(a))) (fun j => epsilon ((X ⋅ v) j ord0)) vars. (** ** correctness *) (** the correctness is established by using the bisimulation rule, to let the decoding matrix [X] go through [u ⋅ M^* ⋅ v]: denoting by [(u',M',v')] the determinised automaton, we easily deduce [u⋅M^*⋅v ≡ u'⋅M'^*⋅v'] from [u ≡u'⋅X], [X⋅M ≡ M'⋅X], and [X⋅v ≡ v']. *) Lemma det_uX: u ≡ det^u ⋅ X. Proof. intros i j. cbn. rewrite mx_dot_fun, dot1x. symmetry. apply mem_of_row, Hnfa. Qed. Lemma M_sum: M ≡ \sum_(a\in vars) M_(a). Proof. intros i j. simpl. rewrite (@expand' (M i j) (mx_vars M ++vars')). 2: apply leq_xcup; left; apply mx_vars_full. rewrite epsilon_pure by apply Hnfa. setoid_rewrite cupbx. setoid_rewrite mx_sup. apply sup_weq. 2: reflexivity. intro a. now rewrite <-epsilon_deriv_pure by apply Hnfa. Qed. Lemma det_MX: X ⋅ M^* ≡ det^M^* ⋅ X. Proof. apply str_move. rewrite M_sum. cbn. rewrite dotxsum, dotsumx. apply sup_weq. 2: reflexivity. intros a x j. rewrite mx_dot_fun. unfold X. rewrite (mem_of_row ord0) by (unfold T_; auto with mx_predicates). unfold weq. setoid_rewrite dotxsum. apply sup_weq. 2: reflexivity. intro j'.
unfold M_, T_, epsilon_mx, mx_map. now rewrite 2dotA, dot_ofboolx. Qed. Lemma det_Xv: X⋅v ≡ det^v. Proof. intros x j. setoid_rewrite ord0_unique. apply expand_01. apply is_01_mx_dot. intros ? ?. apply is_01_ofbool. apply Hnfa. Qed. (** algebraic correctness of determinisation *) Theorem eval_det: eval det ≡ eval nfa. Proof. apply mx_scal_weq. rewrite det_uX. rewrite <-(dotA _ X), det_MX. rewrite <-(dotA _ _ v), <-(dotA _ X), det_Xv. now rewrite dotA. Qed. End det. (** * Kleene theorem as a corollary *) (** summing up the above three constructions, we can build a DFA out of a regular expression *) Definition dfa vars e := det vars (nfa e). Corollary eval_dfa vars e: nfa.eval (dfa vars e) ≡ e. Proof. setoid_rewrite eval_det. apply eval_nfa. apply is_nfa_nfa. Qed. (** since the Kleene algebra of matrices allows us to compute a regular expression out of any finite automaton, we obtain Kleene theorem as a consequence: ``the languages recognised by a regular expression are those recognised by a (deterministic) finite automaton''. *) Theorem Kleene: forall l: lang' sigma, (exists e: regex, l ≡ regex.lang e) <-> (exists A: dfa.t, l ≡ dfa.lang A (dfa.u A)). Proof. intro l. setoid_rewrite <-nfa.dfa.eval_lang. setoid_rewrite <-dfa.reroot_id. split. intros [e He]. exists (dfa [] e). rewrite eval_dfa. assumption. intros [A HA]. exists (nfa.eval A). assumption. Qed. (** * 4. algebraic correctness of language inclusion checking for DFA *) (** rather than minimising DFA and comparing them up to isomorphism, we take a shorter path here, by focusing on language inclusion rather than equivalence, and by avoiding minimisation and isomorphisms. Our goal is to prove that for any two DFA [A] and [B], [dfa.lang A ≦ dfa.lang B] entails [eval A ≦ eval B] (The premise is a language inclusion, while the conclusion is a KA derivation.) The corresponding result on language equivalence and KA equality follows easily. We use the same kind of trick as in Kozen's minimisation proof, except that we consider the matrix of language inclusions rather than the matrix of the Myhill-Nerode relation. *) Section E. Variables A B: dfa.t. (** thanks to the previous generalisation of the determinisation construction, we can assume w.l.o.g. that [vars A ≦ vars B]. This is required by our reduction of language inclusion to language emptiness ([dfa.lang_incl_dec]), but also to prove Lemma [R_M] below *) Hypothesis Hvars: dfa.vars A ≦ dfa.vars B. (** language inclusion matrix: [R_(j,i)] holds iff [dfa.lang A i ≦ dfa.lang B j] (Note that this matrix goes from [B] to [A].) *) Definition R: rmx (n B) (n A) := fun j i => ofbool (dfa.lang_incl_dec _ _ Hvars i j). (** the algebraic proof is quite similar to that of determinisation: we use the bisimulation rule for inclusions, with [R]: from [A^u ≦ B^u ⋅ R], [R ⋅ A^M ≦ B^M ⋅ R], and [R ⋅ A^v ≦ B^v], we deduce [A^u⋅A^M^*⋅A^v ≦ B^u⋅B^M^*⋅B^v]. The second and third hypotheses always hold, while the first one holds only if the language of [A] is contained in that of [B]. *) Lemma R_v: R ⋅ A^v ≦ B^v. Proof. intros j i'. apply leq_supx. intros i _. unfold dot; simpl. clear i'. match goal with [|- ?P (_ ?x ?y) ?z ] => change (leq (x ⋅ y) z) end. setoid_rewrite <-andb_dot. apply ofbool_leq. unfold leq;simpl; unfold le_bool. setoid_rewrite Bool.andb_true_iff. setoid_rewrite is_true_sumbool. now intros [H H0%(H nil)]. Qed. Lemma R_M: R ⋅ A^M^* ≦ B^M^* ⋅ R. Proof. (* this proof also requires Hvars *) apply str_move_l. setoid_rewrite dotxsum. setoid_rewrite dotsumx. apply sup_leq'. exact Hvars.
intros a Ha j i'. apply leq_supx. intros i _. rewrite mx_dot_fun. setoid_rewrite <-dot_ofboolx. unfold mx_fun. case eqb_spec. 2: ra. intros ->. apply dot_leq. 2: reflexivity. apply ofbool_leq. unfold leq;simpl; unfold le_bool. setoid_rewrite is_true_sumbool. intros H w Hw. apply (H (cons a w)). now split. Qed. Hypothesis HAB: regex.lang (eval A) ≦ regex.lang (eval B). Lemma R_u: A^u ≦ B^u ⋅ R. Proof. intros z i. cbn. rewrite mx_dot_fun, dot1x. unfold mx_fun. clear z. case eqb_ord_spec. 2: intro; apply leq_bx. intros ->. apply epsilon_reflexive. unfold R. rewrite sumbool_true. reflexivity. rewrite <-2dfa.eval_lang, <-2dfa.reroot_id. exact HAB. Qed. (** algebraic correctness of language inclusion test *) Theorem dfa_complete_leq: eval A ≦ eval B. Proof. apply mx_scal_leq. rewrite R_u. rewrite <-(dotA _ R), R_M. rewrite <-2dotA, R_v. now rewrite dotA. Qed. End E. (** * Completeness of KA *) (** summing up all together, we obtain completeness of KA for language inclusion *) Corollary ka_complete_leq: forall e f, lang e ≦ lang f -> e ≦ f. Proof. intros e f. rewrite <-(eval_dfa nil e), <-(eval_dfa (dfa.vars (dfa nil e)) f). apply dfa_complete_leq. lattice. Qed. (** and thus for language equivalence *) Corollary ka_complete_weq: forall e f, lang e ≡ lang f -> e ≡ f. Proof. intros e f. rewrite 2weq_spec. now split; apply ka_complete_leq. Qed. (** since languages form a model of KA, the converse directions trivially hold, so that we actually have equivalences *) Corollary ka_correct_complete_leq: forall e f, lang e ≦ lang f <-> e ≦ f. Proof. split. apply ka_complete_leq. apply lang_leq. Qed. Corollary ka_correct_complete_weq: forall e f, lang e ≡ lang f <-> e ≡ f. Proof. split. apply ka_complete_weq. apply lang_weq. Qed. (** * Decidability of KA *) (** as additional corollaries, we obtain decidability of (in)equations in Kleene algebra *) Corollary ka_leq_dec: forall e f: regex', {e ≦f} + {~(e ≦f)}. Proof. assert(G: forall e f: regex', let A := dfa [] e in let B := dfa (dfa.vars A) f in dfa.lang A (dfa.u A) ≦ dfa.lang B (dfa.u B) <-> e ≦f). intros e f A B. rewrite <-2nfa.dfa.eval_lang, <-2dfa.reroot_id. rewrite ka_correct_complete_leq. unfold A, B. now rewrite 2eval_dfa. intros. eapply sumbool_iff. apply G. apply dfa.lang_incl_dec. lattice. Qed. Corollary ka_weq_dec: forall e f: regex', {e ≡f} + {~(e ≡f)}. Proof. intros e f. destruct (ka_leq_dec e f). destruct (ka_leq_dec f e). left. now apply antisym. right. rewrite weq_spec. tauto. right. rewrite weq_spec. tauto. Qed. relation-algebra-v.1.7.9/theories/kat.v000066400000000000000000000061101440504774100200000ustar00rootroot00000000000000(** kat: Kleene algebra with tests *) (** We define here the class of Kleene algebra with tests, as a two sorted structure *) Require Export kleene. Set Implicit Arguments. (** * KAT operations *) (** A Kleene algebra with tests is composed of a Kleene algebra ([kar], i.e., [(X,dot,pls,one,zer,str)]), a Boolean algebra of tests ([tst], i.e., [(T,cap,cup,top,bot,neg)]), and an injection from tests to Kleene elements. Since we work with typed structures, the Kleene algebra is a category, and we actually have a family of Boolean algebras, for each square homset ([X n n]). *) Class ops := mk_ops { kar: monoid.ops; tst: ob kar -> lattice.ops; inj: forall {n}, tst n -> kar n n }. Coercion kar: ops >-> monoid.ops. (** we use [[p]] to denote the injection of the test [p] into the Kleene algebra *) Notation " [ p ] " := (inj p): ra_terms. 
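(* usage sketch (informal, hypothetical encoding): with this injection, a Hoare triple {b} p {c} is typically encoded as the equation [b]⋅p⋅[!c] ≡ 0, or equivalently as the inequality [b]⋅p ≦ p⋅[c]; the [hkat] tactic of kat_tac.v exploits hypotheses of precisely these shapes. *)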
(* -- failed attempts to declare [inj] as a coercion -- *) (* SubClass car' {X} n := @car (@tst X n). *) (* SubClass car'' {X} n := @car (@mor (@kar X) n n). *) (* Definition inj' X n: @car' X n -> @car'' X n := @inj X n. *) (* Coercion inj' : car' >-> car''. *) (* Print Coercion Paths car' car. *) (* Goal forall `{X: ops} n m (a: X n m) (p: tst n) (q: tst m), p⋅a*q ≡ a. *) (** * KAT laws *) (** The Kleene algebra should be a Kleene algebra (with bottom element), the Boolean algebras should be a Boolean lattice, and the injection should be a morphism of idempotent semirings, i.e, map [(leq,weq,cap,cup,top,bot)] into [(leq,weq,dot,pls,one,zer)] *) (* TOTHINK: voir si on laisse les deux instances :>, qui ne sont utiles que dans l'abstrait si les structures concrètes sont déclarées incrémentalement voir aussi s'il ne vaut pas mieux poser ces deux instances en paramètres phantomes. TODO: inliner [morphism]? TODO: relacher les contraintes sur les niveaux *) Class laws (X: ops) := { kar_BKA:> monoid.laws BKA kar; tst_BL:> forall n, lattice.laws BL (tst n); mor_inj: forall n, morphism BSL (@inj X n); inj_top: forall n, [top] ≡ one n; inj_cap: forall n (p q: tst n), [p ⊓ q] ≡ [p] ⋅ [q] }. (** * Basic properties of KAT *) Section s. Context `{L: laws}. Variable n: ob X. Global Instance inj_leq: Proper (leq ==> leq) (@inj X n). Proof. apply mor_inj. Qed. Global Instance inj_weq: Proper (weq ==> weq) (@inj X n). Proof. apply mor_inj. Qed. Lemma inj_bot: [bot] ≡ zer n n. Proof. now apply mor_inj. Qed. Lemma inj_cup (p q: tst n): [p ⊔ q] ≡ [p] + [q]. Proof. now apply mor_inj. Qed. Lemma str_inj (p: tst n): [p]^* ≡ 1. Proof. apply antisym. now rewrite leq_xt, inj_top, str1. apply str_refl. Qed. End s. (** * dual KAT, for duality reasoning *) Definition dual (X: ops) := {| kar := dual kar; tst := tst; inj := @inj X |}. Lemma dual_laws X (L: laws X): laws (dual X). Proof. constructor; try apply L. apply dual_laws, kar_BKA. intros. simpl in *. rewrite capC. apply inj_cap. Qed. Lemma dualize {P: ops -> Prop} (L: forall X, laws X -> P X) {X} {H: laws X}: P (dual X). Proof. eapply L. now apply dual_laws. Qed. Ltac dual x := apply (dualize x). relation-algebra-v.1.7.9/theories/kat_completeness.v000066400000000000000000001040111440504774100225600ustar00rootroot00000000000000(** * kat_completeness: completeness of Kleene algebra with tests *) (** We closely follow Dexter Kozen and Frederick Smith' proof: Kleene algebra with tests: Completeness and decidability. In Proc. CSL'96, vol. 1258 of LNCS, pages 244-259, 1996. Springer-Verlag. The only difference is that we do the proof directly for _typed_ KAT. (We cannot easily exploit an untyping theorem, like we do in the case of KA: although the untyping theorem holds for KAT, it actually follows from typed completeness - at least we did not find an alternative way of proving it) The proof can be summarised as follows: one exhibits a function [hat: gregex n m -> gregex n m] such that: 1. forall x, KAT |- hat x ≡ x 2. forall x, G(hat x) ≡ R(hat x), where . G(y) is the typed guarded strings interpretation of y . 
R(y) is the language interpretation of y, seen as a regular expression (the concrete coercions will be specified later on) From these properties, it follows that for all x,y: gregex n m, we have G(x) ≡ G(y) => G(hat x) ≡ G(hat y) (1, and G is a model) => R(hat x) ≡ R(hat x) (2) => KA |- hat x ≡ hat y (KA completeness) => KA |- hat x ≡ hat y : n -> m (untyping theorem for KA) => KAT|- hat x ≡ hat y : n -> m (KA theorems hold in KAT) => KAT|- x ≡ y : n -> m (1, and transitivity) (the converse is immediate, G being a model) *) (** enable notations for [positive] *) Require Import Coq.PArith.BinPosDef. Require Import denum lset sums normalisation. Require Import kat ka_completeness untyping. Require Import regex gregex lsyntax syntax lang glang boolean atoms. Set Implicit Arguments. (** abbreviations: [R] is the module about regular expressions, while [G] is the module about generalised regular expressions *) Module R := regex. Module G := gregex. Section KatCompleteness. Notation Sigma := positive. Variable pred: nat. Variables s t: Sigma -> positive. Notation gregex := (gregex_kat_ops pred s t). Notation Atom := (ord (pow2 pred)). Notation gword := (trace Atom). Notation glang := (tglang_kat_ops pred s t). Notation g_atom n a := (@g_prd pred s t n (@atom pred a)). Notation test := (lsyntax.expr (ord pred)). Local Open Scope list_scope. (** * 1. Definition of the [hat] function, and correctness *) (** ** externally guarded terms *) (** the hat function is defined by induction on the structure of its argument, but it actually produces formal sums of "externally guarded terms", defined by the following inductive *) Inductive guard: positive -> positive -> Type := | g_pred: forall {n} (a: Atom), guard n n | g_elem: forall n m (a: Atom) (e: gregex n m) (b: Atom), guard n m. Notation guards n m := (list (guard n m)). (** externally guarded terms, and sums of such terms can be converted back to [gregex] in the obvious way *) Definition geval n m (x: guard n m) := match x with | @g_pred n a => g_atom n a | @g_elem n m a e b => g_atom n a ⋅ e ⋅ g_atom m b end. Notation teval := (sup (@geval _ _)). (** ** inductive cases for the [hat] function *) (** *** predicates *) (** a predicate is mapped to the set of atoms under which it is satisfied *) Definition g_prd' n p: guards n n := map g_pred (filter (fun a => lsyntax.eval (set.mem a) p) (seq (pow2 pred))). Lemma teval_prd n (p: tst n): teval (g_prd' n p) ≡ inj p. Proof. unfold g_prd'. rewrite sup_map. unfold geval. setoid_rewrite <-inj_sup. apply inj_weq. symmetry. apply decomp_expr. Qed. (** *** unit *) (** accordingly, [1] is simply mapped to the set of all atoms *) Definition g_one' n := g_prd' n (@lsyntax.e_top _). Lemma teval_one n: teval (g_one' n) ≡ 1. Proof. unfold g_one'. rewrite teval_prd. apply inj_top. Qed. (** *** letters *) (** a Kleene variable [i] is mapped to the sum of all [f⋅i⋅g], for f,g arbitrary atoms *) Definition g_var' i := \sup_(f<_) \sup_(g<_) [g_elem f (g_var i) g]%list. Lemma sum_atoms n: \sup_(i guards n p := match x with | g_pred a => fun p y => match y with | g_pred b => if eqb a b then [g_pred a] else [] | g_elem b e c => if eqb a b then [g_elem b e c] else [] end | g_elem a e b => fun p y => match y with | g_pred c => fun e => if eqb b c then [g_elem a e b] else [] | g_elem c f d => fun e => if eqb b c then [g_elem a (e⋅g_atom _ b⋅f) d] else [] end e end%list. 
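(* example (informal sketch): [g_dot1] merges two externally guarded terms through their inner atoms: ([a]⋅e⋅[b]) composed with ([b]⋅f⋅[c]) yields the single guarded term [a]⋅(e⋅[b]⋅f)⋅[c], while composing with ([b']⋅f⋅[c]) for [b<>b'] yields the empty list, reflecting the fact that the product of two distinct atoms is [0] (lemma [empty_atom_dot] below). *)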
(** [g_dot' h k] does the composition of two lists of externally guarded terms *) Definition g_dot' n m p (h: guards n m) (k: guards m p) := \sup_(x\in h) \sup_(y\in k) g_dot1 x y. (** the correctness of this construction relies on the following two lemma *) Lemma empty_atom_dot n a b: a<>b -> g_atom n a ⋅ g_atom n b ≡ 0. Proof. intros. setoid_rewrite <-inj_cap. rewrite (empty_atom_cap H). apply inj_bot. Qed. Lemma idem_atom_dot n a: g_atom n a ⋅ g_atom n a ≡ g_atom n a. Proof. setoid_rewrite <-inj_cap. now rewrite capI. Qed. (** correctness of [g_dot1] *) Lemma geval_dot n m p (x: guard n m) (y: guard m p): teval (g_dot1 x y) ≡ geval x ⋅ geval y. Proof. destruct x as [? a|? ? a e b]; destruct y as [? c|? ? c f d]; unfold g_dot1; (case eqb_spec; [intros <-|intro E]); unfold geval; rewrite ?sup_singleton; unfold sup. - now rewrite idem_atom_dot. - symmetry. apply (empty_atom_dot _ E). - now rewrite 2dotA, idem_atom_dot. - rewrite 2dotA, (empty_atom_dot _ E). ra. - now rewrite <-3dotA, idem_atom_dot. - rewrite <-2dotA, (empty_atom_dot _ E). ra. - transitivity (g_atom _ a⋅(e⋅(g_atom _ b⋅g_atom _ b)⋅f)⋅g_atom _ d). 2:ra. now rewrite idem_atom_dot. - transitivity (g_atom _ a⋅(e⋅(g_atom _ b⋅g_atom _ c)⋅f)⋅g_atom _ d). 2:ra. rewrite (empty_atom_dot _ E). ra. Qed. (** correctness of [g_dot'] *) Lemma teval_dot n m p (x: guards n m) (y: guards m p): teval (g_dot' x y) ≡ teval x ⋅ teval y. Proof. unfold g_dot'. rewrite sup_sup. setoid_rewrite sup_sup. setoid_rewrite geval_dot. rewrite dotsumx. now setoid_rewrite dotxsum. Qed. (** *** Kleene star *) (** Kleene star is defined by induction on the list of externally guarded terms, see Kozen and Smith' paper *) Definition fst n m (x: guard n m) := match x with g_pred a | g_elem a _ _ => a end. Definition lst n m (x: guard n m) := match x with g_pred a | g_elem _ _ a => a end. Definition g_inner_dot n m (x: guard n m): forall p, guard m p -> gregex n p := match x with | g_pred a => fun p y => match y with | g_pred b => 0 | g_elem b e c => if eqb a b ⊓ eqb a c then e else 0 end | g_elem a e b => fun p y => match y with | g_pred c => fun e => if eqb a b ⊓ eqb b c then e else 0 | g_elem c f d => fun e => if eqb b c ⊓ eqb a d then e⋅g_atom _ b⋅f else 0 end e end. Definition xitr n m (r: guard n m) q' := let rq' := g_dot' [r] q' in let a := fst r in let p := sup (@g_inner_dot _ _ r _) q' in g_dot' ([g_elem a (p⋅(g_atom _ a⋅p)^*) a]++g_one' _) rq'. Fixpoint g_str' n (x: guards n n) := match x with | [] => g_one' _ | r::q => let q' := g_str' q in q' ++ g_dot' q' (xitr r q') end. (** the correctness of this construction is substantially more involved than for the other ones *) Lemma geval_fst n m (r: guard n m): geval r ≡ g_atom _ (fst r) ⋅ geval r. Proof. destruct r. simpl fst; unfold geval. now rewrite idem_atom_dot. simpl fst; unfold geval. now rewrite 2dotA, idem_atom_dot. Qed. Lemma geval_lst n m (r: guard n m): geval r ≡ geval r ⋅ g_atom _ (lst r). Proof. destruct r. simpl fst; unfold geval. now rewrite idem_atom_dot. simpl fst; unfold geval. now rewrite <-3dotA, idem_atom_dot. Qed. Definition dirac n m: gregex n m. case (eqb_pos_spec n m). intros <-. exact 1. intros _. exact 0. Defined. Lemma dirac_refl n: dirac n n = 1. Proof. unfold dirac. case eqb_pos_spec. 2: congruence. intro. now rewrite cmp_eq_rect_eq. Qed. Lemma teval_inner_dot n m p (x: guard n m) (y: guard m p): dirac n p + g_atom _ (fst x) ⋅ g_inner_dot x y ⋅ g_atom _ (fst x) ≡ dirac n p + ofbool (eqb (fst x) (lst y)) ⋅ geval x ⋅ geval y. Proof. unfold g_inner_dot. revert p y. 
destruct x as [n a|n m a e b]; destruct y as [p c|p q c f d]; simpl fst; simpl lst; simpl geval. rewrite dirac_refl. case eqb. 2: ra. ra_normalise. setoid_rewrite <-inj_cap. rewrite <-inj_top, <-inj_cup. apply inj_weq. lattice. case eqb_spec; intro E. subst. case eqb_spec; intro E'. subst. simpl. ra_normalise. now rewrite idem_atom_dot. ra. simpl. ra_normalise. rewrite <-(dotA _ (g_atom p a)). rewrite empty_atom_dot by assumption. ra. case eqb_spec; intro E. subst. case eqb_spec; intro E'. subst. simpl. ra_normalise. now rewrite <-3dotA, idem_atom_dot. ra. simpl. case eqb_spec; intro E'. subst. ra_normalise. rewrite <-dotA. rewrite empty_atom_dot by congruence. ra. ra. case eqb_spec; intro E. subst. case eqb_spec; intro E'. subst. simpl. ra_normalise. now rewrite <-(dotA _ (g_atom p c) (g_atom p c)), idem_atom_dot. ra. simpl. case eqb_spec; intro E'. subst. ra_normalise. rewrite <-(dotA _ (g_atom p b) (g_atom p c)), empty_atom_dot by assumption. ra. ra. Qed. Lemma teval_xitr n m (r: guard n m) q: teval (xitr r q) ≡ (geval r ⋅ teval q)^+. Proof. unfold xitr. rewrite 2teval_dot, sup_app, 2sup_singleton, teval_one. symmetry. rewrite itr_str_l. rewrite (geval_fst r) at 2. rewrite <-(dotA (g_atom _ _)), str_dot. apply dot_weq. 2: reflexivity. rewrite cupC. unfold geval at 3. rewrite dotA. rewrite <-itr_str_l. rewrite itr_aea by now rewrite idem_atom_dot. rewrite <-str_itr. apply str_weq1. induction q as [|e q IH]. ra. simpl (sup _ _). rewrite 2dotxpls, 2dotplsx. rewrite <-(cupI 1), 2cupA, 2comm4. apply cup_weq. 2: assumption. clear IH. rewrite <-dirac_refl. rewrite teval_inner_dot. case eqb_spec; intro E. setoid_rewrite dot1x. rewrite E. now rewrite <-dotA, <-geval_lst. rewrite (geval_lst e), <-2dotA, empty_atom_dot by congruence. ra. Qed. Lemma teval_str n (x: guards n n): teval (g_str' x) ≡ (teval x)^*. Proof. induction x as [|r q IH]; simpl g_str'. rewrite teval_one. symmetry. apply str0. simpl (sup _ _). setoid_rewrite (cupC (geval r)). rewrite str_pls. rewrite sup_app, teval_dot, teval_xitr, IH. rewrite (str_itr (_⋅_)). ra. Qed. (** ** summing up the constructions, by induction *) Fixpoint hat n m (e: gregex n m): guards n m := match e with | g_zer _ _ _ => [] | g_prd _ _ p => g_prd' _ p | g_pls e f => hat e ⊔ hat f | g_dot e f => g_dot' (hat e) (hat f) | g_itr e => g_dot' (hat e) (g_str' (hat e)) | g_var i => g_var' i end. Theorem teval_hat n m (e: gregex n m): teval (hat e) ≡ e. Proof. induction e; simpl hat. reflexivity. apply teval_prd. apply teval_var. setoid_rewrite sup_app. now apply cup_weq. rewrite teval_dot. now apply dot_weq. rewrite teval_dot, teval_str, <-itr_str_l. now apply itr_weq. Qed. (** * Relationship between generalised regular expressions and (plain) regular expressions *) (** ** extended alphabet *) (** a letter in the extended alphabet is either a Kleene variable, a positive predicate variable, or a negative one. We moreover need to record the type of the corresponding test in the two latter cases *) Inductive letter := | l_pos (n: positive) (p: ord pred) | l_neg (n: positive) (p: ord pred) | l_var (i: Sigma). (** the above type can be retracted into positives: this saves us from proving KA completeness on an arbitrary alphabet (this would require a lot of polymorphic definitions) *) Definition lp (l: letter): positive := match l with | l_pos n x => mk_sum (inl (mk_sum (inl (mk_pair (n, mk_ord x))))) | l_neg n x => mk_sum (inl (mk_sum (inr (mk_pair (n, mk_ord x))))) | l_var i => mk_sum (inr i) end. 
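(* remark (informal): [lp] encodes the three kinds of letters into disjoint ranges of [positive]: positive tests in the left-left summand, negated tests in the left-right summand, and Kleene variables in the right summand; [pl] below decodes them, and [plp] shows that this decoding is a retraction of [lp]. *)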
Definition pl (p: positive): letter := match get_sum p with | inl p => match get_sum p with | inl p => let '(n,p) := get_pair p in match get_ord _ p with None => l_var 1 | Some x => l_pos n x end | inr p => let '(n,p) := get_pair p in match get_ord _ p with None => l_var 1 | Some x => l_neg n x end end | inr i => l_var i end. Lemma plp l: pl (lp l) = l. Proof. destruct l; unfold pl, lp; now rewrite !get_mk_sum, ?get_mk_pair, ?get_mk_ord. Qed. (** the retraction into positives also equips [letter] with a [cmpType] structure *) Definition compare_letter (a b: letter) := cmp (lp a) (lp b). Lemma compare_letter_spec a b: compare_spec (a=b) (compare_letter a b). Proof. unfold compare_letter. case cmp_spec; intro E; constructor. now rewrite <-(plp a), <-(plp b), E. congruence. congruence. Qed. Canonical Structure cmp_letter := mk_simple_cmp _ compare_letter_spec. (** typing extended letters: - predicate letters come with their type - Kleene letters use the typing environment *) Definition src' l := match l with | l_pos n _ | l_neg n _ => n | l_var i => s i end. Definition tgt' l := match l with | l_pos n _ | l_neg n _ => n | l_var i => t i end. (** ** regular expressions on the extended alphabet *) (** [expr3] is intuitively the set of typed regular expressions on [letter], while [uexpr3] is the set of untyped regular expressions on [letter] we use [uexpr3] rather than [regex] to use the untyping theorem (which we proved on generic expressions rather than regular expressions). we now define several maps between these representations: - [o : gregex n m -> expr3 n m] (injective) - [o': expr3 n m -> gregex] (partial, since expr3 has to many operations) - [v : expr3 n m -> regex] (type-erasing, partial for the same reasons) - [w : regex -> uexpr3] (injective, even if we do not prove it) - [u : expr3 n m -> uexpr3] (type-erasing, actually [untyping.erase]) and we prove the following properties: - [o'o = id] (yielding injectivity of [o]) - [wvo = uo] (allowing us to use the untyping theorem) *) Notation expr3 n m := (expr_ops src' tgt' BKA n m). Notation uexpr3 := (expr_ops (fun _ => xH) (fun _ => xH) BKA xH xH). (** ** [o: gregex n m -> expr3 n m] *) Section n. Variable n: positive. Import lsyntax. (** we need to push negation to leaves in Boolean expressions *) Fixpoint o_pred (x: test): expr3 n n := match x with | e_bot => 0 | e_top => 1 | e_cup x y => o_pred x + o_pred y | e_cap x y => o_pred x ⋅ o_pred y | e_neg x => o_npred x | e_var a => syntax.e_var (l_pos n a) end with o_npred (x: test): expr3 n n := match x with | e_bot => 1 | e_top => 0 | e_cup x y => o_npred x ⋅ o_npred y | e_cap x y => o_npred x + o_npred y | e_neg x => o_pred x | e_var a => syntax.e_var (l_neg n a) end. Import syntax. Lemma o_pred_level x: e_level (o_pred x) ≪ BKA with o_npred_level x: e_level (o_npred x) ≪ BKA. Proof. destruct x; simpl o_pred; simpl e_level; rewrite ?merge_spec; intuition. destruct x; simpl o_npred; simpl e_level; rewrite ?merge_spec; intuition. Qed. End n. Fixpoint o n m (e: gregex n m): expr3 n m:= match e with | g_zer _ _ _ => 0 | g_prd _ _ p => o_pred _ p | g_pls e f => o e + o f | g_dot e f => o e ⋅ o f | g_itr e => (o e)^+ | g_var j => e_var (l_var j) end. Lemma o_sup n m I J (f: I -> gregex n m): o (sup f J) = \sup_(i\in J) (o (f i)). Proof. apply f_sup_eq; now f_equal. Qed. Lemma o_level n m (e: gregex n m): e_level (o e) ≪ BKA. Proof. pose proof o_pred_level. induction e; simpl o; simpl e_level; rewrite ?merge_spec; intuition. Qed. 
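(* illustration (informal, hypothetical term): [o] only renames leaves, e.g. the KAT expression [a]⋅x⋅[!b] (with [a,b] predicate variables and [x] a Kleene variable) is mapped to the plain expression over the letters [l_pos n a], [l_var x] and [l_neg m b]; negations have been pushed to the leaves by [o_npred], so that the image only uses BKA operations, as recorded by [o_level]. *)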
(** ** [o': expr3 n m -> gregex n m] *) Definition o': forall n m, expr3 n m -> gregex n m := @eval _ src' tgt' (gregex_monoid_ops pred s t) id (fun l => match l return gregex (src' l) (tgt' l) with | l_pos n p => g_prd _ _ (lsyntax.e_var p) | l_neg n p => g_prd _ _ (! lsyntax.e_var p) | l_var i => g_var i end). Lemma o'o_pred: forall n (a: test), o' (o_pred n a) ≡ g_prd _ _ a with o'o_npred: forall n (a: test), o' (o_npred n a) ≡ g_prd _ _ (!a). Proof. destruct a. symmetry. apply inj_bot. symmetry. apply inj_top. setoid_rewrite inj_cup. apply cup_weq; apply o'o_pred. setoid_rewrite inj_cap. apply dot_weq; apply o'o_pred. apply o'o_npred. reflexivity. destruct a. symmetry. etransitivity. apply inj_weq. apply negbot. apply inj_top. symmetry. etransitivity. apply inj_weq. apply negtop. apply inj_bot. etransitivity. 2: apply inj_weq; symmetry; apply negcup. rewrite inj_cap. apply dot_weq; apply o'o_npred. etransitivity. 2: apply inj_weq; symmetry; apply negcap. rewrite inj_cup. apply cup_weq; apply o'o_npred. etransitivity. 2: apply inj_weq; symmetry; apply negneg. apply o'o_pred. reflexivity. Qed. (** [o] admits [o'] as left-inverse *) Lemma o'o: forall n m (e: gregex n m), o' (o e) ≡ e. Proof. induction e; simpl o; simpl o'. reflexivity. apply o'o_pred. reflexivity. now apply cup_weq. now apply dot_weq. now apply itr_weq. Qed. Lemma o'_weq n m: Proper (weq ==> weq) (@o' n m). Proof. intros ? ? H. apply (H (gregex_monoid_ops _ _ _) _ id). Qed. (** so that [o] is injective *) Corollary o_inj n m (e f: gregex n m): o e ≡ o f -> e ≡ f. Proof. intro H. apply o'_weq in H. revert H. now rewrite 2o'o. Qed. (** ** [expr3 -v-> regex -w-> uexpr3] *) Definition v: forall n m, expr3 n m -> regex := eval (f':=fun _ => regex_tt) (fun l => r_var (lp l)). Definition w (e: regex): uexpr3 := eval (X:=expr_ops _ _ BKA) (f':=fun _ => xH) (fun p => e_var (pl p)) (to_expr e). Lemma wv_u n m (e: expr3 n m): e_level e ≪ BKA -> w (v e) ≡ erase BKA e. Proof. unfold w. induction e; simpl e_level; intro Hl; try discriminate_levels; try (first [reflexivity|apply dot_weq|apply cup_weq|apply str_weq]); try (first [apply IHe1|apply IHe2|apply IHe]; solve_lower'). symmetry. simpl (erase _ _). rewrite itr_str_l, <-IHe. reflexivity. solve_lower'. simpl. now rewrite plp. Qed. (** key lemma to be able to use the untyping theorem *) Lemma wvo_uo n m (e: gregex n m): w (v (o e)) ≡ erase BKA (o e). Proof. apply wv_u, o_level. Qed. (* note: on doit pouvoir prouver une égalité forte *) Lemma w_weq: Proper (weq ==> weq) w. Proof. intros ? ? H. apply (H (expr_ops _ _ _) _ (fun _ => xH)). Qed. Lemma v_sup n m I J (f: I -> expr3 n m): v (sup f J) = \sup_(i\in J) (v (f i)). Proof. apply f_sup_eq; now f_equal. Qed. (** * From guarded string languages to languages on the extended alphabet *) (** i.e., we define a coercion from [glang] [lang] *) Notation word := (list positive). Notation lang := (lang_ops positive lang_tt lang_tt). (** converting an atom into a word of the extended alphabet. this word has length [pred]: each predicate variable appears exactly once, with its assigned truth value *) Definition atom_to_word n (a: Atom) := map (fun i => if set.mem a i then lp (l_pos n i) else lp (l_neg n i)) (seq pred). (** converting an guarded string into a word of the extended alphabet the resulting word has length [pred+(1+pred)⋅n], where [n] is the length of the guarded string *) Fixpoint gword_to_word n (w: gword) := match w with | tnil a => atom_to_word n a | tcons a i w => atom_to_word n a ++ lp (l_var i) :: gword_to_word (t i) w end. 
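(* example (informal): a guarded string with a single letter, say atom [a], letter [i], atom [b], is sent to the word [atom_to_word n a ++ lp (l_var i) :: atom_to_word (t i) b]; each atom thus contributes exactly [pred] letters of the extended alphabet, one per predicate variable, positive or negated according to its value in the atom. *)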
(** we convert a guarded string language by converting its words *) Definition gl n m (G: glang n m): lang := fun w => exists g, w = gword_to_word n g /\ proj1_sig G g. Instance gl_leq n m: Proper (leq ==> leq) (@gl n m). Proof. intros G G' H w [v [? Hv]]. exists v. split. assumption. apply H, Hv. Qed. Instance gl_weq n m: Proper (weq ==> weq) (@gl n m) := op_leq_weq_1. (** auxiliary definition for the following auxiliary lemma: [gword_to_word' w] return the suffix of the word corresponding to [w], where the initial atom has been omitted *) Definition gword_to_word' (w: gword) := match w with | tnil a => [] | tcons a i w => lp (l_var i) :: gword_to_word (t i) w end. Lemma gword_to_word_cut n w: gword_to_word n w = atom_to_word n (thead w) ++ gword_to_word' w. Proof. destruct w. apply app_nil_end. reflexivity. Qed. Lemma gword_tapp: forall x y z, tapp x y z -> forall n, gword_to_word n z = gword_to_word n x ++ gword_to_word' y. Proof. induction 1; simpl; intros n. apply app_nil_end. reflexivity. now rewrite IHtapp, app_ass. Qed. (** * 2. G (hat e) = R (hat e) (more formally, gl (G (hat e)) ≡ R (v (o (hat e)))) *) Notation R := R.lang. Notation G := G.lang. Notation latom n a := (eq (atom_to_word n a): lang). (** regular language corresponding to an atom *) Lemma R_lang_atom n a: R (v (o (g_atom n a))) ≡ latom n a. Proof. simpl o. unfold atom, atom_to_word. induction (seq pred). apply R.lang_1. simpl (sup _ _). setoid_rewrite R.lang_dot. setoid_rewrite (eq_app_dot _ [_]). apply dot_weq. case set.mem; apply R.lang_var. assumption. Qed. (** ** properties of [gl] *) (** [gl] is a semilattice morphism (note that it does not preserve products/iterations) *) Lemma gl_bot n m: @gl n m bot ≡ bot. Proof. split. intros [? [? []]]. intros []. Qed. Lemma gl_cup n m (e f: glang n m): gl (e ⊔ f) ≡ gl e ⊔ gl f. Proof. split. intros [w [-> [H|H]]]; [left|right]; exists w; split; auto. intros [H|H]; destruct H as [w [-> H]]; exists w; split; trivial; (now left) || (now right). Qed. Lemma gl_sup n m I J (f: I -> glang n m): gl (sup f J) ≡ \sup_(i\in J) gl (f i). Proof. apply f_sup_weq. apply gl_bot. apply gl_cup. Qed. (** [gl] maps atoms to atoms *) Lemma gl_atom n a: gl (tatom n a) ≡ latom n a. Proof. intro w; split. intros [g [-> <-]]. reflexivity. intros <-. eexists. split. 2: reflexivity. reflexivity. Qed. (** image of single letter traces under [gl] *) Lemma gl_single' a i b: gl (tsingle' a i b) ≡ eq (atom_to_word (s i) a++[lp (l_var i)]++atom_to_word (t i) b). Proof. intro w; split. intros [g [-> <-]]. reflexivity. intros <-. eexists. split. 2: reflexivity. reflexivity. Qed. (** key auxiliary lemma for composition: we need to use an atom as a cutting point *) Lemma gl_dot n m p (e: glang n m) a (f: glang m p) (e' f': lang): gl (e ⋅ tatom m a) ≡ e' ⋅ latom m a -> gl (tatom m a ⋅ f) ≡ latom m a ⋅ f' -> gl (e ⋅ tatom m a ⋅ f) ≡ e' ⋅ latom m a ⋅ f'. Proof. setoid_rewrite weq_spec. intros He Hf. split; intro w. - apply proj1 in He. apply proj1 in Hf. intros [g [-> [xa [x Hx [? <- Hxa]] [y Hy Hg]]]]. apply tapp_x_nil_eq in Hxa as [Ha ->]. destruct (tapp_bounds Hg) as (H1&H2&H3). (* TOSIMPL *) destruct (He (gword_to_word n x)) as [x' Hx' [? <- Hxa']]. repeat eexists; eauto. rewrite Ha. apply tapp_x_nil. destruct (Hf (gword_to_word m y)) as [? <- [y' Hy' Hay']]. repeat eexists; eauto. rewrite Ha. rewrite H1. apply tapp_nil_x. repeat eexists; eauto. rewrite app_ass, <-Hay', (gword_to_word_cut m y), (gword_tapp Hg), <-app_ass. congruence. - apply proj2 in He. apply proj2 in Hf. intros [ea [x Hx [? 
<- ->]] [y Hy ->]]. edestruct (He (x++atom_to_word m a)) as [xa [Hxa [x' Hx' [? <- Hxa']]]]. eexists; eauto. edestruct (Hf (atom_to_word m a++y)) as [ay [Hay [? <- [y' Hy' Hay']]]]. eexists; eauto. apply tapp_x_nil_eq in Hxa' as [-> ->]. apply tapp_nil_x_eq in Hay' as [Ha ->]. destruct (tapp_cat x' y' Ha) as [z Hz]. repeat eexists; eauto. 2: apply tapp_x_nil. rewrite (gword_tapp Hz), <-Hxa, app_ass, Hay, gword_to_word_cut, ass_app. congruence. Qed. Lemma gl_nildot a n m (e: glang n m): exists e', gl (tatom n a ⋅ e) ≡ latom n a ⋅ e'. Proof. exists (fun w => exists g, proj1_sig e g /\ thead g = a /\ w = gword_to_word' g). rewrite weq_spec. split; intro w. intros [u [-> [? <- [v Hv H]]]]. apply tapp_nil_x_eq in H as [-> ->]. repeat eexists; eauto. apply gword_to_word_cut. intros [? <- [u (x&He&<-&->) ->]]. repeat eexists; eauto. apply eq_sym, gword_to_word_cut. apply tapp_nil_x. Qed. (** key auxiliary lemma for iteration: we need to use an atom as bounds *) Lemma gl_itr n (e: glang n n) a (e': lang): gl (tatom n a ⋅ e ⋅ tatom n a) ≡ e' ⋅ latom n a -> gl ((tatom n a ⋅ e)^+ ⋅ tatom n a) ≡ e'^+ ⋅ latom n a. Proof. rename n into m. intro H. rewrite <-itr_dot. apply antisym. - apply weq_spec in H as [H _]. intros w [g [-> [? <- [u [n Hn] Hg]]]]. apply tapp_nil_x_eq in Hg as [Hu ->]. revert u Hu Hn. induction n; intros u Hu Hn. destruct Hn as [v Hv [? <- Hn]]. apply tapp_x_nil_eq in Hn as [Hn ->]. destruct (H (gword_to_word m v)) as [u He [? <- H']]. repeat eexists; eauto. rewrite Hu. apply tapp_nil_x. rewrite Hn. apply tapp_x_nil. exists u. now exists O. eauto. destruct Hn as [w [v Hv [? <- Hn]] [w' Hw' H']]. apply tapp_x_nil_eq in Hn as [Hn ->]. assert (H'' := tapp_bounds H'). (* TOSIMPL *) assert (Haw: a=thead w') by (rewrite Hn; intuition congruence). destruct (H (gword_to_word m v)) as [xe He [? <- Hxe]]. repeat eexists; eauto. rewrite Hu, <-(tapp_head H'). apply tapp_nil_x. rewrite Hn; apply tapp_x_nil. assert (Hext: e' ⋅ (e'^+ ⋅ latom m a) ≦ e'^+ ⋅ latom m a). now rewrite dotA, itr_cons. apply Hext. clear Hext. eexists. eassumption. eexists. apply IHn. apply Haw. assumption. rewrite (gword_tapp H'), Hxe, gword_to_word_cut, Haw, app_ass. congruence. - apply itr_ind_l. now rewrite <-H, <-itr_ext, dotA. rewrite <-itr_cons at 2. destruct (gl_nildot a ((e ⋅ tatom m a)^+)) as [f' Hf]. rewrite 2dotA. rewrite gl_dot by eassumption. now rewrite Hf, dotA. Qed. (** ** clean terms: those on which [G] and [R] coincide *) Definition clean1 n m (e: guard n m) := gl (G (geval e)) ≡ R (v (o (geval e))). Definition clean n m (e: guards n m) := forall g, In g e -> clean1 g. Lemma G_clean n m (e: guards n m): clean e -> gl (G (teval e)) ≡ R (v (o (teval e))). Proof. intro He. rewrite o_sup, v_sup, lang_sup, R.lang_sup, gl_sup. apply sup_weq'. reflexivity. intros g Hg. apply He, Hg. Qed. (** basic constructors for clean terms *) Lemma clean_bot n m: @clean n m bot. Proof. intros _ []. Qed. Lemma clean_cup n m (e f: guards n m): clean e -> clean f -> clean (e++f). Proof. unfold clean. setoid_rewrite in_app_iff. intuition. Qed. Lemma clean_single n m (g: guard n m): clean1 g -> clean [g]. Proof. now intros ? ? [<-|[]]. Qed. Lemma clean_sup n m I J (f: I -> guards n m): (forall i, In i J -> clean (f i)) -> clean (sup f J). Proof. apply P_sup. apply clean_bot. apply clean_cup. Qed. Lemma clean_map n m I J (f: I -> guard n m): (forall i, In i J -> clean1 (f i)) -> clean (map f J). Proof. rewrite map_sup. intro. apply clean_sup. intros. apply clean_single. auto. Qed. 
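(* remark (informal): a bare Kleene variable would not be clean: its guarded-string language records the surrounding atoms, whereas the language of its regular-expression image reduces to the single-letter word [l_var i]. This is why [hat] only produces externally guarded occurrences [g_elem a (g_var i) b], which are handled by [clean_elem_var] below. *)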
(** ** the basic ingredients of the [hat] function preserve cleaness *) Lemma clean_pred n a: clean1 (@g_pred n a). Proof. unfold clean1, geval. rewrite G.lang_atom, R_lang_atom. apply gl_atom. Qed. Lemma clean_elem_var a i b: clean1 (g_elem a (g_var i) b). Proof. unfold clean1, geval. rewrite 2G.lang_dot, 2G.lang_atom. setoid_rewrite atom_single_atom. rewrite gl_single'. rewrite 2eq_app_dot. do 2 setoid_rewrite R.lang_dot. setoid_rewrite R.lang_var. setoid_rewrite <- R_lang_atom. apply dotA. Qed. Lemma clean_dot' n m p (e: guards n m) (f: guards m p): clean e -> clean f -> clean (g_dot' e f). Proof. intros He Hf. apply clean_sup. intros x Hx. apply clean_sup. intros y Hy. apply He in Hx. apply Hf in Hy. clear e f He Hf. destruct x as [a|a e b]; destruct y as [c|c f d]; unfold g_dot1; (case eqb_spec; [intros <-|intro E]); try apply clean_bot; apply clean_single; trivial. revert Hx Hy. unfold clean1, geval. rewrite 8G.lang_dot, 3G.lang_atom. simpl o; simpl v. fold_regex. rewrite 8R.lang_dot. repeat change (o_pred ?a (atom ?b)) with (o (g_atom a b)). setoid_rewrite R_lang_atom. intros Hx Hy. rewrite 4dotA, <-dotA, <-(dotA _ _ (latom _ _)). apply gl_dot. assumption. rewrite 2dotA. assumption. Qed. Lemma clean_one' n: clean (g_one' n). Proof. apply clean_map. intros. apply clean_pred. Qed. Lemma clean_inner_dot n m (e: guard n m) (He: clean1 e): forall p (f: guard m p), clean1 f -> gl (tatom n (fst e) ⋅ G (g_inner_dot e f) ⋅ tatom p (fst e)) ≡ latom n (fst e) ⋅ R (v (o (g_inner_dot e f))) ⋅ latom p (fst e). Proof. assert (Z: forall n m p q (e: glang n m) (f: glang p q) e' f', gl (e⋅G 0⋅f) ≡ e'⋅R 0⋅f'). intros. rewrite G.lang_0, dotx0, dot0x, gl_bot. setoid_rewrite R.lang_0. ra. destruct e as [n a|n m a e b]; destruct f as [p c|p q c f d]; intros Hf; simpl fst; simpl lst; simpl g_inner_dot. - apply Z. - case eqb_spec. intros <-. 2: intros; apply Z. case eqb_spec. intros <-. 2: intros; apply Z. unfold andb. unfold clean1, geval in Hf. rewrite 2G.lang_dot, 2G.lang_atom in Hf. rewrite Hf. rewrite <-2R_lang_atom, <-2R.lang_dot. reflexivity. - case eqb_spec. intros <-. 2: intros; apply Z. case eqb_spec. intros <-. 2: intros; apply Z. unfold andb. unfold clean1, geval in He. rewrite 2G.lang_dot, 2G.lang_atom in He. rewrite He. rewrite <-2R_lang_atom, <-2R.lang_dot. reflexivity. - case eqb_spec. intros <-. 2: intros; apply Z. case eqb_spec. intros <-. 2: intros; apply Z. unfold andb. unfold clean1, geval in He. rewrite 2G.lang_dot, 2G.lang_atom in He. unfold clean1, geval in Hf. rewrite 2G.lang_dot, 2G.lang_atom in Hf. simpl o in *; simpl v in *. rewrite 2R.lang_dot in He. repeat change (o_pred ?a (atom ?b)) with (o (g_atom a b)) in He. setoid_rewrite R_lang_atom in He. rewrite 2R.lang_dot in Hf. repeat change (o_pred ?a (atom ?b)) with (o (g_atom a b)) in Hf. setoid_rewrite R_lang_atom in Hf. rewrite 2R.lang_dot. repeat change (o_pred ?a (atom ?b)) with (o (g_atom a b)). setoid_rewrite R_lang_atom. setoid_rewrite G.lang_dot. rewrite dotA. setoid_rewrite G.lang_dot. rewrite G.lang_atom. rewrite dotA. rewrite <-(dotA _ (G f)). rewrite <-2dotA in Hf. rewrite gl_dot by eassumption. ra. Qed. Lemma clean_str' n (e: guards n n): clean e -> clean (g_str' e). Proof. induction e; intro Hae; simpl g_str'. apply clean_one'. assert (He: clean e) by (intros ? ?; apply Hae; now right). specialize (IHe He). assert (Ha: clean1 a) by (apply Hae; now left). clear Hae. apply clean_cup. assumption. apply clean_dot'. assumption. apply clean_dot'. apply clean_cup. 2: apply clean_one'. 
2: apply clean_dot'; [now apply clean_single | assumption]. revert IHe. generalize (g_str' e). clear e He. intros e He. apply clean_single. unfold clean1. unfold geval. simpl o; simpl v; fold_regex. rewrite 2dotA, <-str_itr, <-2itr_str_l. rewrite G.lang_dot, G.lang_itr, G.lang_dot, G.lang_sup, G.lang_atom. rewrite R.lang_dot, R.lang_itr, R.lang_dot, o_sup, v_sup, R.lang_sup. repeat change (o_pred ?a (atom ?b)) with (o (g_atom a b)). setoid_rewrite R_lang_atom. apply gl_itr. rewrite 2dotxsum, 2dotsumx, gl_sup. apply sup_weq'. reflexivity. intros f Hf. apply clean_inner_dot. apply Ha. apply He, Hf. Qed. (** ** the [hat] function produces clean terms *) Theorem clean_hat n m (e: gregex n m): clean (hat e). Proof. induction e; simpl hat. apply clean_bot. apply clean_map. intros ? _. apply clean_pred. apply clean_sup. intros ? _. apply clean_sup. intros ? _. apply clean_single, clean_elem_var. now apply clean_cup. now apply clean_dot'. apply clean_dot'. assumption. apply clean_str'. assumption. Qed. (** whence the desired property, as a corollary *) Corollary G_hat n m (e: gregex n m): gl (G e) ≡ R (v (o (teval (hat e)))). Proof. rewrite <-(teval_hat e) at 1. apply G_clean, clean_hat. Qed. (** * KAT completeness *) (** we assemble all the pieces as explained at the beginning of this file *) Theorem kat_complete_weq n m: forall e f: gregex n m, G e ≡ G f -> e ≡ f. Proof. intros e f H. apply gl_weq in H. rewrite 2G_hat in H. apply ka_complete_weq in H. apply w_weq in H. rewrite 2wvo_uo in H. apply erase_faithful_weq in H. 2: now rewrite 2o_level. 2: reflexivity. apply o_inj in H. rewrite 2teval_hat in H. assumption. Qed. (** we deduce a similar result for language inclusions *) Corollary kat_complete_leq n m: forall e f: gregex n m, G e ≦ G f -> e ≦ f. Proof. intros e f. rewrite 2leq_iff_cup, <-G.lang_pls. apply kat_complete_weq. Qed. (** and the above implications actually are equivalences *) Corollary kat_correct_complete_weq n m: forall e f: gregex n m, G e ≡ G f <-> e ≡ f. Proof. split. apply kat_complete_weq. apply G.lang_weq. Qed. Corollary kat_correct_complete_leq n m: forall e f: gregex n m, G e ≦ G f <-> e ≦ f. Proof. split. apply kat_complete_leq. apply G.lang_leq. Qed. (** * KAT decidability *) (** additional corollaries (not used): - KAT proofs reduce to KA proofs - KAT equality is decidable *) Corollary kat_reduces_to_ka n m: forall e f: gregex n m, e ≡f <-> v (o (teval (hat e))) ≡ v (o (teval (hat f))). Proof. intros. split; intro H. apply ka_complete_weq. now rewrite <-2G_hat, H. apply w_weq in H. rewrite 2wvo_uo in H. apply erase_faithful_weq in H. 2: now rewrite 2o_level. 2: reflexivity. apply o_inj in H. now rewrite 2teval_hat in H. Qed. Corollary kat_dec n m: forall e f: gregex n m, {e ≡f} + {~(e ≡f)}. Proof. intros. eapply sumbool_iff. symmetry. apply kat_reduces_to_ka. apply ka_weq_dec. Qed. End KatCompleteness. relation-algebra-v.1.7.9/theories/kat_reification.v000066400000000000000000000201641440504774100223610ustar00rootroot00000000000000(** * kat_reification: various definitions to ease the KAT reification process *) (** We reify KAT terms as [syntax.expr] trees over an alphabet including [lsyntax.expr] for Boolean tests. This allows us to reuse the tools developped for ra_{normalise,reflexivity}, and this gives us more flexibility Here define the conversion function from this syntax to [gregex], as well as dependently typed positive maps to ease the definition of the environment for predicate variables *) Require Import lsyntax ordinal lset. 
Require Export positives kat gregex syntax. Section s. Notation Idx := positive. (* type indices *) Notation Sigma := positive. (* Kleene variable indices *) Notation Pred := positive. (* predicate variables indices *) Context {X: kat.ops}. (** * Dependently types positive maps *) (** we use dependently typed binary trees: [v f'] is used to repredent dependent functions of type [forall n, Pred -> tst (f' n)] *) Inductive v: (Idx -> ob X) -> Type := | v_L: forall f', v f' | v_N: forall f', v (fun n => f' (xO n)) -> (Pred -> tst (f' xH)) -> v (fun n => f' (xI n)) -> v f'. (** [v_get] transforms such a tree into the corresponding function, using [lattice.top] to fill the gaps *) Fixpoint v_get f' (t: v f') n p: tst (f' n) := match t with | v_L _ => lattice.top | v_N _ l m r => match n with | xO n => v_get _ l n p | xI n => v_get _ r n p | xH => m p end end. (** [v_add f' t n x] adds the binding [(n,x)] to [t] *) Fixpoint v_add f' (t: v f') n: (Pred -> tst (f' n)) -> v f' := match t with | v_L f' => match n with | xH => fun x => v_N _ (v_L _) x (v_L _) | xO n => fun x => v_N _ (v_add (fun n => f' (xO n)) (v_L _) n x) (fun _ => lattice.top) (v_L _) | xI n => fun x => v_N _ (v_L _) (fun _ => lattice.top) (v_add (fun n => f' (xI n)) (v_L _) n x) end | v_N f' l y r => match n with | xH => fun x => v_N _ l x r | xO n => fun x => v_N _ (v_add (fun n => f' (xO n)) l n x) y r | xI n => fun x => v_N _ l y (v_add (fun n => f' (xI n)) r n x) end end. (** * Syntax of KAT reification *) (** we use packages for Kleene variables, as for [ra_refication] *) Context {f': Idx -> ob X} {fs: Sigma -> Pack X f'}. (** reified KAT expressions are just generic expressions ([syntax.expr]) over the following alphabet, which includes [lsyntax.expr] for Boolean expressions. The index type of the Boolean expressions is declared only once, at the [e_var] nodes switching from [syntax.expr] to [lsyntax.expr] *) Definition var := (Sigma + Idx * lsyntax.expr Pred)%type. (** typing functions for this alphabet: use the packages for Kleene variables, and the declared type for Boolean expressions *) Definition s' (a: var) := match a with inl a => src_ fs a | inr (n,_) => n end. Definition t' (a: var) := match a with inl a => tgt_ fs a | inr (n,_) => n end. (** final type of reified KAT expressions *) Definition kat_expr := expr s' t'. (** additional constructors: - [e_inj n p] injects a Boolean expression [p], at type [n] - [e_var i] produces a Kleene variable - [p_var a] produces a predicat variable *) Definition e_inj n p: kat_expr n n := e_var (inr (n,p)). Definition e_var i: kat_expr (src_ fs i) (tgt_ fs i) := e_var (inl i). Definition p_var := @lsyntax.e_var Pred. Section e. (** * Interpretation of KAT reified expressions *) (** we use the interpretations functions [syntax.expr] and [lsyntax.expr], with the appropriate substitutions *) Variable fp: forall n, Pred -> tst (f' n). Definition eval: forall n m, kat_expr n m -> X (f' n) (f' m) := syntax.eval (fun a => match a return X (f' (s' a)) (f' (t' a)) with | inl a => val (fs a) | inr (n,p) => inj (lsyntax.eval (fp n) p) end). End e. (** * Predicate variables of a KAT expression *) Fixpoint vars {n m} (e: kat_expr n m): list Pred := match e with | e_zer _ _ | e_top _ _ | e_one _ => [] | e_pls x y | e_cap x y | e_ldv x y | e_rdv x y | e_dot x y => union (vars x) (vars y) | e_neg x | e_itr x | e_str x | e_cnv x => vars x | syntax.e_var (inr (_,x)) => lsyntax.vars x | syntax.e_var _ => [] end. (** index of an element in a list, as an ordinal. 
Together with the list of all predicate variables indices appearing in a reified term, this function will allow us to convert positive indices to ordinal ones, as required byb [gregex]. *) Fixpoint idx (x: Pred) (l: list Pred): option (ord (length l)) := match l with | [] => None | y::q => if eqb_pos x y then Some ord0 else match idx x q with Some i => Some (ordS i) | None => None end end. (** * From reified KAT expressions to [gregex] *) (** we interpret the expression in the [gregex] model, mapping all Boolean expressions with positives variables to their counterpart with ordinal variables *) Definition to_gregex (l: list positive): forall n m, kat_expr n m -> gregex (length l) (src_ fs) (tgt_ fs) n m := syntax.eval (fun a => match a return gregex _ (src_ fs) (tgt_ fs) (s' a) (t' a) with | inl a => g_var a | inr (n,p) => g_prd _ _ (* here we just map all predicate variable indices into their ordinal counterpart *) (lsyntax.eval (fun x => match idx x l with | Some i => lsyntax.e_var i | None => lsyntax.e_top: lsyntax.expr_ BL end) p) end). (** correctness of the [idx] function *) Lemma in_idx x l: In x l -> exists j, idx x l = Some j /\ nth j l xH = x. Proof. induction l. intros []. intro Hl. simpl idx. case eqb_spec. intros <-. exists ord0. now split. destruct Hl as [<-|Hl]. congruence. intros _. apply IHl in Hl as [j [-> Hj]]. exists (ordS j). split. reflexivity. destruct j. assumption. Qed. (** factorisation of the interpretation function, through [gregex] since the type of expressions is slightly to big (it contains residuals and converse, for instance), we have to check that the expression does not use such constructions, whence the first hypothesis. *) Lemma to_gregex_eval {L: laws X} v n m e fp: e_level e ≪ BKA -> vars e ≦ v -> eval fp n m e ≡ gregex.eval f' (fun n i => fp n (nth i v xH)) (fun i => val (fs i)) (to_gregex v n m e). Proof. induction e; simpl e_level; simpl (vars _); intros Hl Hv; try discriminate_levels; simpl; rewrite ?union_app in Hv. reflexivity. symmetry. apply kat.inj_top. apply cup_weq. apply IHe1. solve_lower'. rewrite <- Hv. lattice. apply IHe2. solve_lower'. rewrite <- Hv. lattice. apply dot_weq. apply IHe1. solve_lower'. rewrite <- Hv. lattice. apply IHe2. solve_lower'. rewrite <- Hv. lattice. apply itr_weq, IHe. solve_lower'. assumption. rewrite kat.inj_top, <-str_itr. apply str_weq, IHe. solve_lower'. assumption. destruct a as [a|[n p]]. reflexivity. simpl. apply kat.inj_weq. clear Hl. induction p; simpl; simpl (lsyntax.vars _) in Hv; rewrite ?union_app in Hv. reflexivity. reflexivity. apply cup_weq. apply IHp1. rewrite <-Hv. lattice. apply IHp2. rewrite <-Hv. lattice. apply cap_weq. apply IHp1. rewrite <-Hv. lattice. apply IHp2. rewrite <-Hv. lattice. apply neg_weq, IHp. assumption. simpl. destruct (in_idx a v) as [j [-> Hj]]. apply Hv; now left. simpl. now rewrite Hj. Qed. (** as a corollary, we get the reification lemma (The hypothesis that expressions do not use forbidden symbols will be easily checked by reflection, dynamically.) *) Corollary to_gregex_weq {L: laws X} n m e f fp: e_level e + e_level f ≪ BKA -> (let v := vars (e_pls e f) in to_gregex v n m e ≡ to_gregex v n m f) -> eval fp n m e ≡ eval fp n m f. Proof. intros Hl H. rewrite 2(to_gregex_eval (vars (e_pls e f))) by (solve_lower' || simpl (vars _); rewrite union_app; lattice). apply H, L. Qed. End s. Arguments e_inj _ _ _ _%positive _%last. Arguments e_var _ _ _ _%positive. Arguments p_var _%positive. (** Load ML reification modules *) Declare ML Module "coq-relation-algebra.kat". 
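(* note (informal): this reification infrastructure is consumed by the tactics of kat_tac.v: the ML plugin declared above builds the typing environment, the Kleene and predicate environments, and the two reified terms, after which [to_gregex_weq] (together with the decision procedure on [gregex]) closes the goal. *)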
relation-algebra-v.1.7.9/theories/kat_tac.v000066400000000000000000000206071440504774100206360ustar00rootroot00000000000000(** * kat_tac: decision tactics for KA, KAT, and KAT with Hoare hypotheses *) Require Import boolean ugregex_dec kat_reification kat_untyping kat_completeness. (** * corollary of kat_untyping and kat_completeness: one can decide typed KAT by reasoning about untyped languages *) Section j. Notation Sigma := positive. Variable Pred: nat. Variables src tgt: Sigma -> positive. Corollary kat_untype_weq n m (e f: gregex Pred src tgt n m): gerase e ≡ gerase f <-> e ≡ f. Proof. split. intro H. apply kat_complete_weq. rewrite 2untype_glang. now apply traces.restrict_weq. apply gerase_weq. Qed. End j. (** we can thus refine the reification lemma provided in [kat_reification] *) Corollary kat_weq_dec `{L: laws} f' fs fp n m (e f: @kat_expr X f' fs n m): e_level e + e_level f ≪ BKA -> (let v := vars (e_pls e f) in eqb_kat (gerase (to_gregex v n m e)) (gerase (to_gregex v n m f)) = Some true) -> eval fp n m e ≡ eval fp n m f. Proof. intros Hl H. apply to_gregex_weq. assumption. apply kat_untype_weq, eqb_kat_correct, H. Qed. (** * [kat] tactic, for Kleene algebra with tests *) (** the [kat] tactic solves KAT (in)equalities; it proceeds as follows: 0. possibly convert inequalities into equalities, using [leq_iff_cup]; 1. look for the kat structure and laws, by calling typeclass resolution using ([catch_kat_weq]); 2. call the ML reification tactic -> the equivalence will be checked in OCaml, an a counter-example will be displayed if it does not hold -> this will otherwise produce a sequence of let-in's for the various reification ingredients 3. introduce those ingredients and apply the above reification lemma 4. close the two generated subgoals by reflexivity (the first one corresponds to checking that reified terms did no include forbidden constructions - it should never fail unless there is a bug in reification; the second one corresponds to the execution of the Coa KAT decision algorithm - it should never fail since the equation was already checked in OCaml) *) Lemma catch_kat_weq {X} {L: laws X} n m (x y: X n m): (let L:=L in x ≡ y) -> x ≡ y. Proof. trivial. Qed. (** parametrised tactic to do the work share by [ka], [kat], and [hkat] [b] indicates whether we are in KAT (true) or KA (false) *) Ltac pre_dec b := let L := fresh "L" in intro L; (* L is the kat laws instance *) let tenv := fresh "tenv" in let env := fresh "env" in let penv := fresh "penv" in let lhs := fresh "lhs" in let rhs := fresh "rhs" in ra_kat_reify b; intros tenv env penv ? ? lhs rhs; apply (@kat_weq_dec _ L tenv env penv _ _ lhs rhs); [ (reflexivity || fail 1 "Bug in KAT reification, please report") | (close_by_reflection (Some true) || fail 1 "Not a KAT theorem, but no counter-example can be displayed. Please report.") ]. (** [kat] tactic *) Ltac kat := intros; rewrite ?leq_iff_cup; (apply catch_kat_weq || fail "could not find a KAT structure"); pre_dec true. (** * [ka] tactic, for Kleene algebra *) (** we use the fact that any Kleene algebras with bottom faithfully embed into KAT, using trivial tests *) (* TODO: sync with conservativity once we got it *) Module bka_to_kat. Definition ops (X: monoid.ops) := mk_ops X (fun _ => bool_lattice_ops) (fun _ => ofbool). Lemma laws `{monoid.laws} `{BKA ≪ l}: kat.laws (ops X). Proof. constructor. apply lower_laws. intro. apply lower_lattice_laws. intro. constructor; try discriminate. apply ofbool_leq. apply op_leq_weq_1. intros _ ? ?. apply orb_pls. intros _. 
reflexivity. intros ?. reflexivity. intros ? ? ?. apply andb_dot. Qed. End bka_to_kat. (** the tactic is really similar to [kat], except that we catch the KAT laws using the lemma below, exploiting the above embedding *) Lemma catch_ka_weq {X l} {L: monoid.laws l X} {Hl: BKA ≪ l} n m (x y: X n m): (let L:=@bka_to_kat.laws l X L Hl in @weq (@kar (bka_to_kat.ops X) n m) x y) -> x ≡ y. Proof. trivial. Qed. Ltac ka := intros; rewrite ?leq_iff_cup; (apply catch_ka_weq || fail "could not find a KA structure"); pre_dec false. (** * [hkat] tactic, for KAT with Hoare hypotheses *) (** Hypotheses of the shape [x ≡ 0], called "Hoare hypotheses", can be eliminated in KAT. In other words, the Horn theory of KAT restricted to the clauses where all hypotheses have the above shape reduces to the equational theory of KAT, and is thus decidable. Moreover, some other kinds of hypotheses can be transformed into Hoare ones, and hypotheses of the shape [[c];p ≡ [c]] can also be eliminated. All in all, a non-trivial class of hypotheses can be handled in practice. We perform such an elimination of hypotheses to obtain the [hkat] tactic below. For more details, see Chris Hardin and Dexter Kozen, "On the elimination of hypotheses in Kleene algebra with tests", TR2002-1879, Computer Science Department, Cornell University, October 2002. *) (** converting various kinds of hypotheses to Hoare ones *) Lemma ab_to_hoare `{L: laws} {n} (b c: tst n): b ≡ c -> [b ⊓ !c ⊔ !b ⊓ c] ≦ 0. Proof. intro H. rewrite H. kat. Qed. Lemma ab'_to_hoare `{L: laws} {n} (b c: tst n): b ≦ c -> [b ⊓ !c] ≦ 0. Proof. intro H. rewrite H. kat. Qed. (* note: the following four implications are only complete when p=q *) Lemma bpqc_to_hoare `{L: laws} {n m} (b: tst n) (c: tst m) p q: [b]⋅p ≦ q⋅[c] -> [b]⋅p⋅[!c] ≦ 0. Proof. intro H. rewrite H. kat. Qed. Lemma pbcq_to_hoare `{L: laws} {n m} (b: tst n) (c: tst m) p q: p⋅[b] ≦ [c]⋅q -> [!c]⋅p⋅[b] ≦ 0. Proof. rewrite <-dotA. dual @bpqc_to_hoare. Qed. Lemma qpc_to_hoare `{L: laws} {n m} (c: tst m) (p q: X n m): q ≦ p⋅[c] -> q⋅[!c] ≦ 0. Proof. intro H. rewrite H. kat. Qed. Lemma qcp_to_hoare `{L: laws} {n m} (c: tst m) (p q: X m n): q ≦ [c]⋅p -> [!c]⋅q ≦ 0. Proof. dual @qpc_to_hoare. Qed. (* TOTHINK: understand this modulo A, and possibly go back to the complete cases (note: the last two are equivalent to the first two, but they are not covered by the complete case) *) Lemma cp_c `{L: laws} {n} (c: tst n) (p: X n n): [c]⋅p ≡ [c] -> p ≡ [!c]⋅p+[c]. Proof. intro H. rewrite <-H. kat. Qed. Lemma pc_c `{L: laws} {n} (c: tst n) (p: X n n): p⋅[c] ≡ [c] -> p ≡ p⋅[!c]+[c]. Proof. dual @cp_c. Qed. (** merging Hoare hypotheses *) Lemma join_leq `{lattice.laws} `{CUP ≪ l} (x y z: X): x ≦z -> y ≦z -> x ⊔ y ≦z. Proof. rewrite cup_spec. tauto. Qed. (** eliminating Hoare hypotheses; [u] and [v] are intended to be the universal expressions of the appropriate type *) Lemma elim_hoare_hypotheses_weq `{L: laws} {n m p q} (u: X n p) (v: X q m) (z: X p q) (x y: X n m): z ≦ 0 -> x+u⋅z⋅v ≡ y+u⋅z⋅v -> x ≡y. Proof. rewrite leq_xb_iff. intro Hz. now rewrite Hz, dotx0, dot0x, 2cupxb. Qed. Lemma elim_hoare_hypotheses_leq `{L: laws} {n m p q} (u: X n p) (v: X q m) (z: X p q) (x y: X n m): z ≦ 0 -> x ≦ y+u⋅z⋅v -> x ≦y. Proof. intro Hz. now rewrite Hz, dotx0, dot0x, cupxb. Qed.
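(* usage sketch (hypothetical goal, kept as a comment): once [hkat] is defined below, a single Hoare hypothesis is eliminated automatically, e.g.

     Goal forall `{L: laws} n (a b: tst n) (p: X n n),
       [a]⋅p⋅[!b] ≦ 0 -> [a]⋅p ≡ [a]⋅p⋅[b].
     Proof. hkat. Qed.

   internally, the hypothesis is aggregated, injected into the goal through [elim_hoare_hypotheses_weq] with universal expressions [u^*], and the resulting pure KAT equation is handed over to [kat]. *)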
(** tactic used to aggregate Hoare hypotheses: convert hypotheses into Hoare ones using the above lemas, then merge them into a single one *) Ltac aggregate_hoare_hypotheses := repeat match goal with | H: _ ≡ _ |- _ => apply ab_to_hoare in H || (rewrite (cp_c _ _ H); clear H) || (rewrite (pc_c _ _ H); clear H) || apply weq_spec in H as [? ?] end; repeat match goal with | H: _ ≦ _ |- _ => apply ab'_to_hoare in H || apply bpqc_to_hoare in H || apply pbcq_to_hoare in H || apply qcp_to_hoare in H || apply qpc_to_hoare in H | H: _ ≦ 0, H': _ ≦ 0 |- _ => apply (join_leq _ _ _ H') in H; clear H' end. (** final [hkat] tactic: aggregate Hoare hypotheses, possibly transform inequalities into equalities, get the alphabet to construct the universal expression, eliminate Hoare hypotheses using the above lemma, and finally call the [kat] tactic *) Ltac hkat := intros; aggregate_hoare_hypotheses; rewrite ?leq_iff_cup; (apply catch_kat_weq || fail "could not find a KAT structure"); let L := fresh "L" in intro L; let u := fresh "u" in ((ra_get_kat_alphabet; intro u; eapply (elim_hoare_hypotheses_weq (u^*) (u^*)); [eassumption|]) || fail "typed hkat is not supported yet"); subst u; revert L; pre_dec true. relation-algebra-v.1.7.9/theories/kat_untyping.v000066400000000000000000000054441440504774100217460ustar00rootroot00000000000000(** * kat_untyping: untyping theorem for KAT *) (** We prove a strong untyping theorem for KAT: - types can be erased (as in the untyping theorem in [untyping]) - predicate variables of distinct types can be merged The proofs are quite simple since we can perform them at the level of guarded string languages: unlike for KA, we proved typed completeness of KAT w.r.t. these models. *) Require Import kat gregex ugregex ordinal positives glang. Set Implicit Arguments. Section s. Variable Pred: nat. Notation Atom := (ord (pow2 Pred)). Notation Sigma := positive. Variables src tgt: Sigma -> positive. Notation pred := (ord Pred). Notation gregex := (gregex_kat_ops Pred src tgt). Notation ugregex := (ugregex_monoid_ops Pred ugregex_tt ugregex_tt). Notation glang := (@gregex.lang Pred src tgt). Notation uglang := (@ugregex.lang Pred). Notation typed := (@typed' Atom src tgt). (** type-erasing function on extended regular expressions *) Fixpoint gerase n m (e: gregex n m): ugregex := match e with | g_zer _ _ _ => 0 | g_prd _ _ p => u_prd p | g_pls e f => gerase e + gerase f | g_dot e f => gerase e ⋅ gerase f | g_itr e => (gerase e)^+ | g_var i => u_var _ i end. (** charaterisation of the guarded string language of erased epressions *) Lemma uglang_gerase n m (e: gregex n m): uglang (gerase e) ≡ eval (fun _ => traces_tt) (fun _ => @lsyntax.e_var _) tsingle e. Proof. induction e; simpl gerase. apply lang_0. simpl. rewrite lsyntax.eval_var. reflexivity. reflexivity. now apply cup_weq. now apply dot_weq. now apply itr_weq. Qed. Corollary gerase_weq n m: Proper (weq ==> weq) (@gerase n m). Proof. intros ? ? H. simpl. unfold u_weq. rewrite 2uglang_gerase. apply (H _ _ _). Qed. (** the a priori untyped guarded string language of a typed gregex is necessarily typed *) Lemma typed_uglang_gerase n m (e: gregex n m): typed n m (uglang (gerase e)). Proof. induction e. intros [|] H. discriminate. destruct H. apply typed'_inj. apply typed'_single. revert IHe1 IHe2. apply typed'_cup. revert IHe1 IHe2. apply typed'_dot. revert IHe. apply typed'_itr. Qed. (** we can thus recover the typed language out of the untyped one *) Notation restrict := (restrict src tgt). 
Theorem untype_glang n m (e: gregex n m): glang e ≡ restrict n m (uglang (gerase e)). Proof. symmetry. induction e. setoid_rewrite lang_0. apply restrict_0. setoid_rewrite restrict_inj. simpl. unfold ttraces_weq. simpl. now rewrite lsyntax.eval_var. apply restrict_single. etransitivity. 2: apply cup_weq; eassumption. apply restrict_pls. etransitivity. 2: apply dot_weq; eassumption. apply restrict_dot; apply typed_uglang_gerase. etransitivity. 2: apply itr_weq; eassumption. apply restrict_itr; apply typed_uglang_gerase. Qed. End s. relation-algebra-v.1.7.9/theories/kleene.v000066400000000000000000000162411440504774100204720ustar00rootroot00000000000000(** kleene: simple facts about Kleene star *) (** and strict iteration *) Require Export monoid. Set Implicit Arguments. Unset Strict Implicit. (** * properties of Kleene star *) (** additional induction schemes *) Lemma str_ind_r' `{laws} `{STR ≪ l} n m (x: X n n) (y z: X m n): y ≦ z -> z⋅x ≦ z -> y⋅x^* ≦ z. Proof. dual @str_ind_l'. Qed. Lemma str_ind_r1 `{laws} `{STR ≪ l} n (x z: X n n): 1 ≦ z -> z ⋅ x ≦ z -> x ^* ≦ z. Proof. dual @str_ind_l1. Qed. Lemma str_unfold_r `{laws} `{KA ≪ l} n (x: X n n): x^* ≡ 1 + x^* ⋅ x. Proof. dual @str_unfold_l. Qed. (** bisimulation rules *) Lemma str_move_l `{laws} `{STR ≪ l} n m (x: X n m) y z: x ⋅ y ≦ z ⋅ x -> x ⋅ y^* ≦ z^* ⋅ x. Proof. intro E. apply str_ind_r'. now rewrite <-str_refl, dot1x. rewrite <-str_snoc at 2. now rewrite <-dotA, E, dotA. Qed. Lemma str_move_r `{laws} `{STR ≪ l} n m (x: X m n) y z: y ⋅ x ≦ x ⋅ z -> y^* ⋅ x ≦ x ⋅ z^*. Proof. dual @str_move_l. Qed. Lemma str_move `{laws} `{STR ≪ l} n m (x: X n m) y z: x ⋅ y ≡ z ⋅ x -> x ⋅ y^* ≡ z^* ⋅ x. Proof. intro. apply antisym. apply str_move_l. now apply weq_leq. apply str_move_r. now apply weq_geq. Qed. Lemma str_dot `{laws} `{STR ≪ l} n m (x: X n m) y: x ⋅ (y ⋅ x)^* ≡ (x ⋅ y)^* ⋅ x. Proof. apply str_move, dotA. Qed. (** [str] is uniquely determined *) Lemma str_unique `{laws} `{STR ≪ l} n (a x: X n n): 1 ≦x -> a⋅x ≦ x -> (forall y: X n n, a⋅y ≦y -> x⋅y ≦y) -> a^* ≡ x. Proof. intros H1 H2 H3. apply antisym. now apply str_ind_l1. rewrite <-(dotx1 x), (str_refl a). apply H3. apply str_cons. Qed. Lemma str_unique' `{laws} `{KA ≪ l} n (a x: X n n): 1+a⋅x ≦ x -> (forall y: X n n, a⋅y ≦y -> x⋅y ≦y) -> a^* ≡ x. Proof. rewrite cup_spec. intros []. now apply str_unique. Qed. (** value of [str] on constants *) Lemma str1 `{laws} `{STR ≪ l} n: 1^* ≡ one n. Proof. apply str_unique. reflexivity. now rewrite dotx1. trivial. Qed. Lemma str0 `{laws} `{STR+BOT ≪ l} n: 0^* ≡ one n. Proof. apply str_unique. reflexivity. rewrite dot0x. lattice. now intros; rewrite dot1x. Qed. Lemma strtop `{laws} `{STR+TOP ≪ l} n: top^* ≡ @top (mor n n). Proof. apply leq_tx_iff. apply str_ext. Qed. (** transitivity of starred elements *) Lemma str_trans `{laws} `{STR ≪ l} n (x: X n n): x^* ⋅ x^* ≡ x^*. Proof. apply antisym. apply str_ind_l; apply str_cons. rewrite <-str_refl at 2. now rewrite dot1x. Qed. (** [str] is involutive *) Lemma str_invol `{laws} `{STR ≪ l} n (x: X n n): x^*^* ≡ x^*. Proof. apply antisym. apply str_ind_l1. apply str_refl. now rewrite str_trans. apply str_ext. Qed. (** (de)nesting rule *) Lemma str_pls `{laws} `{KA ≪ l} n (x y: X n n): (x+y)^* ≡ x^*⋅(y⋅x^*)^*. Proof. apply str_unique. rewrite <-2str_refl. now rewrite dotx1. rewrite dotplsx. apply leq_cupx. now rewrite dotA, str_cons. rewrite <-(str_refl x) at 3. now rewrite dot1x, dotA, str_cons. intros z Hz. rewrite dotplsx in Hz. apply cup_spec in Hz as [Hxz Hyz]. rewrite <-dotA. apply str_ind_l'. 
2: assumption. apply str_ind_l. rewrite <- Hyz at 2. rewrite <-dotA. apply dot_leq. reflexivity. now apply str_ind_l. Qed. Lemma str_pls' `{laws} `{KA ≪ l} n (x y: X n n): (x+y)^* ≡ (x^*⋅y)^*⋅x^*. Proof. rewrite str_pls. apply str_dot. Qed. Lemma str_pls_str `{laws} `{KA ≪ l} n (x y: X n n): (x+y)^* ≡ (x^* + y^* )^* . Proof. symmetry. rewrite str_pls, str_invol, <-str_pls. rewrite cupC. now rewrite str_pls, str_invol, <-str_pls, cupC. Qed. (** links with reflexive closure and reflexive elements *) Lemma str_pls1x `{laws} `{KA ≪ l} n (x: X n n): (1+x)^* ≡ x^*. Proof. now rewrite str_pls, str1, dot1x, dotx1. Qed. Lemma str_weq1 `{laws} `{KA ≪ l} n (x y: X n n): 1+x ≡ 1+y -> x^* ≡ y^*. Proof. rewrite <-(str_pls1x x), <-(str_pls1x y). apply str_weq. Qed. Lemma str_dot_refl `{laws} `{KA ≪ l} n (x y: X n n): 1 ≦x -> 1 ≦y -> (x⋅y)^* ≡ (x+y)^*. Proof. intros Hx Hy. apply antisym. - apply str_ind_l1. apply str_refl. rewrite <-2str_cons at 2. rewrite dotA. repeat apply dot_leq; lattice. - apply str_ind_l1. apply str_refl. rewrite <-str_cons at 2. apply dot_leq. 2: reflexivity. apply leq_cupx. now rewrite <-Hy, dotx1. now rewrite <-Hx, dot1x. Qed. (** * counterparts for strict iteration *) Lemma itr_ind_l `{laws} `{STR ≪ l} n m (x: X n n) (y z: X n m): x⋅y ≦ z -> x⋅z ≦ z -> x^+⋅y ≦ z. Proof. intros xy xz. rewrite itr_str_r, <-dotA, xy. now apply str_ind_l. Qed. Lemma itr_ind_l1 `{laws} `{STR ≪ l} n (x z: X n n): x ≦ z -> x⋅z ≦ z -> x^+ ≦ z. Proof. intros xz xz'. rewrite <-dotx1. apply itr_ind_l. now rewrite dotx1. assumption. Qed. Lemma itr_ind_r `{laws} `{STR ≪ l} n m (x: X n n) (y z: X m n): y⋅x ≦ z -> z⋅x ≦ z -> y⋅x^+ ≦ z. Proof. dual @itr_ind_l. Qed. #[export] Instance itr_leq `{laws} `{STR ≪ l} n: Proper (leq ==> leq) (itr n). Proof. intros x y E. now rewrite 2itr_str_l, E. Qed. #[export] Instance itr_weq `{laws} `{STR ≪ l} n: Proper (weq ==> weq) (itr n) := op_leq_weq_1. Lemma itr1 `{laws} `{STR ≪ l} n: 1^+ ≡ one n. Proof. now rewrite itr_str_l, str1, dot1x. Qed. Lemma itr0 `{laws} `{STR+BOT ≪ l} n: 0^+ ≡ zer n n. Proof. now rewrite itr_str_l, str0, dotx1. Qed. Lemma itr_ext `{laws} `{STR ≪ l} n (x: X n n): x ≦ x^+. Proof. now rewrite itr_str_l, <-str_refl, dotx1. Qed. Lemma itrtop `{laws} `{STR+TOP ≪ l} n: top^+ ≡ @top (mor n n). Proof. apply leq_tx_iff. apply itr_ext. Qed. Lemma itr_cons `{laws} `{STR ≪ l} n (x: X n n): x⋅x^+ ≦ x^+. Proof. rewrite itr_str_l. now rewrite str_cons at 1. Qed. Lemma itr_snoc `{laws} `{STR ≪ l} n (x: X n n): x^+⋅x ≦ x^+. Proof. dual @itr_cons. Qed. Lemma itr_pls_itr `{laws} `{KA ≪ l} n (x y: X n n): (x+y)^+ ≡ (x^+ + y^+)^+. Proof. apply antisym. apply itr_leq. now rewrite <-2itr_ext. apply itr_ind_l1. apply leq_cupx; apply itr_leq; lattice. rewrite dotplsx. apply leq_cupx; apply itr_ind_l; rewrite <-itr_cons at 2; apply dot_leq; lattice. Qed. Lemma itr_trans `{laws} `{STR ≪ l} n (x: X n n): x^+ ⋅ x^+ ≦ x^+. Proof. apply itr_ind_l; apply itr_cons. Qed. Lemma itr_invol `{laws} `{STR ≪ l} n (x: X n n): x^+^+ ≡ x^+. Proof. apply antisym. apply itr_ind_l1. reflexivity. apply itr_trans. apply itr_ext. Qed. Lemma itr_move `{laws} `{STR ≪ l} n m (x: X n m) y z: x ⋅ y ≡ z ⋅ x -> x ⋅ y^+ ≡ z^+ ⋅ x. Proof. intro E. rewrite itr_str_l, dotA, E, <-dotA, str_move by eassumption. now rewrite dotA, <-itr_str_l. Qed. Lemma itr_dot `{laws} `{STR ≪ l} n m (x: X n m) y: x⋅(y⋅x)^+ ≡ (x⋅y)^+⋅x. Proof. apply itr_move, dotA. Qed. (** this lemma is used for KAT completeness *) Lemma itr_aea `{laws} `{STR ≪ l} n (a e: X n n): a⋅a ≡a -> (a⋅e)^+⋅a ≡ (a⋅e⋅a)^+. Proof. intro Ha. 
rewrite (itr_str_l (a⋅e⋅a)), <-dotA, str_dot, (dotA a a), Ha. now rewrite dotA, <-itr_str_l. Qed. (** * converse and iteration commute *) Lemma cnvstr `{laws} `{CNV+STR ≪ l} n (x: X n n): x^*° ≡ x°^*. Proof. apply antisym. apply cnvstr_. cnv_switch. now rewrite cnvstr_, cnv_invol. Qed. Lemma cnvitr `{laws} `{CNV+STR ≪ l} n (x: X n n): x^+° ≡ x°^+. Proof. now rewrite itr_str_l, itr_str_r, cnvdot, cnvstr. Qed. relation-algebra-v.1.7.9/theories/lang.v000066400000000000000000000145001440504774100201440ustar00rootroot00000000000000(** * lang: the (flat) model of languages of finite words *) Require Import List. Require Export prop. Require Import monoid. (** singleton type for the objects of this flat structure *) CoInductive lang_unit := lang_tt. Section l. Variable X: Type. (** a language on [X] is a predicate on finite words with letters in [X] *) Definition lang := list X -> Prop. Implicit Types x y z: lang. Implicit Types n m p q: lang_unit. Notation tt := lang_tt. (** * Languages as a lattice *) (** lattice operations and laws are obtained for free, by pointwise lifting of the [Prop] lattice *) Canonical Structure lang_lattice_ops := lattice.mk_ops lang leq weq cup cap neg bot top. Global Instance lang_lattice_laws: lattice.laws (BDL+STR+DIV) lang_lattice_ops := lower_lattice_laws (H:=pw_laws _). (** * Languages a residuated Kleene lattice *) (** ** language operations *) (** language concatenation *) Definition lang_dot n m p x y: lang := fun w => exists2 u, x u & exists2 v, y v & w=u++v. (** languages left and right residuals *) Definition lang_ldv n m p x y: lang := fun w => forall u, x u -> y (u++w). Definition lang_rdv n m p x y: lang := fun w => forall u, x u -> y (w++u). (** language reduced to the empty word *) Definition lang_one n: lang := eq nil. (** language of reversed words *) Definition lang_cnv n m x: lang := fun w => x (rev w). (** finite iterations of a language (with a slight generalisation: [y⋅x^n]) *) Fixpoint iter i y x: lang := match i with O => y | S i => lang_dot tt tt tt x (iter i y x) end. (** strict iteration: union of finite iterations, starting with [x] *) Definition lang_itr n x: lang := fun w => exists i, iter i x x w. (** Kleene star: union of finite iterations, starting with [1] *) Definition lang_str n x: lang := fun w => exists i, iter i (lang_one n) x w. (** packing all operations in a canonical structure *) Canonical Structure lang_ops := mk_ops lang_unit _ lang_dot lang_one lang_itr lang_str lang_cnv lang_ldv lang_rdv. (** shorthand for [lang], when a morphism is expected *) Notation lang' := (lang_ops tt tt). (** ** languages form a residuated Kleene lattice *) (** auxiliary lemmas, to establish that languages form a residuated Kleene lattice *) Lemma lang_dotA n m p q x y z: lang_dot n m q x (lang_dot m p q y z) ≡ lang_dot n p q (lang_dot n m p x y) z. Proof. intro w. split. intros [u Hu [v [u' Hu' [v' Hv' ->]] ->]]. repeat eexists; eauto. apply ass_app. intros [u [u' Hu' [v' Hv' ->]] [v Hv ->]]. repeat eexists; eauto. apply app_ass. Qed. Lemma lang_dotx1 x: lang_dot tt tt tt x (lang_one tt) ≡ x. Proof. intro w. split. intros [u Hu [v <- ->]]. now rewrite <-app_nil_end. intro Hw. exists w; trivial. exists nil. reflexivity. apply app_nil_end. Qed. Lemma lang_dot_leq n m p: Proper (leq ==> leq ==> leq) (lang_dot n m p). Proof. intros x y H x' y' H' w [u Hu [v Hv Hw]]. exists u. apply H, Hu. exists v. apply H', Hv. assumption. Qed. Lemma lang_iter_S i x: iter i x x ≡ iter (S i) (lang_one tt) x. Proof. induction i; simpl iter. symmetry. 
apply lang_dotx1. now apply (op_leq_weq_2 (Hf:=@lang_dot_leq _ _ _)). Qed. (** languages form a residuated Kleene lattice (we do not have an allegory, since the converse operation does not satisfy the law [x ≦x⋅x°⋅x]) *) Global Instance lang_laws: laws (BDL+STR+DIV) lang_ops. Proof. constructor; (try (intro; discriminate)); (try now left); repeat right; intros. apply lower_lattice_laws. apply lang_dotA. intro w. split. now intros [u <- [v Hv ->]]. intro Hw. exists nil. reflexivity. now exists w. apply lang_dotx1. intros w Hw. now exists O. intros w [u Hu [v [i Hi] ->]]. exists (S i). repeat eexists; eauto. intros w [u [i Hu] [v Hv ->]]. revert u Hu. induction i. now intros u <-. intros u [u' Hu' [u'' Hu'' ->]]. apply H0. rewrite app_ass. eexists; eauto. intro w. split. intros [i H']. apply lang_iter_S in H' as [? ? [? ? ?]]. repeat eexists; eauto. intros [? ? [? [i H'] ?]]. exists i. apply lang_iter_S. repeat eexists; eauto. split; intros E w. intros [u xu [v xv ->]]. now apply E. intros Hw u Hu. apply E. repeat eexists; eauto. split; intros E w. intros [u xu [v xv ->]]. now apply E. intros Hw u Hu. apply E. repeat eexists; eauto. Qed. (** empty word property for concatenated languages *) Lemma lang_dot_nil (L L': lang'): (L⋅L')%ra nil <-> L nil /\ L' nil. Proof. split. 2:firstorder. intros [h H [k K E]]. apply eq_sym, List.app_eq_nil in E. intuition congruence. Qed. (** concatenation of singleton languages *) Lemma eq_app_dot u v: eq (u++v) ≡ (eq u: lang') ⋅ (eq v: lang'). Proof. split. intros <-. repeat eexists; eauto. now intros [? <- [? <- <-]]. Qed. (** * Language derivatives *) Definition lang_deriv a (L: lang'): lang' := fun w => L (a::w). Lemma lang_deriv_0 a: lang_deriv a 0 ≡ 0. Proof. firstorder. Qed. Lemma lang_deriv_1 a: lang_deriv a 1 ≡ 0. Proof. compute. intuition discriminate. Qed. Lemma lang_deriv_pls a (H K: lang'): lang_deriv a (H+K) ≡ lang_deriv a H + lang_deriv a K. Proof. intro. now apply cup_weq. Qed. Lemma lang_deriv_dot_1 a (H K: lang'): H nil -> lang_deriv a (H⋅K) ≡ lang_deriv a H ⋅ K + lang_deriv a K. Proof. intros Hnil w; simpl; unfold lang_deriv, lang_dot. split. intros [[|b u] Hu [v Kv E]]; simpl in E. right. now rewrite E. injection E; intros -> <-; clear E. left. repeat eexists; eauto. intros [[u Hu [v Kv ->]]|Ka]; repeat eexists; eauto. Qed. Lemma lang_deriv_dot_2 a (H K: lang'): ~ (H nil) -> lang_deriv a (H⋅K) ≡ lang_deriv a H ⋅ K. Proof. intros Hnil w; simpl; unfold lang_deriv, lang_dot. split. intros [[|b u] Hu [v Kv E]]; simpl in E. tauto. injection E; intros -> <-; clear E. repeat eexists; eauto. intros [u Hu [v Kv ->]]; repeat eexists; eauto. Qed. Lemma lang_deriv_str a (H: lang'): lang_deriv a (H^*) ≡ lang_deriv a H ⋅ H^*. Proof. intro w. split. intros [n Hn]. induction n in a, w, Hn; simpl in Hn. discriminate. destruct Hn as [[|b v] Hv [u Hu Hw]]; simpl in Hw. rewrite <- Hw in Hu. apply IHn, Hu. injection Hw; intros -> ->; clear Hw. repeat eexists; eauto. intros [u Hu [v [n Hv] ->]]. exists (S n). repeat eexists; eauto. Qed. End l. Arguments lang_deriv {X}. Notation lang' X := ((lang_ops X) lang_tt lang_tt). Ltac fold_lang := ra_fold lang_ops lang_tt. relation-algebra-v.1.7.9/theories/lattice.v000066400000000000000000000524371440504774100206630ustar00rootroot00000000000000(** * lattice: from preorders to Boolean lattices *) (** We define here all structures ranging from preorders to Boolean lattice (e.g., sup-semilattices, inf-semilattices with bottom element, bounded lattices, etc...). *) Require Export common level. Declare Scope ra_terms. 
Delimit Scope ra_terms with ra. Open Scope ra_scope. Open Scope ra_terms. (** * Lattice operations *) (** The following class packages all operations that can possibly come with a preorder: a supremum, a bottom element, a negation, etc... We use dummy values when working in structures lacking some operations. We use a "Class" rather than a "Structure" just to make the first argument of all the corresponding projections implicit. Instances of this class will be declared as Canonical structures rather than typeclasses instances, so that this first argument will actually be inferred by unification. (except in the abstract and unconstrained case, where typeclass resolution will be called since unification will keep the hole unconstrained) *) Universe L. Class ops := mk_ops { car: Type@{L}; (** carrier *) leq: relation car; (** preorder *) weq: relation car; (** underlying equality *) cup: car -> car -> car; (** supremum *) cap: car -> car -> car; (** infimum *) neg: car -> car; (** Boolean negation *) bot: car; (** bottom element *) top: car (** top element *) }. Arguments car : clear implicits. Coercion car: ops >-> Sortclass. Bind Scope ra_terms with car. (** Hints for simpl *) Arguments weq {ops} (x y)%ra / : simpl nomatch. Arguments leq {ops} (x y)%ra / : simpl nomatch. Arguments cup {ops} (x y)%ra / : simpl nomatch. Arguments cap {ops} (x y)%ra / : simpl nomatch. Arguments neg {ops} (x)%ra / : simpl nomatch. Arguments bot {ops} / : simpl nomatch. Arguments top {ops} / : simpl nomatch. (** Notations *) (** ≦ : \leqq (company coq) or LESS THAN OVER EQUAL TO (was '<==') ≡ : \equiv (company coq) or IDENTIAL TO (was '==') ⊔ : \sqcup (company coq) or SQUARE CUP (was '\cup') ⊓ : \sqcap (company coq) or SQUARE CAP (was '\cap') *) Infix "≦" := leq (at level 79): ra_scope. Infix "≡" := weq (at level 79): ra_scope. Infix "⊔" := cup (left associativity, at level 50): ra_terms. Infix "⊓" := cap (left associativity, at level 40): ra_terms. Notation "! x" := (neg x) (right associativity, at level 20, format "! x"): ra_terms. (** * Lattice laws (axioms) *) (** [laws l X] provides the laws corresponding to the various operations of [X], provided these operations belong to the level [l]. For instance, the specification of the supremum ([cup]) is available only if the level contains [CUP]. Note that [leq] is always require to be a preorder, and [weq] is always required to be the kernel of that preorder. Also note that some axioms ([leq_bx_], [leq_x_t_]) are present unless some operations are present in [l]. They end with an underscore, they are actually derivable from the other axioms when the additional operations belong to [l]. They are reformulated without the escaping disjunction below, under the same name but without the ending underscore (see [leq_bx], [leq_x_t] below). The field name [cupcap_] also ends with an underscore, this is because it's statement is an inequality, for which the converse inequality is derivable. It is thus later reformulated as an equality (see lemma [cupcap] below). Unlike for operations ([ops]), laws are actually inferred by typeclass resolution. 
*) Class laws (l: level) (X: ops) := { leq_PreOrder:> PreOrder leq; weq_spec : forall x y , x ≡ y <-> x ≦ y /\ y ≦ x; cup_spec {Hl:CUP ≪ l}: forall x y z, x ⊔ y ≦ z <-> x ≦ z /\ y ≦ z; cap_spec {Hl:CAP ≪ l}: forall x y z, z ≦ x ⊓ y <-> z ≦ x /\ z ≦ y; leq_bx_ {Hl:BOT ≪ l}: NEG+CAP ≪ l \/ forall x, bot ≦ x; leq_xt_ {Hl:TOP ≪ l}: NEG+CUP ≪ l \/ forall x, x ≦ top; cupcap_ {Hl:DL ≪ l}: forall x y z, (x ⊔ y) ⊓ (x ⊔ z) ≦ x ⊔ (y ⊓ z); capneg {Hl:NEG+CAP+BOT ≪ l}: forall x, x ⊓ !x ≡ bot; cupneg {Hl:NEG+CUP+TOP ≪ l}: forall x, x ⊔ !x ≡ top }. (** * Properties of the preorder ([≦]) and it kernel ([≡]) *) Lemma antisym `{laws}: forall x y, x ≦ y -> y ≦ x -> x ≡ y. Proof. intros. apply weq_spec; split; assumption. Qed. Lemma from_below `{laws}: forall x y, (forall z, z ≦ x <-> z ≦ y) -> x ≡ y. Proof. intros x y E. apply antisym; apply E; reflexivity. Qed. Lemma from_above `{laws}: forall x y, (forall z, x ≦ z <-> y ≦ z) -> x ≡ y. Proof. intros x y E. apply antisym; apply E; reflexivity. Qed. (** Trivial hints *) #[export] Hint Extern 0 (_ ≦ _) => reflexivity : core. #[export] Hint Extern 0 (_ ≡ _) => reflexivity : core. (** Instances to be used by the setoid_rewrite machinery *) #[export] Instance weq_Equivalence `{laws}: Equivalence weq. Proof. constructor. intro. now rewrite weq_spec. intros ? ?. rewrite 2weq_spec. tauto. intros x y z. rewrite 3weq_spec. intuition; etransitivity; eassumption. Qed. #[export] Instance weq_rel {ops} : RewriteRelation (@weq ops) := {}. #[export] Instance weq_leq `{laws}: subrelation weq leq. Proof. intros x y E. apply weq_spec in E as [? ?]. assumption. Qed. #[export] Instance weq_geq `{laws}: subrelation weq (flip leq). Proof. intros x y E. apply weq_spec in E as [? ?]. assumption. Qed. #[export] Instance leq_weq_iff `{laws}: Proper (weq ==> weq ==> iff) leq. Proof. intros x y E x' y' E'. split; intro L. now rewrite <-E, <-E'. now rewrite E, E'. Qed. (** Utility lemmas, to deduce that a function preserves [weq] once we proved that it preserves [leq], these are extremely useful in practice *) Lemma op_leq_weq_1 {h k X Y} {HX: laws h X} {HY: laws k Y} {f: X -> Y} {Hf: Proper (leq ==> leq) f}: Proper (weq ==> weq) f. Proof. intros x y. rewrite 2weq_spec. intro E; split; apply Hf; apply E. Qed. Lemma op_leq_weq_2 {h k l X Y Z} {HX: laws h X} {HY: laws k Y} {HZ: laws l Z} {f: X -> Y -> Z} {Hf: Proper (leq ==> leq ==> leq) f}: Proper (weq ==> weq ==> weq) f. Proof. intros x y E x' y' E'. rewrite weq_spec in E. rewrite weq_spec in E'. apply antisym; apply Hf; (apply E || apply E'). Qed. (** Additional hints, to speedup typeclass resolution *) #[export] Instance leq_Reflexive `{laws}: Reflexive leq |1. Proof. eauto with typeclass_instances. Qed. #[export] Instance leq_Transitive `{laws}: Transitive leq |1. Proof. eauto with typeclass_instances. Qed. #[export] Instance weq_Reflexive `{laws}: Reflexive weq |1. Proof. eauto with typeclass_instances. Qed. #[export] Instance weq_Transitive `{laws}: Transitive weq |1. Proof. eauto with typeclass_instances. Qed. #[export] Instance weq_Symmetric `{laws}: Symmetric weq |1. Proof. eauto with typeclass_instances. Qed. (** We declare most projections as Opaque for typeclass resolution: this saves a lot of compilation time *) (* NOTE: declaring [weq] as opaque for typeclasses also saves some time, but this is problematic with the [Prop] instances, for which we often need [weq=iff] to be used by typeclass resolution *) #[export] Typeclasses Opaque (* weq *) leq cup cap neg bot top. 
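(** for instance (an illustrative sketch, with a name of our own): any
    monotone unary operation on a single structure automatically preserves
    [≡]; this is exactly how the [Proper] instances for [⊔] and [⊓] are
    obtained below *)
Example op_leq_weq_1_sketch `{laws} (f: X -> X) {Hf: Proper (leq ==> leq) f}:
  Proper (weq ==> weq) f := op_leq_weq_1.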
(** * Basic properties of [⊔], [⊓], [bot], and [top] *) Lemma leq_cupx `{laws} `{CUP ≪ l}: forall x y z, x ≦ z -> y ≦ z -> x ⊔ y ≦ z. Proof. intros. apply cup_spec. split; assumption. Qed. Lemma leq_xcup `{laws} `{CUP ≪ l}: forall x y z, z ≦ x \/ z ≦ y -> z ≦ x ⊔ y. Proof. intros x y z. assert (C:= cup_spec x y (x ⊔ y)). intros [E|E]; rewrite E; apply C; reflexivity. Qed. Lemma leq_xcap `{laws} `{CAP ≪ l}: forall x y z, z ≦ x -> z ≦ y -> z ≦ x ⊓ y. Proof. intros. apply cap_spec. split; assumption. Qed. Lemma leq_capx `{laws} `{CAP ≪ l}: forall x y z, x ≦ z \/ y ≦ z -> x ⊓ y ≦ z. Proof. intros x y z. assert (C:= cap_spec x y (x ⊓ y)). intros [E|E]; rewrite <- E; apply C; reflexivity. Qed. Lemma leq_bx `{L: laws} {Hl:BOT ≪ l}: forall x, bot ≦ x. Proof. destruct leq_bx_ as [Hl'|H]. 2: apply H. intro x. rewrite <-(capneg x). apply leq_capx. left. reflexivity. Qed. Lemma leq_xb_iff `{L: laws} {Hl:BOT ≪ l}: forall x, x ≦ bot <-> x ≡ bot. Proof. split; intro. apply antisym. assumption. apply leq_bx. now apply weq_leq. Qed. Lemma leq_xt `{L: laws} {Hl:TOP ≪ l}: forall x, x ≦ top. Proof. destruct leq_xt_ as [Hl'|H]. 2: apply H. intro x. rewrite <-(cupneg x). apply leq_xcup. left. reflexivity. Qed. Lemma leq_tx_iff `{L: laws} {Hl:TOP ≪ l}: forall x, top ≦ x <-> x ≡ top. Proof. split; intro. apply antisym. apply leq_xt. assumption. now apply weq_leq. Qed. (** * Subtyping / weakening *) (** laws that hold at any level [h] hold for all level [k ≪ h] *) Lemma lower_lattice_laws {h k} {X} {H: laws h X} {le: k ≪ h}: laws k X. Proof. constructor; try solve [ apply H | intro; apply H; eauto using lower_trans ]. intro. right. eapply @leq_bx. apply H. eauto using lower_trans. intro. right. eapply @leq_xt. apply H. eauto using lower_trans. Qed. (** * Solving (in)equations in non distributive lattices *) (** simple tactic to solve lattice (in)equations, using a basic focused proof search algorithm *) Ltac lattice := let rec async := (* idtac "async"; *) solve [ apply leq_cupx; async | apply leq_xcap; async | apply leq_bx | apply leq_xt | sync_l false || sync_r false ] with sync_l b := (* idtac "sync_l"; *) solve [ reflexivity | assumption | apply leq_capx; ((left; sync_l true) || (right; sync_l true)) | match b with true => async end ] with sync_r b := (* idtac "sync_r"; *) solve [ reflexivity | assumption | apply leq_xcup; ((left; sync_r true) || (right; sync_r true)) | match b with true => async end ] in (try apply antisym); async || fail "not a lattice theorem". (** extension of the above tactic so that it also tries to exploit hypotheses in a more agressive way *) Ltac hlattice := repeat match goal with | H: _ ≡ _ |- _ => apply weq_spec in H as [? ?] | H: _ ⊔ _ ≦ _ |- _ => apply cup_spec in H as [? ?] | H: _ ≦ _ ⊓ _ |- _ => apply cap_spec in H as [? ?] end; lattice. (** * Reasoning by duality *) (** dual lattice operations: we reverse the preorder, and swap cup with cap, and resp bot with top) *) Definition dual (X: ops) := {| leq := flip leq; weq := weq; cup := cap; cap := cup; neg := neg; bot := top; top := bot |}. Lemma capcup_ `{laws} `{DL ≪ l}: forall x y z, x ⊓ (y ⊔ z) ≦ (x ⊓ y) ⊔ (x ⊓ z). Proof. intros. rewrite <- cupcap_. apply leq_xcap. lattice. transitivity (z ⊔ x ⊓ y). 2: lattice. rewrite <- cupcap_. lattice. Qed. Ltac inverse_lower l Hl := revert Hl; clear; destruct l; unfold lower, merge; simpl; rewrite ?landb_spec; tauto. (** laws on a given set of operations can be transferred to the dual set of operations *) Lemma dual_laws l (X: ops): laws (level.dual l) X -> laws l (dual X). Proof. 
intro H. constructor; try (destruct l; apply H). constructor. apply H. intros x y z H1 H2. revert H2 H1. simpl. apply H. intros x y. simpl. rewrite weq_spec. tauto. intro. simpl. eapply @capcup_. apply H. inverse_lower l Hl. intro. simpl. eapply @cupneg. apply H. inverse_lower l Hl. intro. simpl. eapply @capneg. apply H. inverse_lower l Hl. Qed. (** this gives us a tactic to prove properties by lattice duality *) Lemma dualize {h} {P: ops -> Prop} (L: forall l X, laws l X -> h ≪ l -> P X) {l X} {H: laws l X} {Hl:level.dual h ≪ l}: P (dual X). Proof. apply L with (level.dual l). apply dual_laws. destruct l; apply H. revert Hl. rewrite 2lower_spec. destruct l; destruct h; simpl. tauto. Qed. Ltac dual x := apply (dualize x). (** * [(⊔,bot)] forms a commutative, idempotent monoid *) Lemma cupA `{laws} `{CUP ≪ l}: forall x y z, x ⊔ (y ⊔ z) ≡ (x ⊔ y) ⊔ z. Proof. intros. lattice. Qed. Lemma cupC `{laws} `{CUP ≪ l}: forall x y, x ⊔ y ≡ y ⊔ x. Proof. intros. lattice. Qed. Lemma cupI `{laws} `{CUP ≪ l}: forall x, x ⊔ x ≡ x. Proof. intros. lattice. Qed. Lemma cupbx `{laws} `{CUP+BOT ≪ l}: forall x, bot ⊔ x ≡ x. Proof. intros. lattice. Qed. Lemma cupxb `{laws} `{CUP+BOT ≪ l}: forall x, x ⊔ bot ≡ x. Proof. intros. lattice. Qed. Lemma cuptx `{laws} `{CUP+TOP ≪ l}: forall x, top ⊔ x ≡ top. Proof. intros. lattice. Qed. Lemma cupxt `{laws} `{CUP+TOP ≪ l}: forall x, x ⊔ top ≡ top. Proof. intros. lattice. Qed. Lemma leq_cup_l `{laws} `{CUP ≪ l} x y: x ≦ x ⊔ y. Proof. lattice. Qed. Lemma leq_cup_r `{laws} `{CUP ≪ l} x y: y ≦ x ⊔ y. Proof. lattice. Qed. #[export] Instance cup_leq `{laws} `{CUP ≪ l}: Proper (leq ==> leq ==> leq) cup. Proof. intros x x' Hx y y' Hy. lattice. Qed. #[export] Instance cup_weq `{laws} `{CUP ≪ l}: Proper (weq ==> weq ==> weq) cup. Proof. apply op_leq_weq_2. Qed. (** distribution of [⊔] over [⊓] *) Lemma cupcap `{laws} `{DL ≪ l}: forall x y z, x ⊔ (y ⊓ z) ≡ (x ⊔ y) ⊓ (x ⊔ z). Proof. intros. apply antisym. lattice. apply cupcap_. Qed. (** characterisation of the preorder by the semilattice operations *) Lemma leq_iff_cup `{laws} `{CUP ≪ l} (x y: X): x ≦ y <-> x ⊔ y ≡ y. Proof. split. intro. hlattice. intro E. rewrite <- E. lattice. Qed. (* this lemma is used twice... *) Lemma comm4 `{laws} `{CUP ≪ l} (a b c d: X): a ⊔ b ⊔ c ⊔ d ≡ (a ⊔ c) ⊔ (b ⊔ d). Proof. lattice. Qed. (** * [(⊓,top)] forms a commutative, idempotent monoid (by duality) *) Lemma capA `{laws} `{CAP ≪ l}: forall x y z, x ⊓ (y ⊓ z) ≡ (x ⊓ y) ⊓ z. Proof. dual @cupA. Qed. Lemma capC `{laws} `{CAP ≪ l}: forall x y, x ⊓ y ≡ y ⊓ x. Proof. dual @cupC. Qed. Lemma capI `{laws} `{CAP ≪ l}: forall x, x ⊓ x ≡ x. Proof. dual @cupI. Qed. Lemma captx `{laws} `{CAP+TOP ≪ l}: forall x, top ⊓ x ≡ x. Proof. dual @cupbx. Qed. Lemma capxt `{laws} `{CAP+TOP ≪ l}: forall x, x ⊓ top ≡ x. Proof. dual @cupxb. Qed. Lemma capbx `{laws} `{CAP+BOT ≪ l}: forall x, bot ⊓ x ≡ bot. Proof. dual @cuptx. Qed. Lemma capxb `{laws} `{CAP+BOT ≪ l}: forall x, x ⊓ bot ≡ bot. Proof. dual @cupxt. Qed. Lemma leq_cap_l `{laws} `{CAP ≪ l} x y: x ⊓ y ≦ x. Proof. lattice. Qed. Lemma leq_cap_r `{laws} `{CAP ≪ l} x y: x ⊓ y ≦ y. Proof. lattice. Qed. #[export] Instance cap_leq `{laws} `{CAP ≪ l}: Proper (leq ==> leq ==> leq) cap. Proof. intros x x' Hx y y' Hy. lattice. Qed. #[export] Instance cap_weq `{laws} `{CAP ≪ l}: Proper (weq ==> weq ==> weq) cap. Proof. apply op_leq_weq_2. Qed. Lemma leq_iff_cap `{laws} `{CAP ≪ l} (x y: X): x ≦ y <-> x ⊓ y ≡ x. Proof. split. intro. hlattice. intro E. rewrite <- E. lattice. Qed. 
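(** as a small illustration (a sketch, with names of our own): the [lattice]
    tactic directly discharges the two absorption laws, which combine the
    [⊔] and [⊓] fragments above *)
Lemma cup_cap_absorb `{laws} `{CUP+CAP ≪ l} (x y: X): x ⊔ (x ⊓ y) ≡ x.
Proof. lattice. Qed.
Lemma cap_cup_absorb `{laws} `{CUP+CAP ≪ l} (x y: X): x ⊓ (x ⊔ y) ≡ x.
Proof. lattice. Qed.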
Lemma capcup `{laws} `{DL ≪ l}: forall x y z, x ⊓ (y ⊔ z) ≡ (x ⊓ y) ⊔ (x ⊓ z). Proof. dual @cupcap. Qed. Lemma cupcap' `{laws} `{DL ≪ l}: forall x y z, (y ⊓ z) ⊔ x ≡ (y ⊔ x) ⊓ (z ⊔ x). Proof. intros. now rewrite cupC, cupcap, 2(cupC x). Qed. Lemma capcup' `{laws} `{DL ≪ l}: forall x y z, (y ⊔ z) ⊓ x ≡ (y ⊓ x) ⊔ (z ⊓ x). Proof. dual @cupcap'. Qed. (** * Properties of negation *) Lemma neg_unique' `{laws} `{BL ≪ l} (x y: X): y ⊓ x ≦ bot -> y ≦ !x. Proof. intros E. rewrite <-(capxt y). rewrite <-(cupneg x). rewrite capcup. rewrite E. lattice. Qed. Lemma neg_unique `{laws} `{BL ≪ l} (x y: X): top ≦ y ⊔ x -> y ⊓ x ≦ bot -> y ≡ !x. Proof. intros Ht Hb. apply antisym. now apply neg_unique'. revert Ht. dual @neg_unique'. Qed. #[export] Instance neg_leq `{laws} `{BL ≪ l}: Proper (leq --> leq) neg. Proof. intros x y E. apply neg_unique'. rewrite <-E, capC. now rewrite capneg. Qed. #[export] Instance neg_weq `{laws} `{BL ≪ l}: Proper (weq ==> weq) neg. Proof. intros x y. rewrite 2weq_spec. intro E; split; apply neg_leq, E. Qed. Lemma negneg `{laws} `{BL ≪ l} (x: X): !!x ≡ x. Proof. symmetry. apply neg_unique. now rewrite cupneg. now rewrite capneg. Qed. Lemma negbot `{laws} `{BL ≪ l}: !bot ≡ top. Proof. symmetry. apply neg_unique; lattice. Qed. Lemma negtop `{laws} `{BL ≪ l}: !top ≡ bot. Proof. dual @negbot. Qed. Lemma negcap' `{laws} `{BL ≪ l} (x y: X): !x ⊔ !y ≦ !(x ⊓ y). Proof. apply leq_cupx; apply neg_leq; lattice. Qed. Lemma negcup `{laws} `{BL ≪ l} (x y: X): !(x ⊔ y) ≡ !x ⊓ !y. Proof. apply antisym. dual @negcap'. rewrite <- (negneg x) at 2. rewrite <- (negneg y) at 2. now rewrite negcap', negneg. Qed. Lemma negcap `{laws} `{BL ≪ l} (x y: X): !(x ⊓ y) ≡ !x ⊔ !y. Proof. dual @negcup. Qed. (** switching negations *) Lemma neg_leq_iff `{laws} `{BL ≪ l} (x y: X): !x ≦ !y <-> y ≦ x. Proof. split. intro E. apply neg_leq in E. now rewrite 2negneg in E. apply neg_leq. Qed. Lemma neg_leq_iff' `{laws} `{BL ≪ l} (x y: X): x ≦ !y <-> y ≦ !x. Proof. now rewrite <- neg_leq_iff, negneg. Qed. Lemma neg_leq_iff'' `{laws} `{BL ≪ l} (x y: X): !x ≦ y <-> !y ≦ x. Proof. now rewrite <- neg_leq_iff, negneg. Qed. Lemma neg_weq_iff `{laws} `{BL ≪ l} (x y: X): !x ≡ !y <-> y ≡ x. Proof. now rewrite 2weq_spec, 2neg_leq_iff. Qed. Lemma neg_weq_iff' `{laws} `{BL ≪ l} (x y: X): x ≡ !y <-> !x ≡ y. Proof. now rewrite <-neg_weq_iff, negneg. Qed. Lemma neg_weq_iff'' `{laws} `{BL ≪ l} (x y: X): !x ≡ y <-> x ≡ !y. Proof. now rewrite <-neg_weq_iff, negneg. Qed. Ltac neg_switch := first [ rewrite neg_leq_iff | rewrite neg_leq_iff' | rewrite neg_leq_iff'' | rewrite <-neg_leq_iff | rewrite neg_weq_iff | rewrite neg_weq_iff' | rewrite neg_weq_iff'' | rewrite <-neg_weq_iff ]. Lemma leq_cap_neg `{laws} `{BL ≪ l} (x y: X): y ≦ x <-> y ⊓ !x ≦ bot. Proof. split. intro E. now rewrite E, capneg. intro E. now rewrite (neg_unique' _ _ E), negneg. Qed. Lemma leq_cap_neg' `{laws} `{BL ≪ l} (x y: X): y ⊓ x ≦ bot <-> y ≦ !x. Proof. rewrite <-(negneg x) at 1. symmetry. apply leq_cap_neg. Qed. Lemma leq_cup_neg `{laws} `{BL ≪ l} (x y: X): x ≦ y <-> top ≦ y ⊔ !x. Proof. dual @leq_cap_neg. Qed. Lemma leq_cup_neg' `{laws} `{BL ≪ l} (x y: X): top ≦ y ⊔ x -> !x ≦ y. Proof. dual @leq_cap_neg'. Qed. 
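(** for instance (a small sketch, with a name of our own): combining
    De Morgan's law for [⊔] with double negation *)
Lemma negcup_cancel `{laws} `{BL ≪ l} (x y: X): !(x ⊔ !y) ≡ !x ⊓ y.
Proof. now rewrite negcup, negneg. Qed.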
(** * Morphisms *) (** an [l]-morphism betwen to sets of operations are defined as expected: the function is requried to preserve only the operations listed in [l] *) Class morphism l {X Y: ops} (f: X -> Y) := { fn_leq: Proper (leq ==> leq) f; fn_weq: Proper (weq ==> weq) f; fn_cup {Hl:CUP ≪ l}: forall x y, f (x ⊔ y) ≡ f x ⊔ f y; fn_cap {Hl:CAP ≪ l}: forall x y, f (x ⊓ y) ≡ f x ⊓ f y; fn_bot {Hl:BOT ≪ l}: f bot ≡ bot; fn_top {Hl:TOP ≪ l}: f top ≡ top; fn_neg {Hl:NEG ≪ l}: forall x, f (!x) ≡ !(f x) }. (** generating a structure by injective embedding *) Lemma laws_of_injective_morphism {h l X Y} {L: laws h Y} {Hl: l ≪ h} f: @morphism l X Y f -> (forall x y, f x ≦ f y -> x ≦ y) -> (forall x y, f x ≡ f y -> x ≡ y) -> laws l X. Proof. intros Hf Hleq Hweq. apply (@lower_lattice_laws _ _ _ L) in Hl. clear L. assert (Hleq_iff: forall x y, f x ≦ f y <-> x ≦ y). split. apply Hleq. apply fn_leq. assert (Hweq_iff: forall x y, f x ≡ f y <-> x ≡ y). split. apply Hweq. apply fn_weq. constructor. constructor. intro. apply Hleq. reflexivity. intros x y z. rewrite <-3Hleq_iff. apply Hl. intros. now rewrite <-Hweq_iff, weq_spec, 2Hleq_iff. intros. rewrite <-3Hleq_iff, fn_cup. apply cup_spec. intros. rewrite <-3Hleq_iff, fn_cap. apply cap_spec. right. intros. apply Hleq. rewrite fn_bot. apply leq_bx. right. intros. apply Hleq. rewrite fn_top. apply leq_xt. intros. apply Hleq. rewrite fn_cup, fn_cap, 2fn_cup, fn_cap. apply cupcap_. intros. rewrite <-Hweq_iff. rewrite fn_cap, fn_neg, fn_bot. apply capneg. intros. rewrite <-Hweq_iff. rewrite fn_cup, fn_neg, fn_top. apply cupneg. Qed. (** * Pointwise extension of a structure *) Definition pw0 {Y X} (f: X) (y: Y) := f. Definition pw1 {Y X} (f: X -> X) (u: Y -> X) (y: Y) := f (u y). Definition pw2 {Y X} (f: X -> X -> X) (u v: Y -> X) (y: Y) := f (u y) (v y). Arguments pw0 {Y X} _ _ /. Arguments pw1 {Y X} _ _ _ /. Arguments pw2 {Y X} _ _ _ _ /. Universe pw. (** As explained above, we use canonical structures for operations inference *) Canonical Structure pw_ops (X: ops) (Y : Type@{pw}) : ops := {| car := Y -> X; leq := pwr leq; weq := pwr weq; cup := pw2 cup; cap := pw2 cap; neg := pw1 neg; bot := pw0 bot; top := pw0 top |}. (** In contrast, we use typeclass resolution for laws inference. Note the level polymorphism in the instance below: laws of level [l] on [X] yield laws of the same level [l] on [Y -> X]. *) #[export] Instance pw_laws `{laws} (Y : Type@{pw}) : laws l (pw_ops X Y). Proof. constructor; simpl; intros. constructor. intros f x. reflexivity. intros f g h ? ? x. now transitivity (g x). setoid_rewrite weq_spec. firstorder. setoid_rewrite cup_spec. firstorder. setoid_rewrite cap_spec. firstorder. right; intros; apply leq_bx. right; intros; apply leq_xt. apply cupcap_. apply capneg. apply cupneg. Qed. (** trick to factorise code in various tactics: make the choice between [leq] or [weq] first-class *) Definition leq_or_weq (b: bool) := if b then @leq else @weq. Arguments leq_or_weq _ {_} (_ _)%ra. Notation "x <=[ b ]= y" := (leq_or_weq b x y) (at level 79): ra_scope. Lemma leq_or_weq_weq `{laws} b: Proper (weq ==> weq ==> iff) (leq_or_weq b). Proof. unfold leq_or_weq; case b; eauto with typeclass_instances. Qed. relation-algebra-v.1.7.9/theories/level.v000066400000000000000000000140401440504774100203310ustar00rootroot00000000000000(** * level: tuples of Booleans identifying a point in the algebraic hierarchy *) Require Import common. 
(** a level specifies which relation algebra operations are considered the interpretation in the model of binary relations is given on the right-hand side. *) Record level := mk_level { has_cup: bool; (** set theoretic union *) has_bot: bool; (** empty relation *) has_cap: bool; (** set theoretic intersection *) has_top: bool; (** full relation *) has_str: bool; (** reflexive transitive closure *) has_cnv: bool; (** converse, or transpose *) has_neg: bool; (** Boolean negation *) has_div: bool (** residuals, or factors *) }. Declare Scope level_scope. Bind Scope level_scope with level. Delimit Scope level_scope with level. (** dual level, for symmetry arguments related to lattices *) Definition dual l := mk_level (has_cap l) (has_top l) (has_cup l) (has_bot l) (has_str l) (has_cnv l) (has_neg l) (has_div l). (** * Level constraints *) (** [lower k k'], or [k ≪ k'], denotes the fact that there are less operations/axioms at level [k] than at level [k'] *) Class lower (k k': level) := mk_lower: let 'mk_level a b c d e f g h := k in let 'mk_level a' b' c' d' e' f' g' h' := k' in is_true (a<< (has_cup h -> has_cup k) /\ (has_bot h -> has_bot k) /\ (has_cap h -> has_cap k) /\ (has_top h -> has_top k) /\ (has_str h -> has_str k) /\ (has_cnv h -> has_cnv k) /\ (has_neg h -> has_neg k) /\ (has_div h -> has_div k). Proof. destruct h; destruct k. unfold lower. rewrite !landb_spec, !le_bool_spec. reflexivity. Qed. (** [≪] is a preorder *) #[export] Instance lower_refl: Reflexive lower. Proof. intro. setoid_rewrite lower_spec. tauto. Qed. #[export] Instance lower_trans: Transitive lower. Proof. intros h k l. setoid_rewrite lower_spec. tauto. Qed. (** * Merging levels *) (** merging two levels: taking the union of their supported operations *) Definition merge h k := mk_level (has_cup h ||| has_cup k) (has_bot h ||| has_bot k) (has_cap h ||| has_cap k) (has_top h ||| has_top k) (has_str h ||| has_str k) (has_cnv h ||| has_cnv k) (has_neg h ||| has_neg k) (has_div h ||| has_div k). Infix "+" := merge: level_scope. Arguments merge _ _: simpl never. (** [merge] is a supremum for the [≪] preorder *) Lemma merge_spec h k l: h+k ≪ l <-> h ≪ l /\ k ≪ l. Proof. setoid_rewrite lower_spec. setoid_rewrite lorb_spec. tauto. Qed. Lemma lower_xmerge h k l: l ≪ h \/ l ≪ k -> l ≪ (h + k). Proof. assert (C:= merge_spec h k (h+k)). intros [E|E]; (eapply lower_trans; [eassumption|]); apply C, lower_refl. Qed. Lemma lower_mergex h k l: h ≪ l -> k ≪ l -> h+k ≪ l. Proof. rewrite merge_spec. tauto. Qed. #[export] Instance merge_lower: Proper (lower ==> lower ==> lower) merge. Proof. intros h k H h' k' H'. apply lower_mergex; apply lower_xmerge; auto. Qed. (** * Tactics for level constraints resolution *) (** simple but efficient tactic, this is the one used by default, we give it as a hint for maximally inserted arguments (typeclasses resolution) *) Ltac solve_lower := solve [ exact eq_refl (* trivial constraint (on closed levels) *) | eassumption (* context assumption *) | repeat match goal with | H: ?h ≪ ?l , H': ?k ≪ ?l |- _ ≪ ?l => (* merge assumptions about [l] *) apply (lower_mergex h k l H) in H'; clear H | H: ?k ≪ ?l |- ?h ≪ _ => (* use assumptions by transitivity *) apply (lower_trans h k l eq_refl H) end ] || fail "could not prove this entailment". #[export] Hint Extern 0 (_ ≪ _) => solve_lower: typeclass_instances. (** heavier and more complete tactic, which we use in a selfdom way *) Ltac solve_lower' := solve [ (repeat match goal with H: _ + _ ≪ _ |- _ => apply merge_spec in H as [? ?] 
end); (repeat apply lower_mergex); auto 100 using lower_xmerge, lower_refl ] || fail "could not prove this entailment". (** tactic used to discriminate unsatisfiable level constraint *) Ltac discriminate_levels := solve [ intros; repeat discriminate || match goal with | H: _ + _ ≪ _ |- _ => apply merge_spec in H as [? ?] end ]. (** * Concrete levels *) Section levels. Notation "1" := true. Notation "0" := false. (** atoms *) Definition MIN := mk_level 0 0 0 0 0 0 0 0. Definition CUP := mk_level 1 0 0 0 0 0 0 0. Definition BOT := mk_level 0 1 0 0 0 0 0 0. Definition CAP := mk_level 0 0 1 0 0 0 0 0. Definition TOP := mk_level 0 0 0 1 0 0 0 0. Definition STR := mk_level 0 0 0 0 1 0 0 0. Definition CNV := mk_level 0 0 0 0 0 1 0 0. Definition NEG := mk_level 0 0 0 0 0 0 1 0. Definition DIV := mk_level 0 0 0 0 0 0 0 1. Local Open Scope level_scope. (** points of particular interest (i.e., corresponding to standard mathematical structures) *) Definition SL := CUP. Definition DL := Eval compute in CUP+CAP. Definition BSL := Eval compute in SL+BOT. Definition BDL := Eval compute in DL+BOT+TOP. Definition BL := Eval compute in BDL+NEG. Definition KA := Eval compute in SL+STR. Definition AA := Eval compute in DL+STR. Definition AL := Eval compute in CAP+CNV. Definition DAL := Eval compute in DL+CNV. Definition BKA := Eval compute in KA+BOT. Definition CKA := Eval compute in KA+CNV. End levels. (* sanity checks for the [solve_lower] tactic *) (* Goal forall l, CUP ≪ l -> AL ≪ l -> CNV+CUP ≪ l. intros. solve_lower || fail "bad". Abort. Goal forall l, KA ≪ l -> AL ≪ l -> CNV+CUP ≪ l. intros. solve_lower || fail "bad". Abort. Goal forall l, CAP ≪ l -> AL ≪ l -> CNV+CUP ≪ l. intros. Fail solve_lower. Abort. *) relation-algebra-v.1.7.9/theories/lset.v000066400000000000000000000106131440504774100201730ustar00rootroot00000000000000(** * lset: finite sets represented as lists *) (** This module is used quite intensively, as finite sets are pervasives (free variables, summations, partial derivatives...): We implement finite sets as simple lists, without any additional structure: this allows for very simple operations and specifications, without the need for keeping well-formedness hypotheses around. When a bit of efficiency is required, we use sorted listed without duplicates as a special case (but without ensuring that these lists are sorted: we managed to avoid such a need in our proofs). We declare these finite sets a sup-semilattice with bottom element, allowing us to use the lattice tactics and theorems in a transparent way. *) Require Import lattice comparisons. Require Export List. Export ListNotations. Set Implicit Arguments. (** * semi-lattice of finite sets as simple lists *) (** two lists are equal when they contain the same elements, independently of their position or multiplicity *) Universe lset. Canonical Structure lset_ops (A:Type@{lset}) := lattice.mk_ops (list A) (fun h k => forall a, In a h -> In a k) (fun h k => forall a, In a h <-> In a k) (@app A) (@app A) (assert_false id) (@nil A) (@nil A). (** the fact that this makes a semi-lattice is almost trivial *) #[export] Instance lset_laws (A:Type@{lset}) : lattice.laws BSL (lset_ops A). Proof. constructor; simpl; try discriminate. firstorder. firstorder. setoid_rewrite in_app_iff. firstorder. firstorder. Qed. (** the [map] function on lists is actually a monotone function over the represented sets *) #[export] Instance map_leq A B (f: A -> B): Proper (leq ==> leq) (map f). Proof. intro h. induction h as [|a h IH]; intros k H. apply leq_bx. 
intros i [<-|I]. apply in_map. apply H. now left. apply IH. 2: assumption. intros ? ?. apply H. now right. Qed. #[export] Instance map_weq A B (f: A -> B): Proper (weq ==> weq) (map f) := op_leq_weq_1. (** [map] is extensional *) #[export] Instance map_compat A B: Proper (pwr eq ==> eq ==> eq) (@map A B). Proof. intros f g H h k <-. apply map_ext, H. Qed. (** belonging to a singleton *) Lemma in_singleton A (x y: A): In x [y] <-> y=x. Proof. simpl. tauto. Qed. (** the following tactic replaces all occurrences of [cons] with degenerated concatenations, so that the [lattice] can subsequently handle them *) Ltac fold_cons := repeat match goal with | |- context[@cons ?A ?x ?q] => (constr_eq q (@nil A); fail 1) || change (x::q) with ([x]++q) | H: context[!cons ?A ?x ?q] |- _ => (constr_eq q (@nil A); fail 1) || change (x::q) with ([x]++q) in H end. (** * sorted lists without duplicates *) (** when the elements come with a [cmpType] structure, one can perform sorted lists operations *) Section m. Context {A: cmpType}. (** sorted merge of sorted lists *) Fixpoint union (l1: list A) := match l1 with | nil => fun l2 => l2 | h1::t1 => let fix union' l2 := match l2 with | nil => l1 | h2::t2 => match cmp h1 h2 with | Eq => h1::union t1 t2 | Lt => h1::union t1 l2 | Gt => h2::union' t2 end end in union' end. (** sorted insertion in a sorted list *) Fixpoint insert (i: A) l := match l with | nil => i::nil | j::q => match cmp i j with | Eq => l | Lt => i::l | Gt => j::insert i q end end. (** weak specification: [union] actually performs an union *) Lemma union_app: forall h k, union h k ≡ h ++ k. Proof. induction h as [|x h IHh]; simpl union. reflexivity. induction k as [|y k IHk]. lattice. case cmp_spec. intros ->. fold_cons. rewrite IHh. lattice. intros _. fold_cons. rewrite IHh. lattice. intros _. fold_cons. rewrite IHk. lattice. Qed. (** and [insert] actually performs an insertion *) Lemma insert_union: forall i l, insert i l = union [i] l. Proof. induction l; simpl. reflexivity. case cmp_spec. congruence. reflexivity. now rewrite IHl. Qed. Lemma insert_app: forall i l, insert i l ≡ [i] ++ l. Proof. intros. rewrite insert_union. apply union_app. Qed. End m. Module Fix. Canonical Structure lset_ops A := lattice.mk_ops (list A) (fun h k => forall a, In a h -> In a k) (fun h k => forall a, In a h <-> In a k) (@app A) (@app A) (assert_false id) (@nil A) (@nil A). End Fix. relation-algebra-v.1.7.9/theories/lsyntax.v000066400000000000000000000140331440504774100207260ustar00rootroot00000000000000(** * lsyntax: syntactic model for flat structures (lattice operations) *) Require Export positives. Require Import comparisons lattice lset sups. Set Implicit Arguments. (** * Free syntactic model *) Section s. Variable A: Set. (* [A = ord n] in KAT proofs [A = positive] in reification, (and possibly in computations in the future) *) (** Boolean lattice expressions over a set [A] of variables *) Inductive expr := | e_bot | e_top | e_cup (e f: expr) | e_cap (e f: expr) | e_neg (e: expr) | e_var (a: A). (** level of an expression: the set of operations that appear in that expression *) Fixpoint e_level e := match e with | e_bot => BOT | e_top => TOP | e_cup x y => CUP + e_level x + e_level y | e_cap x y => CAP + e_level x + e_level y | e_neg x => BL + e_level x (* negation is ill-defined without the other Boolean operations, whence the [BL] rather than [NEG] *) | e_var _ => MIN end%level. Section e. Context {X: ops}. Variable f: A -> X. 
(** interpretation of an expression into an arbitray Boolean lattice structure, given an assignation [f] of the variables *) Fixpoint eval e: X := match e with | e_bot => bot | e_top => top | e_cup x y => eval x ⊔ eval y | e_cap x y => eval x ⊓ eval y | e_neg x => ! eval x | e_var a => f a end. End e. Section l. Variable l: level. (** * (In)equality of syntactic expressions. Rather than defining (in)equality of syntactic expressions as inductive predicates, we exploit the standard impredicative encoding of such predicates: two expressions are equal (resp., lower or equal) iff they are equal (resp., lower or equal) under any interpretation. These definitions are parametrised by the level [l] at which one wants to interpret the expressions: this allows us to capture once and for all the equational theories of each flat structure. *) Definition e_leq (x y: expr) := forall X (L: laws l X) (f: A -> X), eval f x ≦ eval f y. Definition e_weq (x y: expr) := forall X (L: laws l X) (f: A -> X), eval f x ≡ eval f y. (** by packing syntactic expressions and the above predicates into a canonical structure, we get all notations for free *) Canonical Structure expr_ops := {| car := expr; leq := e_leq; weq := e_weq; cup := e_cup; cap := e_cap; neg := e_neg; bot := e_bot; top := e_top |}. (** we easily show that we get a model so that we immediately benefit from all lemmas about flat structures *) Global Instance expr_laws: laws l expr_ops. Proof. constructor; try right. constructor. intros x X L f. reflexivity. intros x y z H H' X L f. transitivity (eval f y); auto. intros x y. split. intro H. split; intros X L f. now apply weq_leq, H. now apply weq_geq, H. intros [H H'] X L f. apply antisym; auto. intros Hl x y z. split. intro H. split; intros X L f; specialize (H X L f); simpl in H; hlattice. intros [H H'] X L f. simpl. apply cup_spec; auto. intros Hl x y z. split. intro H. split; intros X L f; specialize (H X L f); simpl in H; hlattice. intros [H H'] X L f. simpl. apply cap_spec; auto. intros x X L f. apply leq_bx. intros x X L f. apply leq_xt. intros Hl x y z X L f. apply cupcap_. intros Hl x X L f. apply capneg. intros Hl x X L f. apply cupneg. Qed. (** the interpretation function is an homomorphism, so that it preserves all finite sups and infs *) Lemma eval_sup I (J: list I) (f: I -> expr) (X: lattice.ops) (g: A -> X): eval g (sup (X:=expr_ops) f J) = \sup_(i\in J) eval g (f i). Proof. apply f_sup_eq; now f_equal. Qed. Lemma eval_inf I (J: list I) (f: I -> expr) (X: lattice.ops) (g: A -> X): eval g (sup (X:=dual expr_ops) f J) = \inf_(i\in J) eval g (f i). Proof. apply (f_sup_eq (Y:=dual X)); now f_equal. Qed. (** [e_var] is a unit for the underlying monad *) Lemma eval_var (e: expr_ops): eval e_var e = e. Proof. induction e; simpl; congruence. Qed. End l. End s. Arguments e_var [A] a. Arguments e_bot {A}. Arguments e_top {A}. Declare Scope last_scope. Bind Scope last_scope with expr. Delimit Scope last_scope with last. (** additional notations, to specify explicitly at which level expressions are considered, or to work directly with the bare constructors (by opposition with the encapsulated ones, through lattice.ops)*) Notation expr_ l := (car (expr_ops _ l)). Notation "x <==_[ l ] y" := (@leq (expr_ops _ l) x%last y%last) (at level 79): ra_scope. Notation "x ==_[ l ] y" := (@weq (expr_ops _ l) x%last y%last) (at level 79): ra_scope. Infix "⊔" := e_cup: last_scope. Infix "⊓" := e_cap: last_scope. Notation "! x" := (e_neg x): last_scope. 
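(** for instance (an illustrative sketch, with a name of our own): thanks to
    [expr_laws], generic lattice reasoning -- e.g. the [lattice] tactic --
    applies directly to syntactic expressions, and the resulting laws then
    hold in every model, by definition of [e_weq] *)
Lemma expr_absorption {A: Set} (a b: A):
  e_var a ⊓ (e_var a ⊔ e_var b) ==_[BL] e_var a.
Proof. lattice. Qed.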
(** * Comparing expressions *) (** we get a [cmpType] on expressions if the set of variable is such (currently used in ugregex_dec) *) Section expr_cmp. Context {A: cmpType}. Fixpoint expr_compare (x y: expr A) := match x,y with | e_bot, e_bot | e_top, e_top => Eq | e_var a, e_var b => cmp a b | e_cup x x', e_cup y y' | e_cap x x', e_cap y y' => lex (expr_compare x y) (expr_compare x' y') | e_neg x, e_neg y => expr_compare x y | e_bot, _ => Lt | _, e_bot => Gt | e_top, _ => Lt | _, e_top => Gt | e_var _, _ => Lt | _, e_var _ => Gt | e_cup _ _, _ => Lt | _, e_cup _ _ => Gt | e_cap _ _, _ => Lt | _, e_cap _ _ => Gt end. Lemma expr_compare_spec: forall x y, compare_spec (x=y) (expr_compare x y). Proof. induction x; destruct y; try (constructor; congruence). - eapply lex_spec; eauto. intuition congruence. - eapply lex_spec; eauto. intuition congruence. - simpl; case IHx; constructor; congruence. - simpl; case cmp_spec; constructor; congruence. Qed. Canonical Structure cmp_expr := mk_simple_cmp _ expr_compare_spec. (** variables appearing in an expression ([A] needs to be a [cmpType] so that the resulting list is without duplicates) *) Fixpoint vars (e: expr A): list A := match e with | e_bot | e_top => [] | e_cup x y | e_cap x y => union (vars x) (vars y) | e_neg x => vars x | e_var x => [x] end. End expr_cmp. relation-algebra-v.1.7.9/theories/matrix.v000066400000000000000000000526771440504774100205500ustar00rootroot00000000000000(** * matrix: constructing matrices over the various typed structures Given an [l]-monoid structure, we build an [l]-monoid of matrices above it. This works whenever we have unions and bottom elements ([BSL ≪ l]) for structures without residuals; to build residuals, we moreover need intersections and top elements ([BDL ≪ l]). We do these constructions once and forall, thanks to our first-class level constraints. We can build rectangular matrices, and not only square ones, thanks to our typed structures: [MX n m] denotes the set of [(n,m)]-matrices. *) Require Export comparisons. Require Import kleene sums normalisation. Import lset Fix ordinal. (** A matrix of size [(n,m)] over a set [X] is just a curried function from indices ([ord n ⋅ ord m]) to [X] *) Definition mx X n m := ord n -> ord m -> X. (** * [(n,m)]-matrices as a lattice *) (** when X is a lattice, matrix lattice operations and laws are obtained for free, by two successive pointwise liftings of X *) Canonical Structure mx_lattice_ops (X: lattice.ops) n m := lattice.mk_ops (mx X n m) leq weq cup cap neg bot top. #[export] Instance mx_lattice_laws `{L: lattice.laws} n m: lattice.laws l (mx_lattice_ops X n m). Proof. apply pw_laws. Qed. (** supremums (or sums) are computed pointwise *) Lemma mx_sup `{X: lattice.ops} n m I J (f: I -> mx X n m) i j: (\sup_(x\in J) f x) i j = \sup_(x\in J) f x i j. Proof. apply (f_sup_eq (fun x: mx X n m => x i j)); now f_equal. Qed. Section d. Context {X: Type}. Notation mx := (mx X). (** ** scalar (1,1)-matrices *) Definition scal_mx x: mx 1 1 := fun _ _ => x. Definition mx_scal (M: mx 1 1) := M ord0 ord0. (** ** block matrix operations *) (** following ssreflect methodology, we decompose the standard block matrix construction, with four quadrants, into two constructions, for column and line block matrices. *) Definition col_mx {n1 n2 m} (M1: mx n1 m) (M2: mx n2 m): mx (n1+n2) m := fun i j => match split i with | inl i1 => M1 i1 j | inr i2 => M2 i2 j end. 
Definition row_mx {n m1 m2} (M1: mx n m1) (M2: mx n m2): mx n (m1+m2) := fun i j => match split j with | inl j1 => M1 i j1 | inr j2 => M2 i j2 end. Definition blk_mx {n1 n2 m1 m2} (A: mx n1 m1) (B: mx n1 m2) (C: mx n2 m1) (D: mx n2 m2) := col_mx (row_mx A B) (row_mx C D). Definition tsub_mx {n1 n2 m} (M: mx (n1+n2) m): mx n1 m := fun i j => M (lshift i) j. Definition bsub_mx {n1 n2 m} (M: mx (n1+n2) m): mx n2 m := fun i j => M (rshift i) j. Definition lsub_mx {n m1 m2} (M: mx n (m1+m2)): mx n m1 := fun i j => M i (lshift j). Definition rsub_mx {n m1 m2 }(M: mx n (m1+m2)): mx n m2 := fun i j => M i (rshift j). Definition sub00_mx {n1 n2 m1 m2} (M: mx (n1+n2) (m1+m2)) := tsub_mx (lsub_mx M). Definition sub01_mx {n1 n2 m1 m2} (M: mx (n1+n2) (m1+m2)) := tsub_mx (rsub_mx M). Definition sub10_mx {n1 n2 m1 m2} (M: mx (n1+n2) (m1+m2)) := bsub_mx (lsub_mx M). Definition sub11_mx {n1 n2 m1 m2} (M: mx (n1+n2) (m1+m2)) := bsub_mx (rsub_mx M). End d. (** all block matrix operations are monotone *) #[export] Instance col_mx_leq `{L: lattice.laws} n1 n2 m: Proper (leq ==> leq ==> leq) (@col_mx X n1 n2 m). Proof. intros ? ? H ? ? H' i j. unfold col_mx. case split_spec; intros ? ->. apply H. apply H'. Qed. #[export] Instance col_mx_weq `{L: lattice.laws} n1 n2 m: Proper (weq ==> weq ==> weq) (@col_mx X n1 n2 m) := op_leq_weq_2. Lemma col_mx_leq_iff `{L: lattice.laws} n1 n2 m M1 M2 N1 N2: @col_mx X n1 n2 m M1 M2 ≦ col_mx N1 N2 <-> M1 ≦N1 /\ M2 ≦N2. Proof. split. unfold col_mx. intro H. split; intros i j. generalize (H (lshift i) j). now rewrite split_lshift. generalize (H (rshift i) j). now rewrite split_rshift. intros [? ?]. now apply col_mx_leq. Qed. #[export] Instance row_mx_leq `{L: lattice.laws} n m1 m2: Proper (leq ==> leq ==> leq) (@row_mx X n m1 m2). Proof. intros ? ? H ? ? H' i j. unfold row_mx. case split_spec; intros i' ->. apply H. apply H'. Qed. #[export] Instance row_mx_weq `{L: lattice.laws} n m1 m2: Proper (weq ==> weq ==> weq) (@row_mx X n m1 m2) := op_leq_weq_2. Lemma row_mx_leq_iff `{L: lattice.laws} n m1 m2 M1 M2 N1 N2: @row_mx X n m1 m2 M1 M2 ≦ row_mx N1 N2 <-> M1 ≦N1 /\ M2 ≦N2. Proof. split. unfold row_mx. intro H. split; intros i j. generalize (H i (lshift j)). now rewrite split_lshift. generalize (H i (rshift j)). now rewrite split_rshift. intros [? ?]. now apply row_mx_leq. Qed. #[export] Instance blk_mx_leq `{L: lattice.laws} n1 n2 m1 m2: Proper (leq ==> leq ==> leq ==> leq ==> leq) (@blk_mx X n1 n2 m1 m2). Proof. do 12 intro. now apply col_mx_leq; apply row_mx_leq. Qed. #[export] Instance blk_mx_weq `{L: lattice.laws} n1 n2 m1 m2: Proper (weq ==> weq ==> weq ==> weq ==> weq) (@blk_mx X n1 n2 m1 m2). Proof. do 12 intro. now apply col_mx_weq; apply row_mx_weq. Qed. Lemma blk_mx' `{L: lattice.laws} {n1 n2 m1 m2} A B C D: @blk_mx X n1 n2 m1 m2 A B C D ≡ row_mx (col_mx A C) (col_mx B D). Proof. intros i j. unfold blk_mx, row_mx, col_mx. case split; case split; intros; reflexivity. Qed. Lemma to_col_mx `{L: lattice.laws} {n1 n2 m} (M: mx X (n1+n2) m): M ≡ col_mx (tsub_mx M) (bsub_mx M). Proof. intros i j. unfold col_mx, tsub_mx, bsub_mx. now case split_spec; intros i' ->. Qed. Lemma to_row_mx `{L: lattice.laws} {n m1 m2} (M: mx X n (m1+m2)): M ≡ row_mx (lsub_mx M) (rsub_mx M). Proof. intros i j. unfold row_mx, lsub_mx, rsub_mx. now case split_spec; intros i' ->. Qed. Lemma to_blk_mx `{L: lattice.laws} {n1 n2 m1 m2} (M: mx X (n1+n2) (m1+m2)): M ≡ blk_mx (sub00_mx M) (sub01_mx M) (sub10_mx M) (sub11_mx M). Proof. rewrite to_row_mx at 1. rewrite to_col_mx. reflexivity. Qed. 
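(** as a direct consequence (an illustrative sketch, with a name of our own):
    two matrices coincide as soon as their four blocks do *)
Lemma blk_mx_weq_blockwise `{L: lattice.laws} {n1 n2 m1 m2} (M N: mx X (n1+n2) (m1+m2)):
  sub00_mx M ≡ sub00_mx N -> sub01_mx M ≡ sub01_mx N ->
  sub10_mx M ≡ sub10_mx N -> sub11_mx M ≡ sub11_mx N -> M ≡ N.
Proof.
  intros H00 H01 H10 H11. rewrite (to_blk_mx M), (to_blk_mx N).
  now apply blk_mx_weq.
Qed.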
Lemma col_mx_cup `{L: lattice.laws} n1 n2 m M M' N N': @col_mx X n1 n2 m (M ⊔ M') (N ⊔ N') ≡ col_mx M N ⊔ col_mx M' N'. Proof. intros i j. unfold col_mx. simpl. case split; reflexivity. Qed. Lemma row_mx_cup `{L: lattice.laws} n m1 m2 M M' N N': @row_mx X n m1 m2 (M ⊔ M') (N ⊔ N') ≡ row_mx M N ⊔ row_mx M' N'. Proof. intros i j. unfold row_mx. simpl. case split; reflexivity. Qed. (** * [(n,m)]-matrices as a monoid *) (** when [X] is at least an idempotent semiring ([BSL]), the set of matrices has a monoid structure *) (** Note: since the underlying monoid ([X]) is typed a priori, we could do a much more general matrix construction, using heavily dependent types. We do not need it since we actually construct matrices on [bool], [Prop], and [regex], which are all flat monoids. Therefore, we simply fix a object [u] of [X], and we construct matrices on [X u u]. ([u] will be instantiated with the unique objects of the aformentioned models.) *) Section m. Variable X: ops. Variable u: ob X. Notation U := (car (@mor X u u)). Notation mx := (mx U). (** identity matrix *) Definition mx_one n: mx n n := fun i j => ofbool (eqb_ord i j). (** matrix product *) Definition mx_dot n m p (M: mx n m) (N: mx m p): mx n p := fun i k => \sum_(j \inf_(i \inf_(i (M j i)°. (** Kleene star of a matrix, defined inductively, by block matrix constructions *) (** we follow standard textbooks and papers, except that we define first an auxiliary function, which we iterate to get the final matrix construction: this allows us to state easily that the actual block decomposition used doesn't matter [matrix_ext.mx_str_blk] *) Definition mx_str_build n m (sn: mx n n -> mx n n) (sm: mx m m -> mx m m) (M: mx (n+m) (n+m)): mx (n+m) (n+m) := let a := sub00_mx M in let b := sub01_mx M in let c := sub10_mx M in let d := sub11_mx M in let e := sm d in let be:= b**e in let ec:= e**c in let f := sn (a ⊔ be**c) in let fbe := f**be in let ecf := ec**f in blk_mx f fbe ecf (e ⊔ ecf**be). Fixpoint mx_str n: mx n n -> mx n n := match n with | O => fun M => M | S n => mx_str_build 1 n (fun M => scal_mx ((mx_scal M)^*)) (mx_str n) end. (** strict iteration is derived from Kleene star *) Definition mx_itr n M := M ** mx_str n M. (** packing all operations as a canonical structure *) Canonical Structure mx_ops := mk_ops nat _ mx_dot mx_one mx_itr mx_str mx_cnv mx_ldv mx_rdv. End m. (** ** matrices form a BSL-monoid *) (** we prove that the matrix constructions are correct in two steps: we first get the BSL-structure (idempotent semirings), and then we add the laws corresponding to the other operations. This allows us to benefit from tools about idempotent semirings for the latter proofs, notably for Kleene star. *) Section bsl. Context `{L: laws} `{Hl: BSL ≪ l} {u: ob X}. Notation U := (car (@mor X u u)). Notation mx := (mx U). (** matrix product is associative *) Lemma mx_dotA n m p q (M: mx n m) N (P: mx p q): M⋅(N⋅P) ≡ (M⋅N)⋅P. Proof. intros i j. simpl. unfold mx_dot. setoid_rewrite dotxsum. rewrite sup_swap. apply sup_weq; trivial. intro. rewrite dotsumx. apply sup_weq; trivial. intro. apply dotA. Qed. (** and admits identities as left and right units *) Lemma mx_dot1x n m (M: mx n m): 1⋅M ≡ M. Proof. intros i j. simpl. unfold mx_dot, mx_one. apply antisym. apply leq_supx. intros i' _. case eqb_ord_spec; simpl. intros <-. apply weq_leq, dot1x. intros _. rewrite dot0x. apply leq_bx. rewrite <- (leq_xsup _ _ i) by apply in_seq. rewrite eqb_refl. simpl. now rewrite dot1x. Qed. Lemma mx_dotx1 n m (M: mx m n): M⋅1 ≡ M. Proof. intros i j. simpl. 
unfold mx_dot, mx_one. apply antisym. apply leq_supx. intros i' _. case eqb_ord_spec; simpl. intros <-. apply weq_leq, dotx1. intros _. rewrite dotx0. apply leq_bx. rewrite <- (leq_xsup _ _ j) by apply in_seq. rewrite eqb_refl. simpl. now rewrite dotx1. Qed. (** matrix product is monotone *) Lemma mx_dot_leq n m p: Proper (leq ==> leq ==> leq) (mx_dot X u n m p). Proof. intros ? ? H ? ? H' i j. apply sup_leq; trivial. intro k. apply dot_leq. apply H. apply H'. Qed. (** matrix product distributes over the sup-semilattice structure *) Lemma mx_dotplsx_ n m p (M N: mx n m) (P: mx m p): (M+N)⋅P ≦ M⋅P+N⋅P. Proof. intros i j. simpl. unfold mx_dot. setoid_rewrite dotplsx. now rewrite supcup. Qed. Lemma mx_dotxpls_ n m p (M N: mx m n) (P: mx p m): P⋅(M+N) ≦ P⋅M+P⋅N. Proof. intros i j. simpl. unfold mx_dot. setoid_rewrite dotxpls. now rewrite supcup. Qed. Lemma mx_dot0x_ n m p (P: mx m p): (zer n m)⋅P ≦ 0. Proof. intros i j. simpl. unfold mx_dot. setoid_rewrite dot0x. apply leq_supx. intros. apply leq_bx. Qed. Lemma mx_dotx0_ n m p (P: mx p m): P⋅(zer m n) ≦ 0. Proof. intros i j. simpl. unfold mx_dot. setoid_rewrite dotx0. apply leq_supx. intros. apply leq_bx. Qed. (** packing everything, we get a [BSL]-monoid structure *) Instance mx_bsl_laws: laws BSL (mx_ops X u). Proof. constructor; try discriminate; repeat right. intros. apply lower_lattice_laws. exact mx_dotA. exact mx_dot1x. exact mx_dotx1. exact mx_dot_leq. exact mx_dotplsx_. exact mx_dotxpls_. exact mx_dot0x_. exact mx_dotx0_. Qed. (** ** properties of block matrix multiplication *) Lemma mx_dot_colx n1 n2 m p (M1: mx n1 m) (M2: mx n2 m) (N: mx m p): col_mx M1 M2 ⋅ N ≡ col_mx (M1⋅N) (M2⋅N). Proof. intros i j. simpl. unfold mx_dot, col_mx. now case split_spec; intros i' ->. Qed. Lemma mx_dot_xrow n m1 m2 p (M1: mx n m1) (M2: mx n m2) (N: mx p n): N ⋅ row_mx M1 M2 ≡ row_mx (N⋅M1) (N⋅M2). Proof. intros i j. simpl. unfold mx_dot, row_mx. now case split_spec; intros i' ->. Qed. Lemma mx_dot_colrow n1 n2 m p1 p2 (M1: mx n1 m) (M2: mx n2 m) (N1: mx m p1) (N2: mx m p2): col_mx M1 M2 ⋅ row_mx N1 N2 ≡ blk_mx (M1⋅N1) (M1⋅N2) (M2⋅N1) (M2⋅N2). Proof. now rewrite mx_dot_colx, 2mx_dot_xrow. Qed. Lemma mx_dot_rowcol n m1 m2 p (M1: mx n m1) (M2: mx n m2) (N1: mx m1 p) (N2: mx m2 p): row_mx M1 M2 ⋅ col_mx N1 N2 ≡ M1⋅N1 + M2⋅N2. Proof. intros i j. setoid_rewrite sup_cut. unfold row_mx, col_mx. setoid_rewrite split_lshift. setoid_rewrite split_rshift. reflexivity. Qed. Lemma mx_dot_blk n1 n2 m1 m2 p1 p2 (M11: mx n1 m1) (M12: mx n1 m2) (M21: mx n2 m1) (M22: mx n2 m2) (N11: mx m1 p1) (N12: mx m1 p2) (N21: mx m2 p1) (N22: mx m2 p2): blk_mx M11 M12 M21 M22 ⋅ blk_mx N11 N12 N21 N22 ≡ blk_mx (M11⋅N11+M12⋅N21) (M11⋅N12+M12⋅N22) (M21⋅N11+M22⋅N21) (M21⋅N12+M22⋅N22). Proof. setoid_rewrite blk_mx' at 2. setoid_rewrite mx_dot_colrow. now rewrite 4mx_dot_rowcol. Qed. Lemma one_blk_mx n m: (1: mx (n+m) (n+m)) ≡ blk_mx 1 0 0 1. Proof. intros i j. unfold blk_mx, col_mx, row_mx. case split_spec; intros [i' Hi] ->; case split_spec; intros [j' Hj] ->. reflexivity. simpl. unfold mx_one. now setoid_rewrite eqb_ord_lrshift. simpl. unfold mx_one. now setoid_rewrite eqb_ord_rlshift. simpl. unfold mx_one. now setoid_rewrite eqb_ord_rrshift. Qed. End bsl. (** ** matrices have a converse if the underlying monoid has one *) Section cbsl. Context `{L: laws} `{Hl: BSL+CNV ≪ l} {u: ob X}. Notation U := (car (@mor X u u)). Notation mx := (mx U). 
Canonical Structure lset_ops A := lattice.mk_ops (list A) (fun h k => forall a, List.In a h -> List.In a k) (fun h k => forall a, List.In a h <-> List.In a k) (@app A) (@app A) (assert_false id) (@nil A) (@nil A). Lemma mx_cnvdot_ n m p (M: mx n m) (N: mx m p): (M⋅N)° ≦ N°⋅M°. Proof. intros i j. setoid_rewrite cnvsum. now setoid_rewrite cnvdot. Qed. Lemma mx_cnv_invol n m (M: mx n m): M°° ≡ M. Proof. intros i j. apply cnv_invol. Qed. Lemma mx_cnv_leq n m: Proper (leq ==> leq) (mx_cnv X u n m). Proof. intros ? ? H i j. apply cnv_leq, H. Qed. Lemma mx_cnv_ext n m (M: mx n m): M ≦ M⋅M°⋅M. Proof. intros i j. simpl. unfold mx_dot, mx_cnv. setoid_rewrite dotsumx. rewrite <- (leq_xsup _ _ i) by apply in_seq. rewrite <- (leq_xsup _ _ j) by apply in_seq. apply cnv_ext. Qed. End cbsl. (** ** matrices have a Kleene star if the underlying monoid has one *) Section ka. Context `{L: laws} `{Hl: BKA ≪ l} {u: ob X}. Notation U := (car (@mor X u u)). Notation mx := (mx U). Existing Instance mx_bsl_laws. Section build. (** *** properties of the auxiliary [mx_str_build] functional *) (** we prove a slightly more general property about the auxiliary [mx_str_build] functionnal, so that we can reuse these proofs to establish properties of Kleene star on arbitrary block matrices *) Variables (n m: nat) (sn: mx n n -> mx n n) (sm: mx m m -> mx m m). Notation s := (mx_str_build X u n m sn sm). (** we want to show that the auxiliary [mx_str_build] functionnal preserves some invariants ; this is easier to state with the following definition *) Definition transfers (P: forall {p}, mx p p -> mx p p -> Prop) := (forall M, P M (sn M)) -> (forall M, P M (sm M)) -> (forall M, P M (s M)). (** dedicated tactic to unfold [mx_str_build] without losing the sharing between the various expressions *) Ltac unfold_s M := set (a := sub00_mx M); set (b := sub01_mx M); set (c := sub10_mx M); set (d := sub11_mx M); set (e := sm d); set (f := sn (a + (b⋅e)⋅c)); change (s M) with (blk_mx f (f⋅(b⋅e)) ((e⋅c)⋅f) (e+((e⋅c)⋅f)⋅(b⋅e))). (** [mx_str_build] preserves the left star unfolding axiom *) Lemma mx_str_build_unfold_l: transfers (fun n M sM => 1+M⋅sM ≦ sM). Proof. intros Hf He M. rewrite (to_blk_mx M) at 1. unfold_s M. specialize (Hf (a+b⋅e⋅c)). specialize (He d). fold e in He. fold f in Hf. clearbody a b c d e f. clear - He Hf L Hl. apply leq_cupx. (* TODO: optimize the line below *) rewrite one_blk_mx. apply blk_mx_leq; hlattice. rewrite mx_dot_blk. apply blk_mx_leq. rewrite <- Hf at 3. ra. rewrite <- Hf at 3. ra. rewrite <- He at 2. ra. rewrite <- He at 5 6. ra. Qed. (** [mx_str_build] preserves the left induction rule for star *) Lemma mx_str_build_ind_l: transfers (fun n M sM => forall p (N: mx n p), M⋅N ≦ N -> sM⋅N ≦ N). Proof. intros Hf He M p N. rewrite (to_blk_mx M) at 1. unfold_s M. rewrite (to_col_mx N). set (h := tsub_mx N). set (k:= bsub_mx N). specialize (Hf (a+b⋅e⋅c) p h). specialize (He d p k). fold e in He. fold f in Hf. clearbody a b c d e f h k. clear - He Hf L Hl. rewrite 2blk_mx', 2mx_dot_rowcol, 4mx_dot_colx. setoid_rewrite <-col_mx_cup. setoid_rewrite col_mx_leq_iff. rewrite 2cup_spec. intros [[Ha Hb] [Hc Hd]]. specialize (He Hd). revert Hf. rewrite 2dotplsx, 4cup_spec, <-!dotA. intro Hf. apply apply in Hf; repeat split; repeat (trivial; rewrite ?Ha, ?Hb, ?Hc, ?Hd, ?He, ?Hf). Qed. (** [mx_str_build] preserves the right induction rule for star *) Lemma mx_str_build_ind_r: transfers (fun n M sM => forall p (N: mx p n), N⋅M ≦ N -> N⋅sM ≦ N). Proof. intros Hf He M p N. rewrite (to_blk_mx M) at 1. unfold_s M. 
rewrite (to_row_mx N). set (h := lsub_mx N). set (k:= rsub_mx N). specialize (Hf (a+b⋅e⋅c) p h). specialize (He d p k). fold e in He. fold f in Hf. clearbody a b c d e f h k. clear - He Hf L Hl. unfold blk_mx. rewrite 2mx_dot_rowcol, 4mx_dot_xrow. setoid_rewrite <-row_mx_cup. setoid_rewrite row_mx_leq_iff. rewrite 2cup_spec. intros [[Ha Hb] [Hc Hd]]. specialize (He Hd). revert Hf. rewrite 2dotxpls, 4cup_spec, !dotA. intro Hf. apply apply in Hf; repeat split; repeat (trivial; rewrite ?Ha, ?Hb, ?Hc, ?Hd, ?He, ?Hf). Qed. End build. (** *** packing everything by induction to get properties of the Kleene star matrix construction *) Lemma mx_str_unfold_l n (M: mx n n): 1 + M ⋅ mx_str _ _ _ M ≦ mx_str _ _ _ M. Proof. induction n. intro i; elim (ord_0_empty i). simpl mx_str. apply (mx_str_build_unfold_l 1); trivial. intros N i j. simpl. unfold mx_dot, scal_mx, mx_one. simpl. setoid_rewrite ord0_unique. simpl. now rewrite cupxb, <-str_unfold_l. Qed. Lemma mx_str_refl n (M: mx n n): 1 ≦ mx_str _ _ _ M. Proof. rewrite <-mx_str_unfold_l. apply leq_xcup. now left. Qed. Lemma mx_str_cons n (M: mx n n): M ⋅ mx_str _ _ _ M ≦ mx_str _ _ _ M. Proof. rewrite <-mx_str_unfold_l at 2. apply leq_xcup. now right. Qed. Lemma mx_str_ind_l n m (M: mx n n) (N: mx n m): M ⋅ N ≦ N -> mx_str _ _ _ M ⋅ N ≦ N. Proof. revert m N. induction n. intros ? ? _ i; elim (ord_0_empty i). simpl mx_str. apply (mx_str_build_ind_l 1); trivial. clear M IHn. intros M p N H i j. simpl. unfold mx_dot, scal_mx. simpl. setoid_rewrite ord0_unique. rewrite cupxb. apply str_ind_l. rewrite <-(H ord0 j) at 2. apply weq_geq. apply cupxb. Qed. Lemma mx_str_ind_r n m (M: mx n n) (N: mx m n): N ⋅ M ≦ N -> N ⋅ mx_str _ _ _ M ≦ N. Proof. revert m N. induction n. intros ? ? _ ? i; elim (ord_0_empty i). simpl mx_str. apply (mx_str_build_ind_r 1); trivial. clear M IHn. intros M p N H i j. simpl. unfold mx_dot, scal_mx. simpl. setoid_rewrite ord0_unique. rewrite cupxb. apply str_ind_r. rewrite <-(H i ord0) at 2. apply weq_geq. apply cupxb. Qed. End ka. (** * Exported matrix construction Matrices on [X] form an [l]-monoid provided that 1/ [X] is an [l]-monoid, and 2/ [l] is rich enough (i.e., it contains at least [BSL], and [BDL] if [l] has residuals ) We express the latter constraint using the following definition *) Definition mx_level l := (if has_div l then BDL+l else BSL+l)%level. Lemma mx_div_level l : DIV ≪ l -> mx_level l ≪ l -> BDL+DIV ≪ l. Proof. rewrite 3lower_spec. unfold mx_level. simpl. case (has_div l). simpl. tauto. intuition discriminate. Qed. Local Hint Extern 0 (_ ≪ _) => solve_lower': typeclass_instances. (* NOTE: the following instance could alternatively be stated as: Instance mx_laws {l h X} {L: laws l X} {Hl: mx_level h ≪ l} u: laws h (mx_ops X u). We don't do this because we want inferred laws instances to be closed (evar free) and "maximal", the inferred instance has the maximal possible level. *) #[export] Instance mx_laws `{L: laws} `{Hl: mx_level l ≪ l} u: laws l (mx_ops X u) |1. Proof. assert (Hl': BSL ≪ l). revert Hl. unfold mx_level. case has_div; intro; solve_lower. constructor; repeat right. intros. apply pw_laws. exact mx_dotA. exact mx_dot1x. exact mx_dotx1. exact mx_dot_leq. exact mx_dotplsx_. exact mx_dotxpls_. exact mx_dot0x_. exact mx_dotx0_. intro. apply mx_cnvdot_. intro. apply mx_cnv_invol. intro. apply mx_cnv_leq. apply mx_cnv_ext. (* TODO: improve constraint resolution *) intro. apply (mx_str_refl (Hl:=lower_mergex _ _ _ Hl' H)). intro. apply (mx_str_cons (Hl:=lower_mergex _ _ _ Hl' H)). intro. 
apply (mx_str_ind_l (Hl:=lower_mergex _ _ _ Hl' H)). apply (mx_str_ind_r (Hl:=lower_mergex _ _ _ Hl' H)). reflexivity. intros Hl'' n m p M N O i j. simpl. unfold mx_dot, mx_cnv. apply (lower_mergex _ _ _ Hl'') in Hl'. clear Hl Hl''. rewrite capsupx. setoid_rewrite capxsup. setoid_rewrite dotxsum. apply sup_leq; trivial. intro i'. rewrite <- (leq_xsup _ _ i) by apply in_seq. apply capdotx. intro Hl''. pose proof (mx_div_level _ Hl'' Hl). clear Hl Hl' Hl''. intros. simpl. unfold mx_ldv, mx_dot. setoid_rewrite sup_spec. setoid_rewrite inf_spec. setoid_rewrite ldv_spec. clear. split; auto using in_seq. intro Hl''. pose proof (mx_div_level _ Hl'' Hl). clear Hl Hl' Hl''. intros. simpl. unfold mx_rdv, mx_dot. setoid_rewrite sup_spec. setoid_rewrite inf_spec. setoid_rewrite rdv_spec. clear. split; auto using in_seq. Qed. relation-algebra-v.1.7.9/theories/matrix_ext.v000066400000000000000000000203431440504774100214110ustar00rootroot00000000000000(** * matrix_ext: additional properties of matrices *) Require Import kleene normalisation ordinal sups. Require Export matrix. Set Implicit Arguments. (** * [mx_scal] is an homomorphism *) #[export] Instance mx_scal_leq `{lattice.laws}: Proper (leq ==> leq) (@mx_scal X). Proof. intros ? ? H'. apply H'. Qed. #[export] Instance mx_scal_weq `{lattice.laws}: Proper (weq ==> weq) (@mx_scal X) := op_leq_weq_1. Lemma mx_scal_zer `{lattice.laws}: mx_scal bot ≡ bot. Proof. reflexivity. Qed. Lemma mx_scal_one `{laws} n: mx_scal 1 ≡ one n. Proof. reflexivity. Qed. Lemma mx_scal_pls `{lattice.laws} (M N: mx X 1 1): mx_scal (M ⊔ N) ≡ mx_scal M ⊔ mx_scal N. Proof. reflexivity. Qed. Lemma mx_scal_dot `{laws} `{BOT+CUP ≪ l} u (M N: mx (X u u) 1 1): mx_scal (M ⋅ N) ≡ mx_scal M ⋅ mx_scal N. Proof. apply cupxb. Qed. Lemma mx_scal_str `{laws} `{BKA ≪ l} u (M: mx (X u u) 1 1): mx_scal (M^*) ≡ (mx_scal M)^*. Proof. apply str_weq. unfold mx_scal, sub00_mx, tsub_mx, lsub_mx. simpl. setoid_rewrite ord0_unique. apply cupxb. Qed. (** * [scal_mx] preserves inclusions/equalities *) #[export] Instance scal_mx_leq `{lattice.laws}: Proper (leq ==> leq) (@scal_mx X). Proof. now repeat intro. Qed. #[export] Instance scal_mx_weq `{lattice.laws}: Proper (weq ==> weq) (@scal_mx X) := op_leq_weq_1. (** * extracting components of block matrices *) Lemma mx_tsub_col `{lattice.laws} n1 n2 m M1 M2: tsub_mx (@col_mx X n1 n2 m M1 M2) ≡ M1. Proof. intros i j. unfold tsub_mx, col_mx. now rewrite split_lshift. Qed. Lemma mx_bsub_col `{lattice.laws} n1 n2 m M1 M2: bsub_mx (@col_mx X n1 n2 m M1 M2) ≡ M2. Proof. intros i j. unfold bsub_mx, col_mx. now rewrite split_rshift. Qed. Lemma mx_lsub_row `{lattice.laws} n m1 m2 M1 M2: lsub_mx (@row_mx X n m1 m2 M1 M2) ≡ M1. Proof. intros i j. unfold lsub_mx, row_mx. now rewrite split_lshift. Qed. Lemma mx_rsub_row `{lattice.laws} n m1 m2 M1 M2: rsub_mx (@row_mx X n m1 m2 M1 M2) ≡ M2. Proof. intros i j. unfold rsub_mx, row_mx. now rewrite split_rshift. Qed. Lemma mx_sub00_blk `{lattice.laws} n1 n2 m1 m2 a b c d: sub00_mx (@blk_mx X n1 n2 m1 m2 a b c d) ≡ a. Proof. setoid_rewrite mx_tsub_col. apply mx_lsub_row. Qed. Lemma mx_sub01_blk `{lattice.laws} n1 n2 m1 m2 a b c d: sub01_mx (@blk_mx X n1 n2 m1 m2 a b c d) ≡ b. Proof. setoid_rewrite mx_tsub_col. apply mx_rsub_row. Qed. Lemma mx_sub10_blk `{lattice.laws} n1 n2 m1 m2 a b c d: sub10_mx (@blk_mx X n1 n2 m1 m2 a b c d) ≡ c. Proof. setoid_rewrite mx_bsub_col. apply mx_lsub_row. Qed. Lemma mx_sub11_blk `{lattice.laws} n1 n2 m1 m2 a b c d: sub11_mx (@blk_mx X n1 n2 m1 m2 a b c d) ≡ d. Proof. setoid_rewrite mx_bsub_col. 
apply mx_rsub_row. Qed. (** sub-matrices of the empty matrix are empty *) Lemma blk_mx_0 `{laws} u n1 n2 m1 m2 a b c d: @blk_mx (X u u) n1 n2 m1 m2 a b c d ≡ 0 -> a ≡0 /\ b ≡0 /\ c ≡0 /\ d ≡0. Proof. intro Z. split; [|split; [|split]]. rewrite <-(mx_sub00_blk a b c d). intros ? ?. apply Z. rewrite <-(mx_sub01_blk a b c d). intros ? ?. apply Z. rewrite <-(mx_sub10_blk a b c d). intros ? ?. apply Z. rewrite <-(mx_sub11_blk a b c d). intros ? ?. apply Z. Qed. (** * Kleene star of a block matrix *) Section h. Context `{L:laws} `{Hl:BKA ≪ l} (u: ob X). Instance mx_bka_laws: laws BKA (mx_ops X u) := mx_laws (L:=lower_laws) _. Lemma mx_str_blk' n m (M: mx (X u u) (n+m) (n+m)): M^* ≡ mx_str_build X u n m (mx_str _ _ _) (mx_str _ _ _) M. Proof. apply str_unique'. apply mx_str_build_unfold_l; apply mx_str_unfold_l. apply mx_str_build_ind_l; intros ? ? ?; apply mx_str_ind_l. Qed. (** general result *) Lemma mx_str_blk n1 n2 (a: mx (X u u) n1 n1) (b: mx (X u u) n1 n2) (c: mx (X u u) n2 n1) (d: mx (X u u) n2 n2): let e := d^* in let f := (a+(b⋅e)⋅c)^* in (blk_mx a b c d)^* ≡ blk_mx f (f⋅(b⋅e)) ((e⋅c)⋅f) (e+(e⋅c⋅f)⋅(b⋅e)). Proof. intros e f. rewrite mx_str_blk'. unfold mx_str_build. ra_fold (mx_ops X). now rewrite mx_sub00_blk, mx_sub01_blk, mx_sub10_blk, mx_sub11_blk. Qed. (** specialisation to trigonal block matrices *) Lemma mx_str_trigonal n1 n2 (a: mx (X u u) n1 n1) (b: mx (X u u) n1 n2) (d: mx (X u u) n2 n2): (blk_mx a b 0 d)^* ≡ blk_mx (a^*) (a^*⋅(b⋅d^*)) 0 (d^*). Proof. rewrite mx_str_blk. apply blk_mx_weq; ra. Qed. (** and to diagonal block matrices *) Lemma mx_str_diagonal n1 n2 (a: mx (X u u) n1 n1) (d: mx (X u u) n2 n2): (blk_mx a 0 0 d)^* ≡ blk_mx (a^*) 0 0 (d^*). Proof. rewrite mx_str_trigonal. apply blk_mx_weq; trivial; ra. Qed. Lemma mx_str_1 (M: mx (X u u) 1 1): M^* ≡ scal_mx ((mx_scal M)^*). Proof. intros i j. setoid_rewrite ord0_unique. simpl. unfold mx_str_build, blk_mx, col_mx, row_mx, ordinal.split; simpl. unfold mx_scal, scal_mx, mx_dot, sub00_mx, tsub_mx, lsub_mx; simpl. setoid_rewrite ord0_unique. ra. Qed. (** * induction schemes for proving properties of the Kleene star of a matrix *) (** (used to show that epsilon and derivatives commute with matrix star in [rmx]) *) Lemma mx_str_ind (P: forall n, mx (X u u) n n -> mx (X u u) n n -> Prop): (forall n, Proper (weq ==> weq ==> iff) (P n)) -> (forall M, P O M M) -> (forall M, P _ M (scal_mx ((mx_scal M)^*))) -> (forall n m, (forall M, P n M (M^*)) -> (forall M, P m M (M^*)) -> forall M, P _ M (mx_str_build _ _ n m (fun M => M^*) (fun M => M^*) M)) -> forall n M, P n M (M^*). Proof. intros HP HO H1 Hplus n M. induction n as [|n IHn]. apply HO. change (M^*) with (mx_str _ _ _ M). unfold mx_str, mx_str_build. ra_fold (mx_ops X). setoid_rewrite <-mx_str_1. revert M; refine (Hplus (S O) n _ _); intro M. rewrite mx_str_1. apply H1. apply IHn. Qed. Lemma mx_str_ind' (P: forall n, mx (X u u) n n -> mx (X u u) n n -> Prop): (forall n, Proper (weq ==> weq ==> iff) (P n)) -> (forall M, P O M M) -> (forall M, P _ M (scal_mx ((mx_scal M)^*))) -> (forall n m a b c d, let e := d^* in let be := b⋅e in let ec := e⋅c in let f := (a+be⋅c)^* in let fbe := f⋅be in let ecf := ec⋅f in P m d e -> P n (a+be⋅c) f -> P _ (blk_mx a b c d) (blk_mx f fbe ecf (e+ecf⋅be))) -> forall n M, P n M (M^*). Proof. intros HP HO H1 Hplus. apply (mx_str_ind P HP HO H1). intros n m Hn Hm M. rewrite (to_blk_mx M) at 1. now apply Hplus. Qed. End h. 
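(** Illustration (ours): on a block-diagonal matrix with scalar blocks,
    the star formula specialises to computing the star of each diagonal
    entry; a proof sketch along these lines, combining
    [mx_str_diagonal] with [mx_str_1], should work: *)
(*
Goal forall `{L: laws} `{Hl: BKA ≪ l} (u: ob X) (x y: X u u),
  (blk_mx (scal_mx x) 0 0 (scal_mx y))^*
    ≡ blk_mx (scal_mx (x^*)) 0 0 (scal_mx (y^*)).
Proof. intros. rewrite mx_str_diagonal, 2mx_str_1. reflexivity. Abort.
*)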
(** * pointwise extension of a funcion to matrices *) Definition mx_map X Y (f: X -> Y) n m (M: mx X n m): mx Y n m := fun i j => f (M i j). #[export] Instance mx_map_leq {X Y: lattice.ops} {f: X -> Y} {Hf: Proper (leq ==> leq) f} n m: Proper (leq ==> leq) (@mx_map _ _ f n m). Proof. intros M N H i j. apply Hf, H. Qed. #[export] Instance mx_map_weq {X Y: lattice.ops} {f: X -> Y} {Hf: Proper (weq ==> weq) f} n m: Proper (weq ==> weq) (@mx_map _ _ f n m). Proof. intros M N H i j. apply Hf, H. Qed. Lemma mx_map_blk {X Y l} {HY: lattice.laws l Y} (f: X -> Y) n1 n2 m1 m2 a b c d: mx_map f (@blk_mx _ n1 n2 m1 m2 a b c d) ≡ blk_mx (mx_map f a) (mx_map f b) (mx_map f c) (mx_map f d). Proof. intros i j. unfold mx_map, blk_mx, col_mx, row_mx. case split; case split; reflexivity. Qed. Lemma mx_map_scal {X Y} (f: X -> Y) x: mx_map f (scal_mx x) = scal_mx (f x). Proof. reflexivity. Qed. Lemma scal_mx_map {X} {Y: lattice.ops} (f: X -> Y) M: f (mx_scal M) = mx_scal (mx_map f M). Proof. reflexivity. Qed. (** * `functional' matrices, with exactly one [z] per line, and [0] everywhere else *) Definition mx_fun {X: lattice.ops} n m f z: mx X n m := fun x y => if eqb_ord y (f x) then z else bot. Lemma mx_dot_fun `{laws} `{BSL ≪ l} u n m f z p (M: mx (X u u) m p) i j: (mx_fun (n:=n) f z ⋅ M) i j ≡ z ⋅ M (f i) j. Proof. simpl. unfold mx_dot. apply antisym. apply leq_supx. intros j' _. unfold mx_fun. case eqb_ord_spec. intros ->. ra. intros _. ra. rewrite <- (leq_xsup _ _ (f i)). 2: apply in_seq. unfold mx_fun. now rewrite eqb_refl. Qed. Lemma mx_dot_kfun1 `{laws} `{BSL ≪ l} u n m i p (M: mx (X u u) m p): (mx_fun (n:=n) (fun _ => i) 1 ⋅ M) ≡ fun _ j => M i j. Proof. intros j k. rewrite mx_dot_fun. apply dot1x. Qed. Lemma mx_map_fun {X Y: lattice.ops} {l} {HY: lattice.laws l Y} n m f z g: g bot ≡ bot -> mx_map g (@mx_fun X n m f z) ≡ @mx_fun Y n m f (g z). Proof. intros Hg i j. unfold mx_map, mx_fun. now case eqb_ord. Qed. relation-algebra-v.1.7.9/theories/monoid.v000066400000000000000000000517321440504774100205200ustar00rootroot00000000000000(** * monoid: typed structures, from ordered monoids to residuated Kleene allegories *) (** We define here all (typed) structures ranging from partially ordered monoids to residuated Kleene allegories *) Require Export lattice. (** * Monoid operations *) (** The following class packages an ordered typed monoid (i.e., a category enriched over a partial order) together with all kinds of operations it can come with: iterations, converse, residuals. We use dummy values when working in structures lacking some operations. Like for [lattice.ops], we use a Class but we mainly exploit Canonical structures inference mechanism. *) Universe M. Class ops := mk_ops { ob: Type@{M}; (** objects of the category *) mor: ob -> ob -> lattice.ops; (** morphisms (each homset is a partially ordered structure) *) dot: forall n m p, mor n m -> mor m p -> mor n p; (** composition *) one: forall n, mor n n; (** identity *) itr: forall n, mor n n -> mor n n; (** strict iteration (transitive closure) *) str: forall n, mor n n -> mor n n; (** Kleene star (reflexive transitive closure) *) cnv: forall n m, mor n m -> mor m n; (** converse (transposed relation) *) ldv: forall n m p, mor n m -> mor n p -> mor m p; (** left residual/factor/division *) rdv: forall n m p, mor m n -> mor p n -> mor p m (** right residual/factor/division *) }. Coercion mor: ops >-> Funclass. Arguments ob : clear implicits. (** Hints for simpl *) Arguments mor {ops} n m / : simpl nomatch. 
Arguments dot {ops} n m p (x y)%ra / : simpl nomatch. Arguments one {ops} n / : simpl nomatch. Arguments itr {ops} n (x)%ra / : simpl nomatch. Arguments str {ops} n (x)%ra / : simpl nomatch. Arguments cnv {ops} n m (x)%ra / : simpl nomatch. Arguments ldv {ops} n m p (x y)%ra / : simpl nomatch. Arguments rdv {ops} n m p (x y)%ra / : simpl nomatch. (** Notations (note that "+" and "∩" are just specialisations of the notations "⊔" and "⊓", when these operations actually come from a monoid) *) (** ∩ : \cap (company-coq) or INTERSECTION (0x2229) ⋅ : \cdot (company coq) or DOT OPERATOR (0x22c5) *) Notation "x ⋅ y" := (dot _ _ _ x y) (left associativity, at level 25, format "x ⋅ y"): ra_terms. Notation "x + y" := (@cup (mor _ _) x y) (left associativity, at level 50): ra_terms. Notation "x ∩ y" := (@cap (mor _ _) x y) (left associativity, at level 40): ra_terms. Notation "1" := (one _): ra_terms. Notation zer n m := (@bot (mor n m)). Notation top' n m := (@top (mor n m)) (only parsing). Notation "0" := (zer _ _): ra_terms. Notation "x °" := (cnv _ _ x) (left associativity, at level 5, format "x °"): ra_terms. Notation "x ^+" := (itr _ x) (left associativity, at level 5, format "x ^+"): ra_terms. Notation "x ^*" := (str _ x) (left associativity, at level 5, format "x ^*"): ra_terms. Notation "x -o y" := (ldv _ _ _ x y) (right associativity, at level 60): ra_terms. Notation "y o- x" := (rdv _ _ _ x y) (left associativity, at level 61): ra_terms. (** Like for [lattice.ops], we declare most projections as Opaque for typeclass resolution, to save on compilation time. *) #[export] Typeclasses Opaque (* ob mor *) dot one str cnv ldv rdv. Set Implicit Arguments. Unset Strict Implicit. (** * Monoid laws (axioms) *) (** [laws l X] provides the laws corresponding to the various operations of [X], provided these operations belong to the level [l]. For instance, the specification of Kleene star ([str]) is available only if the level contains [STR]. Note that [l] indirectly specifies which lattice operations are available on each homset, via the field [lattice_laws]. We add additional properties when needed (e.g., [dotplsx_]: composition ([dot]) distribute over sums ([pls]), provided there are sums) The partially ordered categorical structure (leq,weq,dot,one) is always present. Like for [lattices.laws], some axioms end with an underscore, either because they can be strengthened to an equality (e.g., [cnvdot_]), or because they become derivable in presence of other axiomes (e.g., [dotx1_]), or both (e.g., [dotplsx_]). Unlike for operations ([ops]), laws are actually inferred by typeclass resolution. 
*) Class laws (l: level) (X: ops) := { lattice_laws:> forall n m, lattice.laws l (X n m); (** po-monoid laws *) dotA: forall n m p q (x: X n m) y (z: X p q), x⋅(y⋅z) ≡ (x⋅y)⋅z; dot1x: forall n m (x: X n m), 1⋅x ≡ x; dotx1_: CNV ≪ l \/ forall n m (x: X m n), x⋅1 ≡ x; dot_leq_: DIV ≪ l \/ forall n m p, Proper (leq ==> leq ==> leq) (dot n m p); (** slo-monoid laws (distribution of ⋅ over + and 0) *) dotplsx_ `{CUP ≪ l}: DIV ≪ l \/ forall n m p (x y: X n m) (z: X m p), (x+y)⋅z ≦ x⋅z+y⋅z; dotxpls_ `{CUP ≪ l}: DIV ≪ l \/ CNV ≪ l \/ forall n m p (x y: X m n) (z: X p m), z⋅(x+y) ≦ z⋅x+z⋅y; dot0x_ `{BOT ≪ l}: DIV ≪ l \/ forall n m p (x: X m p), 0⋅x ≦ zer n p; dotx0_ `{BOT ≪ l}: DIV ≪ l \/ CNV ≪ l \/ forall n m p (x: X p m), x⋅0 ≦ zer p n; (** converse laws *) cnvdot_ `{CNV ≪ l}: forall n m p (x: X n m) (y: X m p), (x⋅y)° ≦ y°⋅x°; cnv_invol `{CNV ≪ l}: forall n m (x: X n m), x°° ≡ x; cnv_leq `{CNV ≪ l}:>forall n m, Proper (leq ==> leq) (cnv n m); cnv_ext_ `{CNV ≪ l}: CAP ≪ l \/ forall n m (x: X n m), x ≦ x⋅x°⋅x; (** star laws *) str_refl `{STR ≪ l}: forall n (x: X n n), 1 ≦ x^*; str_cons `{STR ≪ l}: forall n (x: X n n), x⋅x^* ≦ x^*; str_ind_l `{STR ≪ l}: forall n m (x: X n n) (z: X n m), x⋅z ≦ z -> x^*⋅z ≦ z; str_ind_r_`{STR ≪ l}: DIV ≪ l \/ CNV ≪ l \/ forall n m (x: X n n) (z: X m n), z⋅x ≦ z -> z⋅x^* ≦ z; itr_str_l `{STR ≪ l}: forall n (x: X n n), x^+ ≡ x⋅x^*; (** modularity law *) capdotx `{AL ≪ l}: forall n m p (x: X n m) (y: X m p) (z: X n p), (x⋅y) ∩ z ≦ x⋅(y ∩ (x°⋅z)); (** left and right residuals *) ldv_spec `{DIV ≪ l}: forall n m p (x: X n m) (y: X n p) z, z ≦ x -o y <-> x⋅z ≦ y; rdv_spec `{DIV ≪ l}: forall n m p (x: X m n) (y: X p n) z, z ≦ y o- x <-> z⋅x ≦ y }. (** * Basic properties *) #[export] Instance dot_leq `{laws}: forall n m p, Proper (leq ==> leq ==> leq) (dot n m p). Proof. destruct dot_leq_. 2: assumption. intros n m p x x' Hx y y' Hy. rewrite <-rdv_spec, Hx, rdv_spec. rewrite <-ldv_spec, Hy, ldv_spec. reflexivity. Qed. #[export] Instance dot_weq `{laws} n m p: Proper (weq ==> weq ==> weq) (dot n m p) := op_leq_weq_2. (** ** Basic properties of the converse operation *) #[export] Instance cnv_weq `{laws} `{CNV ≪ l} n m: Proper (weq ==> weq) (cnv n m) := op_leq_weq_1. Lemma cnv_leq_iff `{laws} `{CNV ≪ l} n m (x y: X n m): x° ≦ y° <-> x ≦ y. Proof. split. intro E. apply cnv_leq in E. now rewrite 2cnv_invol in E. apply cnv_leq. Qed. Lemma cnv_leq_iff' `{laws} `{CNV ≪ l} n m (x: X n m) y: x ≦ y° <-> x° ≦ y. Proof. now rewrite <- cnv_leq_iff, cnv_invol. Qed. Lemma cnv_weq_iff `{laws} `{CNV ≪ l} n m (x y: X n m): x° ≡ y° <-> x ≡ y. Proof. now rewrite 2weq_spec, 2cnv_leq_iff. Qed. Lemma cnv_weq_iff' `{laws} `{CNV ≪ l} n m (x: X n m) y: x ≡ y° <-> x° ≡ y. Proof. now rewrite <-cnv_weq_iff, cnv_invol. Qed. (** simple tactic to move converse from one side to the other of an (in)equation *) Ltac cnv_switch := first [ rewrite cnv_leq_iff | rewrite cnv_leq_iff' | rewrite <-cnv_leq_iff' | rewrite <-cnv_leq_iff | rewrite cnv_weq_iff | rewrite cnv_weq_iff' | rewrite <-cnv_weq_iff' | rewrite <-cnv_weq_iff ]. Lemma cnvdot `{laws} `{CNV ≪ l} n m p (x: X n m) (y: X m p): (x⋅y)° ≡ y°⋅x°. Proof. apply antisym. apply cnvdot_. cnv_switch. now rewrite cnvdot_, 2cnv_invol. Qed. Lemma cnv1 `{laws} `{CNV ≪ l} n: (one n)° ≡ 1. Proof. rewrite <- (dot1x (1°)). cnv_switch. now rewrite cnvdot, cnv_invol, dot1x. Qed. Lemma cnvpls `{laws} `{CNV+CUP ≪ l} n m (x y: X n m): (x+y)° ≡ x°+y°. Proof. apply antisym. cnv_switch. apply leq_cupx; cnv_switch; lattice. apply leq_cupx; cnv_switch; lattice. Qed. 
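(** Example (ours, not part of the original file): [cnv_switch] in
    action — a converse can be moved from one side of an inequality to
    the other, after which [cnvdot] and [cnv_invol] finish the job. *)
(*
Goal forall `{laws} `{CNV ≪ l} n (x y z: X n n),
  x ≦ y°⋅z -> x° ≦ z°⋅y.
Proof. intros. cnv_switch. now rewrite cnvdot, cnv_invol. Abort.
*)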
Lemma cnvcap `{laws} `{AL ≪ l} n m (x y: X n m): (x ∩ y)° ≡ x° ∩ y°. Proof. apply antisym. apply leq_xcap; apply cnv_leq; lattice. cnv_switch. apply leq_xcap; cnv_switch; lattice. Qed. Lemma cnv0 `{laws} `{CNV+BOT ≪ l} n m: (zer n m)° ≡ 0. Proof. apply antisym; [cnv_switch|]; apply leq_bx. Qed. Lemma cnvtop `{laws} `{CNV+TOP ≪ l} n m: (top: X n m)° ≡ top. Proof. apply antisym; [|cnv_switch]; apply leq_xt. Qed. Lemma cnvneg `{laws} `{CNV+BL ≪ l} n m (x: X n m): (neg x)° ≡ neg (x°). Proof. apply neg_unique. rewrite <-cnvpls, cupC, cupneg. now rewrite cnvtop. rewrite <-cnvcap, capC, capneg. now rewrite cnv0. Qed. (** ** Basic properties of composition *) Lemma dotx1 `{laws} n m (x: X m n): x⋅1 ≡ x. Proof. destruct dotx1_; trivial. cnv_switch. now rewrite cnvdot, cnv1, dot1x. Qed. Lemma dotplsx `{laws} `{CUP ≪ l} n m p (x y: X n m) (z: X m p): (x+y)⋅z ≡ x⋅z+y⋅z. Proof. apply antisym. 2: apply leq_cupx; apply dot_leq; lattice. destruct dotplsx_ as [Hl|E]. 2: apply E. rewrite <-rdv_spec. apply leq_cupx; rewrite rdv_spec; lattice. Qed. Lemma dotxpls `{laws} `{CUP ≪ l} n m p (x y: X m n) (z: X p m): z⋅(x+y) ≡ z⋅x+z⋅y. Proof. apply antisym. 2: apply leq_cupx; apply dot_leq; lattice. destruct dotxpls_ as [Hl|[Hl|E]]. rewrite <-ldv_spec. apply leq_cupx; rewrite ldv_spec; lattice. cnv_switch. rewrite cnvpls,3cnvdot,cnvpls. apply weq_leq, dotplsx. apply E. Qed. Lemma dot0x `{laws} `{BOT ≪ l} n m p (x: X m p): 0⋅x ≡ zer n p. Proof. apply antisym. 2: apply leq_bx. destruct dot0x_ as [Hl|E]. 2: apply E. rewrite <-rdv_spec. apply leq_bx. Qed. Lemma dotx0 `{laws} `{BOT ≪ l} n m p (x: X p m): x⋅0 ≡ zer p n. Proof. apply antisym. 2: apply leq_bx. destruct dotx0_ as [Hl|[Hl|E]]. rewrite <-ldv_spec. apply leq_bx. cnv_switch. rewrite cnvdot,2cnv0. apply weq_leq, dot0x. apply E. Qed. Lemma dotxcap `{laws} `{CAP ≪ l} n m p (x: X n m) (y z: X m p): x ⋅ (y ∩ z) ≦ (x⋅y) ∩ (x⋅z). Proof. apply leq_xcap; apply dot_leq; lattice. Qed. Lemma cnv_ext `{laws} `{CNV ≪ l} n m (x: X n m): x ≦ x⋅x°⋅x. Proof. destruct cnv_ext_; trivial. transitivity ((x⋅1) ∩ x). rewrite dotx1. lattice. rewrite capdotx, <-dotA. apply dot_leq; lattice. Qed. Lemma capxdot `{laws} `{AL ≪ l} n m p (x: X m n) (y: X p m) (z: X p n): (y⋅x) ∩ z ≦ (y ∩ (z⋅x°))⋅x. Proof. cnv_switch. now rewrite cnvdot, 2cnvcap, 2cnvdot, capdotx. Qed. (** ** Basic properties of left division *) (** only those properties that are required to derive [str_ind_r] out of divisions, see module [factor] for other properties *) Lemma ldv_cancel `{laws} `{DIV ≪ l} n m p (x: X n m) (y: X n p): x⋅(x -o y) ≦ y. Proof. now rewrite <-ldv_spec. Qed. Lemma ldv_trans `{laws} `{DIV ≪ l} n m p q (x: X n m) (y: X n p) (z: X n q): (x -o y)⋅(y -o z) ≦ x -o z. Proof. now rewrite ldv_spec, dotA, 2ldv_cancel. Qed. Lemma leq_ldv `{laws} `{DIV ≪ l} n m (x y: X n m): x ≦ y <-> 1 ≦ x -o y. Proof. now rewrite ldv_spec, dotx1. Qed. Lemma ldv_xx `{laws} `{DIV ≪ l} n m (x: X n m): 1 ≦ x -o x. Proof. now rewrite <-leq_ldv. Qed. #[export] Instance ldv_leq `{laws} `{DIV ≪ l} n m p: Proper (leq --> leq ++> leq) (ldv n m p). Proof. intros x x' Hx y y' Hy. now rewrite ldv_spec, <-Hx, <-Hy, <-ldv_spec. Qed. #[export] Instance ldv_weq `{laws} `{DIV ≪ l} n m p: Proper (weq ==> weq ==> weq) (ldv n m p). Proof. simpl. setoid_rewrite weq_spec. split; apply ldv_leq; tauto. Qed. Lemma cnvldv `{laws} `{CNV+DIV ≪ l} n m p (x: X n m) (y: X n p): (x -o y)° ≡ y° o- x°. Proof. apply from_below. intro z. cnv_switch. rewrite ldv_spec. cnv_switch. rewrite cnvdot, cnv_invol. now rewrite rdv_spec. Qed. 
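(** Example (ours): the Galois connection [ldv_spec] gives residual
    facts almost for free; for instance, every [x] lies below the left
    residual [y -o y⋅x]. *)
(*
Goal forall `{laws} `{DIV ≪ l} n m p (x: X n m) (y: X p n),
  x ≦ y -o y⋅x.
Proof. intros. now rewrite ldv_spec. Abort.
*)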
(** ** Schroeder rules *) Lemma Schroeder_ `{laws} `{BL+CNV ≪ l} n m p (x : X n m) (y : X m p) (z : X n p): x°⋅!z ≦ !y -> x⋅y ≦ z. Proof. intro E. apply leq_cap_neg in E. rewrite negneg in E. apply leq_cap_neg. now rewrite capdotx, capC, E, dotx0. Qed. Lemma Schroeder_l `{laws} `{BL+CNV ≪ l} n m p (x : X n m) (y : X m p) (z : X n p): x⋅y ≦ z <-> x°⋅!z ≦ !y. Proof. split. 2: apply Schroeder_. intro. apply Schroeder_. now rewrite 2negneg, cnv_invol. Qed. (** ** Basic properties of Kleene star *) (** (more properties in [kleene]) *) Lemma str_ext `{laws} `{STR ≪ l} n (x: X n n): x ≦ x^*. Proof. now rewrite <-str_cons, <-str_refl, dotx1. Qed. Lemma str_ind_l' `{laws} `{STR ≪ l} n m (x: X n n) (y z: X n m): y ≦ z -> x⋅z ≦ z -> x^*⋅y ≦ z. Proof. intro E. rewrite E. apply str_ind_l. Qed. Lemma str_ind_l1 `{laws} `{STR ≪ l} n (x z: X n n): 1 ≦ z -> x⋅z ≦ z -> x^* ≦ z. Proof. rewrite <-(dotx1 (x^*)). apply str_ind_l'. Qed. #[export] Instance str_leq `{laws} `{STR ≪ l} n: Proper (leq ==> leq) (str n). Proof. intros x y E. apply str_ind_l1. apply str_refl. rewrite E. apply str_cons. Qed. #[export] Instance str_weq `{laws} `{STR ≪ l} n: Proper (weq ==> weq) (str n) := op_leq_weq_1. Lemma str_snoc `{laws} `{STR ≪ l} n (x: X n n): x^*⋅x ≦ x^*. Proof. apply str_ind_l'. apply str_ext. apply str_cons. Qed. Lemma str_unfold_l `{laws} `{KA ≪ l} n (x: X n n): x^* ≡ 1+x⋅x^*. Proof. apply antisym. apply str_ind_l1. lattice. rewrite dotxpls. apply leq_cupx. rewrite <-str_refl. lattice. rewrite <-str_cons at 2. lattice. rewrite str_cons, (str_refl x). lattice. Qed. Lemma str_itr `{laws} `{KA ≪ l} n (x: X n n): x^* ≡ 1+x^+. Proof. rewrite itr_str_l. apply str_unfold_l. Qed. Lemma cnvstr_ `{laws} `{CNV+STR ≪ l} n (x: X n n): x^*° ≦ x°^*. Proof. cnv_switch. apply str_ind_l1. now rewrite <-str_refl, cnv1. cnv_switch. rewrite cnvdot, cnv_invol. apply str_snoc. Qed. Lemma str_ldv_ `{laws} `{STR+DIV ≪ l} n m (x: X m n): (x -o x)^* ≦ x -o x. Proof. apply str_ind_l1. apply ldv_xx. apply ldv_trans. Qed. Lemma str_ind_r `{laws} `{STR ≪ l} n m (x: X n n) (z: X m n): z⋅x ≦ z -> z⋅x^* ≦ z. Proof. destruct str_ind_r_ as [Hl|[Hl|?]]. 3: auto. - rewrite <-2ldv_spec. intro E. apply str_leq in E. rewrite E. apply str_ldv_. - intros. cnv_switch. rewrite cnvdot, cnvstr_. apply str_ind_l; cnv_switch; now rewrite cnvdot, 2cnv_invol. Qed. Lemma itr_move `{laws} `{STR ≪ l} n (x: X n n): x ⋅ x^* ≡ x^* ⋅ x. Proof. apply antisym. rewrite <-dot1x, (str_refl x), dotA. apply str_ind_r. now rewrite str_snoc at 1. apply str_ind_l'. now rewrite <-str_refl, dotx1. now rewrite str_cons at 1. Qed. Lemma itr_str_r `{laws} `{STR ≪ l} n (x: X n n): x^+ ≡ x^* ⋅ x. Proof. rewrite itr_str_l. apply itr_move. Qed. (** * Subtyping / weakening *) (** laws that hold at any level [h] hold for all level [k ≪ h] *) Lemma lower_laws {h k} {X} {H: laws h X} {le: k ≪ h}: laws k X. Proof. constructor; [ intros; apply lower_lattice_laws | .. ]; try solve [ apply H | intro; apply H; eauto using lower_trans ]. right. apply dotx1. right. apply dot_leq. intro H'. right. intros. apply weq_leq. apply (lower_trans _ _ _ H') in le. apply dotplsx. intro H'. right. right. intros. apply weq_leq. apply (lower_trans _ _ _ H') in le. apply dotxpls. intro H'. right. intros. apply weq_leq. apply (lower_trans _ _ _ H') in le. apply dot0x. intro H'. right. right. intros. apply weq_leq. apply (lower_trans _ _ _ H') in le. apply dotx0. intro H'. right. intros. apply (lower_trans _ _ _ H') in le. apply cnv_ext. intro H'. right. right. apply (lower_trans _ _ _ H') in le. apply str_ind_r. 
Qed. (** * Reasoning by duality *) (** dual monoid operations: we reverse all arrows (or morphisms), swap the arguments of [dot], and swap left and right residuals. Note that this corresponds to categorical duality, not to be confused with lattice duality, as defined in [lattice.dual]. *) Definition dual (X: ops) := {| ob := ob X; mor n m := X m n; dot n m p x y := y⋅x; one := one; cnv n m := cnv m n; itr := itr; str := str; ldv := rdv; rdv := ldv |}. Notation "X ^op" := (dual X) (at level 1): ra_scope. (** laws on a given structure can be transferred to the dual one *) Lemma dual_laws l X (L: laws l X): laws l X^op. Proof. constructor; simpl; repeat right; intros. apply lattice_laws. symmetry. apply dotA. apply dotx1. apply dot1x. now apply dot_leq. apply weq_leq, dotxpls. apply weq_leq, dotplsx. apply weq_leq, dotx0. apply weq_leq, dot0x. apply weq_leq, cnvdot. apply cnv_invol. now apply cnv_leq. rewrite dotA. apply cnv_ext. apply str_refl. apply str_snoc. now apply str_ind_r. now apply str_ind_l. apply itr_str_r. apply capxdot. apply rdv_spec. apply ldv_spec. Qed. (** this gives us a tactic to prove properties by categorical duality *) Lemma dualize {h} {P: ops -> Prop} (L: forall l X, laws l X -> h ≪ l -> P X) {l X} {H: laws l X} {Hl:h ≪ l}: P X^op. Proof. eapply L. apply dual_laws, H. assumption. Qed. Ltac dual x := apply (dualize x). (** the following are examples of the benefits of such dualities *) #[export] Instance rdv_leq `{laws} `{DIV ≪ l} n m p: Proper (leq --> leq ++> leq) (rdv n m p). Proof. dual @ldv_leq. Qed. #[export] Instance rdv_weq `{laws} `{DIV ≪ l} n m p: Proper (weq ==> weq ==> weq) (rdv n m p). Proof. dual @ldv_weq. Qed. Lemma cnvrdv `{laws} `{CNV+DIV ≪ l} n m p (x: X m n) (y: X p n): (y o- x)° ≡ x° -o y°. Proof. dual @cnvldv. Qed. Lemma dotcapx `{laws} `{CAP ≪ l} n m p (x: X m n) (y z: X p m): (y ∩ z) ⋅ x ≦ (y⋅x) ∩ (z⋅x). Proof. dual @dotxcap. Qed. Lemma Schroeder_r `{laws} `{BL+CNV ≪ l} n m p (x : X n m) (y : X m p) (z : X n p): x⋅y ≦ z <-> !z⋅y° ≦ !x. Proof. dual @Schroeder_l. Qed. (** * Functors (i.e., monoid morphisms) *) Class functor l {X Y: ops} (f': ob X -> ob Y) (f: forall {n m}, X n m -> Y (f' n) (f' m)) := { fn_morphism:> forall n m, morphism l (@f n m); fn_dot: forall n m p (x: X n m) (y: X m p), f (x⋅y) ≡ f x ⋅ f y; fn_one: forall n, f (one n) ≡ 1; fn_itr `{STR ≪ l}: forall n (x: X n n), f (x^+) ≡ (f x)^+; fn_str `{STR ≪ l}: forall n (x: X n n), f (x^*) ≡ (f x)^*; fn_cnv `{CNV ≪ l}: forall n m (x: X n m), f (x°) ≡ (f x)°; fn_ldv `{DIV ≪ l}: forall n m p (x: X n m) (y: X n p), f (x -o y) ≡ f x -o f y; fn_rdv `{DIV ≪ l}: forall n m p (x: X m n) (y: X p n), f (y o- x) ≡ f y o- f x }. (** generating a structure by faithful embedding *) Lemma laws_of_faithful_functor {h l X Y} {L: laws h Y} {Hl: l ≪ h} f' f: @functor l X Y f' f -> (forall n m x y, f n m x ≦ f n m y -> x ≦ y) -> (forall n m x y, f n m x ≡ f n m y -> x ≡ y) -> laws l X. intros Hf Hleq Hweq. assert (Hleq_iff: forall n m x y, f n m x ≦ f n m y <-> x ≦ y). split. apply Hleq. apply fn_leq. assert (Hweq_iff: forall n m x y, f n m x ≡ f n m y <-> x ≡ y). split. apply Hweq. apply fn_weq. assert (L' := @lower_laws _ _ _ L Hl). constructor; repeat right; intro_vars. apply (laws_of_injective_morphism (f n m)); auto using fn_morphism. rewrite <-Hweq_iff, 4fn_dot. apply dotA. rewrite <-Hweq_iff, fn_dot, fn_one. apply dot1x. rewrite <-Hweq_iff, fn_dot, fn_one. apply dotx1. repeat intro. apply Hleq. rewrite 2fn_dot. now apply dot_leq; apply Hleq_iff. apply Hleq. 
now rewrite fn_cup, 3fn_dot, fn_cup, dotplsx. apply Hleq. now rewrite fn_cup, 3fn_dot, fn_cup, dotxpls. apply Hleq. now rewrite fn_dot, 2fn_bot, dot0x. apply Hleq. now rewrite fn_dot, 2fn_bot, dotx0. intro. apply Hleq. now rewrite fn_cnv, 2fn_dot, 2fn_cnv, cnvdot. intro. rewrite <-Hweq_iff, 2fn_cnv. apply cnv_invol. repeat intro. apply Hleq. rewrite 2fn_cnv. now apply cnv_leq; apply Hleq_iff. apply Hleq. rewrite 2fn_dot,fn_cnv. apply cnv_ext. intro. apply Hleq. rewrite fn_one, fn_str. apply str_refl. intro. apply Hleq. rewrite fn_str, fn_dot, fn_str. apply str_cons. intro. rewrite <-2Hleq_iff, 2fn_dot, fn_str. apply str_ind_l. rewrite <-2Hleq_iff, 2fn_dot, fn_str. apply str_ind_r. intro. apply Hweq. rewrite fn_itr, fn_dot, fn_str. apply itr_str_l. intro. apply Hleq. rewrite fn_cap,2fn_dot,fn_cap,fn_dot,fn_cnv. apply capdotx. intro. rewrite <-2Hleq_iff, fn_dot, fn_ldv. apply ldv_spec. intro. rewrite <-2Hleq_iff, fn_dot, fn_rdv. apply rdv_spec. Qed. (** injection from Booleans into monoids (actually a functor, although we don't need it) *) Definition ofbool {X: ops} {n} (a: bool): X n n := if a then 1 else 0. Global Arguments ofbool {_ _} !_ /. (* does not respect the uniform inheritance condition *) (* Coercion ofbool: bool >-> car. *) (** ML modules *) (* loading this one here explicitly enforces the dependency for proper compilation *) Declare ML Module "coq-relation-algebra.common". Declare ML Module "coq-relation-algebra.fold". (** tricks for reification *) Lemma catch_weq `{L: laws} n m (x y: X n m): (let L:=L in x <=[false]= y) -> x ≡ y. Proof. trivial. Defined. Lemma catch_leq `{L: laws} n m (x y: X n m): (let L:=L in x <=[true]= y) -> x ≦ y. Proof. trivial. Defined. Ltac catch_rel := apply catch_weq || apply catch_leq. relation-algebra-v.1.7.9/theories/move.v000066400000000000000000000062451440504774100202000ustar00rootroot00000000000000(** * move: simple tactics allowing to move subterms inside products (by exploiting commutation hypotheses from the context) *) Require Import kat normalisation rewriting kat_tac. Local Infix " ;" := (dot _ _ _) (left associativity, at level 40): ra_terms. Lemma rmov_x_str `{L: monoid.laws} `{Hl: STR ≪ l} {n} (x e: X n n): x;e ≡ e;x -> x;e^* ≡ e^*;x. Proof. apply str_move. Qed. Lemma rmov_x_itr `{L: monoid.laws} `{Hl: STR ≪ l} {n} (x e: X n n): x;e ≡ e;x -> x;e^+ ≡ e^+;x. Proof. apply itr_move. Qed. Lemma rmov_x_pls `{L: monoid.laws} `{Hl: CUP ≪ l} {n m} x y (e f: X n m): x;e ≡ e;y -> x;f ≡ f;y -> x;(e+f) ≡ (e+f);y. Proof. intros. ra_normalise. now apply cup_weq. Qed. Lemma rmov_x_dot `{L: monoid.laws} {n m p} x y z (e: X n m) (f: X m p): x;e ≡ e;y -> y;f ≡ f;z -> x;(e;f) ≡ (e;f);z. Proof. intros He Hf. rewrite dotA, He, <-dotA, Hf. apply dotA. Qed. Lemma rmov_x_1 `{L: monoid.laws} {n} (x: X n n): x;1 ≡ 1;x. Proof. ra. Qed. Lemma rmov_x_0 `{L: monoid.laws} `{Hl:BOT ≪ l} {n m p q} (x: X n m) (y: X p q): x;0 ≡ 0;y. Proof. ra. Qed. Lemma rmov_x_cap `{L: laws} {n} (x: X n n) a b: x;[a] ≡ [a];x -> x;[b] ≡ [b];x -> x;[a ⊓ b] ≡ [a ⊓ b];x. Proof. hkat. Qed. Lemma rmov_x_cup `{L: laws} {n} (x: X n n) a b: x;[a] ≡ [a];x -> x;[b] ≡ [b];x -> x;[a ⊔ b] ≡ [a ⊔ b];x. Proof. hkat. Qed. Lemma rmov_x_neg `{L: laws} {n} (x: X n n) a: x;[a] ≡ [a];x -> x;[!a] ≡ [!a];x. Proof. hkat. Qed. Lemma rmov_inj `{L: laws} {n} (a b: tst n): [a]⋅[b] ≡ [b]⋅[a]. Proof. kat. Qed. 
Ltac solve_rmov := first [ eassumption | symmetry; eassumption | eapply rmov_x_dot | apply rmov_x_pls | apply rmov_x_str | apply rmov_x_itr | apply rmov_x_cap | apply rmov_x_cup | apply rmov_x_neg | apply rmov_inj | apply rmov_x_1 | apply rmov_x_0 ]; solve_rmov. Ltac rmov1 x := rewrite ?dotA; (* rewrite ?(dotA _ _ x); *) match goal with | |- context [@dot ?X ?n ?n ?m x ?e] => let H := fresh "H" in let y := fresh "y" in evar (y: car (X m m)); assert (H: x;e ≡ e;y) by (subst y; solve_rmov); rewrite H; subst y; clear H | |- context [@dot ?X _ ?n ?m (?f;x) ?e] => let H := fresh "H" in let y := fresh "y" in evar (y: car (X m m)); assert (H: x;e ≡ e;y) by (subst y; solve_rmov); rewrite <-(dotA f x e), H; subst y; clear H end. Ltac lmov1 x := rewrite <-?dotA; (* rewrite <-?(dotA x); *) match goal with | |- context [@dot ?X ?m ?n ?n ?e x] => let H := fresh "H" in let y := fresh "y" in evar (y: car (X m m)); assert (H: y;e ≡ e;x) by (subst y; solve_rmov); rewrite <-H; subst y; clear H | |- context [@dot ?X ?m _ _ ?e (x;?f)] => let H := fresh "H" in let y := fresh "y" in evar (y: car (X m m)); assert (H: y;e ≡ e;x) by (subst y; solve_rmov); rewrite (dotA e x f), <-H; subst y; clear H end. (* test Goal forall `{laws} n (p q q' r r': X n n) (a b: tst n), r;[a] ≡ [a];r -> r;p ≡ p;r -> r;q ≡ q;r' -> p;[a];r;p^*;(q;q') ≡ 0. Proof. intros. rmov1 r. rmov1 r. lmov1 r'. lmov1 r. lmov1 r. lmov1 r. Abort. *) relation-algebra-v.1.7.9/theories/nfa.v000066400000000000000000000143701440504774100177740ustar00rootroot00000000000000(** * nfa: Non-deterministic Finite Automata *) Require Import positives comparisons. Require Import kleene regex rmx sums matrix_ext prop normalisation. Require dfa. Set Implicit Arguments. Unset Printing Implicit Defensive. (** * Matricial (non deterministic) finite automata *) (** transitions are labelled with regular expressions *) Record t := mk { n: nat; u: rmx 1 n; M: rmx n n; v: rmx n 1 }. Notation "x ^u" := (u x) (at level 2, left associativity, format "x ^u"). Notation "x ^M" := (M x) (at level 2, left associativity, format "x ^M"). Notation "x ^v" := (v x) (at level 2, left associativity, format "x ^v"). (** formal evaluation of matricial automata into regular expressions *) Definition eval A := mx_scal (A^u ⋅ A^M^* ⋅ A^v). Arguments eval !_ /. (** two important classes of automata: - NFA, for which transitions are labelled by sums of letters, and - NFA with epsilon-transitions, where the sums may also include [1] *) Definition is_nfa e := is_01_mx e^u /\ is_pure_mx e^M /\ is_01_mx e^v. Definition is_enfa e := is_01_mx e^u /\ is_simple_mx e^M /\ is_01_mx e^v. (** characterisation of epsilon for automata with a pure transition matrix *) Lemma epsilon_eval n u (M: rmx n n) v: is_pure_mx M -> (epsilon (mx_scal (u⋅M^*⋅v)) <-> epsilon (mx_scal (u⋅v))). Proof. intro HM. rewrite 2epsilon_iff_reflexive_eps. rewrite 2(scal_mx_map (fun e => eps e)). rewrite 3epsilon_mx_dot, epsilon_mx_str, (epsilon_mx_pure HM). now rewrite str0, dotx1. Qed. (** characterisation of derivatives for NFA *) Lemma deriv_eval a n u (M: rmx n n) v: is_01_mx u -> is_pure_mx M -> is_01_mx v -> deriv a (mx_scal (u⋅M^*⋅v)) ≡ mx_scal (u⋅epsilon_mx (deriv_mx a M)⋅M^*⋅v). (* NB: we use epsilon_mx because [deriv_mx a M] is not necessarily a 01 matrix, even if it is equal to such a matrix *) Proof. intros Hu HM Hv. rewrite (scal_mx_map (deriv a)). apply mx_scal_weq. rewrite 2deriv_mx_dot, deriv_mx_str_strict. 2: apply epsilon_mx_pure; assumption. rewrite <-expand_01_mx by assumption. 
rewrite (deriv_01_mx _ Hu), (deriv_01_mx _ Hv), epsilon_deriv_pure_mx by assumption. ra. Qed. (** * Language of a NFA *) (** (operationally, not through evaluation into regular expressions) *) Fixpoint lang n (M: rmx n n) v u w: Prop := match w with | nil => epsilon (mx_scal (u ⋅ v)) | cons a w => lang M v (u ⋅ epsilon_mx (deriv_mx a M)) w end. (* NB: like above, we have to use [epsilon_mx] because [u ⋅ deriv_mx a M] is only equal to a 01-matrix *) (** the language of the NFA is that obtained by evaluation into regular expressions *) Theorem eval_lang n u M v (H: is_nfa (@mk n u M v)): regex.lang (mx_scal (u ⋅ M^* ⋅ v)) ≡ lang M v u. Proof. unfold regex.lang. intro w. revert u H. induction w; intros u H. unfold derivs. now rewrite epsilon_eval by apply H. unfold derivs. setoid_rewrite <- IHw. clear IHw. revert w. apply lang_weq, deriv_eval; apply H. split. 2: apply H. apply is_01_mx_dot. apply H. apply is_01_mx_epsilon. Qed. (** additional bureaucratic lemmas *) #[export] Instance lang_leq n (M: rmx n n) v: Proper (leq ==> leq) (lang M v). Proof. intros u u' H w. revert u u' H; induction w; intros u u' H; unfold lang; fold lang. apply epsilon_leq. (* TODO: this line should not be needed *) now rewrite H. apply IHw. now rewrite H. Qed. #[export] Instance lang_weq n (M: rmx n n) v: Proper (weq ==> weq) (lang M v) := op_leq_weq_1. #[export] Instance lang_weq' n (M: rmx n n) v: Proper (weq ==> eq ==> iff) (lang M v). Proof. intros ? ? H ? ? <-. now apply lang_weq. Qed. Lemma lang_empty n (u: rmx 1 n) M v: u ≡0 -> lang M v u ≡ bot. Proof. intros Hu w. revert Hu. induction w; intro Hu. simpl; fold_regex. rewrite Hu, dot0x. intuition. simpl. rewrite <-IHw by assumption. now rewrite Hu, dot0x. Qed. (** * Injection of DFA into NFA *) Module dfa. (** injection into NFA *) Definition to_nfa (A: dfa.t): t := mk (mx_fun (fun _ => dfa.u A) 1) (\sum_(a\in dfa.vars A) mx_fun (fun x => dfa.M A x a) (var a)) (fun i _ => ofbool (dfa.v A i)). (** injected DFA are indeed NFA *) Lemma is_nfa_nfa A: is_nfa (to_nfa A). Proof. split. intros ? ?. simpl. unfold mx_fun. case eqb_ord; constructor. split. intros ? ?. simpl. rewrite mx_sup. apply is_pure_sup. intros. unfold mx_fun. case eqb_ord; constructor. intros ? ?. apply is_01_ofbool. Qed. (** evaluation at a given state, into regular expressions *) Notation "A @ i" := (eval (to_nfa (dfa.reroot A i))) (at level 1). (** the language of a DFA coincides with that of the underlying NFA *) Theorem nfa_lang A i: dfa.lang A i ≡ lang ((to_nfa A)^M) ((to_nfa A)^v) (mx_fun (fun _ => i) 1). Proof. intro w. revert i. induction w; intro i; simpl. - rewrite epsilon_iff_reflexive_eps. rewrite (scal_mx_map (fun e => eps e)). change (mx_dot regex_ops regex_tt) with (@dot (mx_ops regex_ops regex_tt)). rewrite (mx_dot_kfun1 (X:=regex_ops)). unfold mx_map, mx_scal. rewrite <-epsilon_iff_reflexive_eps. case dfa.v; reflexivity. - rewrite IHw. clear IHw. change (mx_dot regex_ops regex_tt) with (@dot (mx_ops regex_ops regex_tt)). rewrite (mx_dot_kfun1 (X:=regex_ops)). unfold mx_fun, epsilon_mx, mx_map. setoid_rewrite mx_sup. setoid_rewrite deriv_sup. setoid_rewrite epsilon_sup. split. + intros [Ha H]. eapply lang_leq. 2: apply H. clear w H. intros _ j. rewrite <-(leq_xsup _ _ a Ha). case eqb_ord. 2: apply leq_bx. simpl. now rewrite eqb_refl. + intros H. split. case (List.in_dec (cmp_dec _) a (dfa.vars A)) as [Ha|Ha]. assumption. apply lang_empty in H. elim H. clear H. intros o j. apply sup_b. intros b Hb. case eqb_ord. 2: reflexivity. simpl. case eqb_spec; simpl. 2: reflexivity. 
intros <-. elim (Ha Hb). eapply lang_leq. 2: apply H. clear w H. intros _ j. apply leq_supx. intros b _. case eqb_spec. 2: intros; ra. intros ->. simpl. case eqb_spec; simpl. 2: intro; lattice. intros <-. now rewrite eqb_refl. Qed. (** the language of the DFA is that obtained by evaluation into regular expressions *) Corollary eval_lang A i: regex.lang A@i ≡ dfa.lang A i. Proof. setoid_rewrite eval_lang. 2: apply is_nfa_nfa. now rewrite nfa_lang. Qed. End dfa. Coercion dfa.to_nfa: dfa.t >-> t. relation-algebra-v.1.7.9/theories/normalisation.v000066400000000000000000000620401440504774100221040ustar00rootroot00000000000000(** * normalisation: generic normalisation procedure and associated tactics *) Require Import kleene syntax powerfix. Set Implicit Arguments. Section n. Variables (s t: positive -> positive). Notation expr := (expr s t). Local Arguments weq {_} _ _: simpl never. Local Arguments leq {_} _ _: simpl never. Local Hint Extern 0 (_ ≪ _) => solve_lower || solve_lower': typeclass_instances. Ltac fold_expr l := ra_fold (expr_ops s t l). (** * normalisation procedure We normalise expressions in such a way that: - sums are flattened, left-associated, ordered (without duplicates), and without bottom or top elements - intersections are simply cleaned w.r.t bottom and top elements - products are left-associated, and possibly distributed over sums (specified using a boolean parameter) - unit and annihilators are ruled out as much as possible - strict iterations and Kleene stars do not contain inner iterations - converses are pushed to the leaves Some additional simplifications are possible, we plan to integrate them lazily, on demand. The normalisation function is defined in such a way that - the level of the resulting expression cannot increase w.r.t. the level of the initial one, and - it yields an expression which is equal to the starting one, *at the level of that expression* (in particular unions cannot be used to simplify lonely iterations) *) (** ** normalising sums *) Fixpoint insert_pls n m (x: expr n m): expr n m -> expr n m := match x with | e_zer _ _ => fun y => y | e_top _ _ => fun y => top | e_pls x z => fun y => match cmp y z with | Lt => insert_pls x y+z | Gt => x+z+y | Eq => x+z end | x => fun y => match cmp x y with | Lt => x+y | Gt => y+x | Eq => y end end%ast. Fixpoint pls' n m (y: expr n m): expr n m -> expr n m := match y with | e_zer _ _ => fun x => x | e_top _ _ => fun y => top | e_pls y z => fun x => insert_pls (pls' y x) z | y => fun x => insert_pls x y end%ast. Lemma insert_pls_level n m (x y: expr n m): e_level (insert_pls x y) ≪ CUP+e_level x+e_level y. Proof. induction x; simpl; try case cmp_spec; simpl e_level; intros; try solve_lower'. rewrite IHx1. solve_lower'. Qed. Lemma insert_pls_pls: forall l n m (x y: expr n m) {Hl: CUP+e_level x ≪ l}, insert_pls x y ==_[l] y+x. Proof. assert (D: forall n m (x y: expr n m) l, CUP+e_level x ≪ l -> y+x ==_[l] match cmp x y with Lt => x+y | Gt => y+x | Eq => y end). intros. case cmp_spec; intros; try subst; lattice. intros; symmetry; induction x; simpl e_level in *; try (apply D; solve_lower). lattice. lattice. simpl insert_pls. case cmp_spec; intros. subst. lattice. fold_expr l. rewrite <- IHx1 by solve_lower'. lattice. lattice. Qed. Lemma pls'_level n m (x y: expr n m): e_level (pls' x y) ≪ CUP+e_level x+e_level y. Proof. induction x; simpl pls'; simpl e_level; rewrite ?insert_pls_level, ?IHx1, ?IHx; solve_lower'. Qed. Lemma pls'pls: forall l n m (x y: expr n m) {Hl: CUP+e_level x+e_level y ≪ l}, pls' x y ==_[l] x+y. 
Proof. induction x; simpl e_level; simpl pls'; intros y Hl; try (subst; now rewrite insert_pls_pls by solve_lower'). lattice. lattice. rewrite insert_pls_pls, IHx1 by (rewrite ?pls'_level; solve_lower'). lattice. Qed. Lemma pls'x0_level n m (x: expr n m): e_level (pls' x 0) ≪ e_level x. Proof. induction x; try reflexivity. simpl. now rewrite insert_pls_level, IHx1. Qed. Lemma pls'x0 n m (x: expr n m) l `{CUP+e_level x ≪ l}: pls' x 0 ==_[l] x. Proof. induction x; try reflexivity. simpl pls'. simpl e_level in *. rewrite insert_pls_pls, IHx1. apply cupC. solve_lower'. rewrite pls'x0_level. solve_lower'. Qed. (** ** normalising intersections we simply remove zeros and tops (TODO: normalise like for sums, distribute over sums) *) Definition cap' n m (x y: expr n m) := if is_top x then y else if is_top y then x else if is_zer x ||| is_zer y then e_zer _ _ else e_cap x y. Lemma cap'cap l n m (x y: expr n m) {Hl: CAP+e_level x+e_level y ≪ l}: cap' x y ==_[l] x ∩ y. Proof. symmetry. unfold cap'. revert Hl. case is_top_spec. intros. apply captx. case is_top_spec. intros. apply capxt. case is_zer_spec. intros. apply capbx. case is_zer_spec. intros. apply capxb. reflexivity. Qed. Lemma cap'_level n m (x y: expr n m): e_level (cap' x y) ≪ CAP+e_level x+e_level y. Proof. unfold cap'. case is_top_spec. solve_lower'. case is_top_spec. solve_lower'. case is_zer_spec. solve_lower'. case is_zer_spec; solve_lower'. Qed. (** ** normalising products (without ones or zeros, left associative, and possibly distributed over sums) *) (** whether we distribute the products over sums is controlled by the following parameter *) Variable distribute: bool. Ltac case_distribute := match goal with |- context[distribute] => case distribute | _ => idtac end. (* [dot_l x y ≡ x⋅y] *) Fixpoint dot_l n m (x: expr n m): forall p, expr m p -> expr n p := match x in syntax.expr _ _ n m return forall p, expr m p -> expr n p with | e_zer _ _ => fun p y => 0 | e_one _ => fun p y => y | e_pls x1 x2 => fun p y => if distribute then pls' (dot_l x1 y) (dot_l x2 y) else (x1+x2)⋅y | x => fun p y => x ⋅ y end%ast. (* [dot_r y x ≡ x⋅y] *) Fixpoint dot_r m p (y: expr m p): forall n, expr n m -> expr n p := match y in syntax.expr _ _ m p return forall n, expr n m -> expr n p with | e_zer _ _ => fun n x => 0 | e_one _ => fun n x => x | e_pls y1 y2 => fun n x => if distribute then pls' (dot_r y1 x) (dot_r y2 x) else dot_l x (y1+y2) | e_dot y z => fun n x => dot_l (dot_r y x) z | y => fun n x => dot_l x y end%ast. Definition dot' n m p (x: expr n m) (y: expr m p) := dot_r y x. Lemma dot_l_level n m p (x: expr n m) (y: expr m p): e_level (dot_l x y) ≪ e_level x + e_level y. Proof. revert p y. induction x; intros q z; simpl dot_l; case_distribute; simpl e_level; rewrite ?pls'_level, ?IHx1, ?IHx2; solve_lower'. Qed. Lemma dot_r_level n m p (x: expr n m) (y: expr m p): e_level (dot_r y x) ≪ e_level x + e_level y. Proof. revert n x. induction y; intros q z; simpl dot_r; case_distribute; simpl e_level; rewrite ?pls'_level, ?dot_l_level, ?IHy1, ?IHy2; solve_lower'. Qed. Lemma dot'_level n m p (x: expr n m) (y: expr m p): e_level (dot' x y) ≪ e_level x + e_level y. Proof. apply dot_r_level. Qed. Lemma dot_l_weq l n m p (x: expr n m) (y: expr m p) {Hl: e_level x + e_level y ≪ l}: x⋅y ==_[l] dot_l x y. Proof. revert p y Hl. induction x; intros q z Hl; simpl dot_l; case_distribute; simpl e_level in Hl; try reflexivity. apply dot0x. apply dot1x. now rewrite dotplsx, pls'pls, <-IHx1, <-IHx2 by (rewrite ?dot_l_level; solve_lower'). Qed. 
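(** Worked example (ours): units and annihilators are erased on the
    fly. Unfolding [dot'] and [dot_r], one computes
      [dot' x (e_one _)   =  x]       and
      [dot' x (e_zer _ _) =  e_zer _ _],
    so that proving the correctness lemma [dot'dot] below in these two
    cases boils down to the laws [dotx1] and [dotx0]. *)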
Lemma dot'dot l n m p (x: expr n m) (y: expr m p) {Hl: e_level y+e_level x ≪ l}: dot' x y ==_[l] x⋅y. Proof. symmetry. unfold dot'. revert n x Hl. induction y; simpl e_level; intros q z Hl; simpl dot_r; case_distribute; try reflexivity; try (apply dot_l_weq; solve_lower'). apply dotx0. apply dotx1. now rewrite dotxpls, pls'pls, <-IHy1, <-IHy2 by (rewrite ?dot_r_level; solve_lower'). now rewrite <-dot_l_weq, <-IHy1, dotA by (rewrite ?dot_r_level; solve_lower'). Qed. (** ** normalising converses by pushing them down towards the leaves *) Fixpoint cnv' n m (x: expr n m): expr m n := match x with | e_zer _ _ => 0 | e_top _ _ => top | e_one _ => 1 | e_pls x y => cnv' x + cnv' y | e_cap x y => cnv' x ∩ cnv' y | e_neg x => ! cnv' x (* TODO: normalise complements *) | e_dot x y => dot' (cnv' y) (cnv' x) (* we need to reverse parentheses *) | e_ldv x y => e_rdv (cnv' x) (cnv' y) (* TODO: normalise residuals? *) | e_rdv x y => e_ldv (cnv' x) (cnv' y) | e_itr x => (cnv' x)^+ | e_str x => (cnv' x)^* | e_cnv x => x | e_var a => (e_var a)° end%ast. Lemma cnv'_level n m (x: expr n m): e_level (cnv' x) ≪ CNV+e_level x. Proof. induction x; simpl cnv'; simpl e_level; rewrite ?dot'_level, ?pls'_level, ?cap'_level, ?IHx1, ?IHx2, ?IHx; solve_lower'. Qed. Lemma cnv'cnv l n m (x: expr n m) {Hl: CNV+e_level x ≪ l}: cnv' x ==_[l] x°. Proof. symmetry. induction x; simpl cnv'; simpl e_level in Hl; rewrite ?dot'dot, ?e_str' by (rewrite ?cnv'_level; solve_lower'). apply cnv0. apply cnvtop. apply cnv1. rewrite cnvpls. apply cup_weq; [apply IHx1|apply IHx2]; solve_lower'. rewrite cnvcap. apply cap_weq; [apply IHx1|apply IHx2]; solve_lower'. rewrite cnvneg. apply neg_weq, IHx. solve_lower'. rewrite cnvdot. apply dot_weq; [apply IHx2|apply IHx1]; solve_lower'. rewrite cnvitr. apply itr_weq, IHx. solve_lower'. rewrite cnvstr. apply str_weq, IHx. solve_lower'. apply cnv_invol. rewrite cnvldv. apply rdv_weq; [apply IHx1|apply IHx2]; solve_lower'. rewrite cnvrdv. apply ldv_weq; [apply IHx1|apply IHx2]; solve_lower'. reflexivity. Qed. (** ** removing toplevel iterations in an iterated sum *) Fixpoint remove n m (x: expr n m): expr n m := match x with | e_itr x => x | e_pls x y => pls' (remove x) (remove y) | x => x end. Definition itr' n (x: expr n n): expr n n := (if is_zer x then 0 else if is_top x then top else (remove x)^+)%ast. Definition str' n (x: expr n n): expr n n := (if is_zer x then 1 else if is_top x then top else (remove x)^*)%ast. Lemma remove_level n m (x: expr n m): e_level (remove x) ≪ e_level x. Proof. induction x; cbn; rewrite ?pls'_level; solve_lower'. Qed. Lemma itr'_level n (x: expr n n): e_level (itr' x) ≪ STR+e_level x. Proof. unfold itr'. case is_zer_spec. reflexivity. case is_top_spec. solve_lower'. cbn. now rewrite remove_level. Qed. Lemma str'_level n (x: expr n n): e_level (str' x) ≪ STR+e_level x. Proof. unfold str'. case is_zer_spec. reflexivity. case is_top_spec. solve_lower'. cbn. now rewrite remove_level. Qed. Lemma remove_spec_dep l n m (x: expr n m): forall (H: n=m) {Hl: STR+e_level x ≪ l}, (cast H eq_refl (remove x))^+ ==_[l] (cast H eq_refl x)^+. Proof. induction x; cbn; trivial; intros H Hl. - subst. cbn. rewrite itr_pls_itr, pls'pls by (rewrite 2remove_level; solve_lower'). rewrite <-(IHx1 eq_refl), <-(IHx2 eq_refl) by solve_lower'. simpl cast. apply itr_pls_itr. - now rewrite 2cast_eq, itr_invol. Qed. Lemma remove_spec l n (x: expr n n) {Hl: STR+e_level x ≪ l}: (remove x)^+ ==_[l] x^+. Proof. apply (remove_spec_dep _ eq_refl). Qed. 
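(** for instance (an illustrative example only): a toplevel strict iteration
    is peeled off by [remove], by mere computation *)
Example remove_itr_example n (x: expr n n): remove (x^+)%ast = x.
Proof. reflexivity. Qed.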
Lemma itr'itr l n (x: expr n n) {Hl: STR+e_level x ≪ l}: itr' x ==_[l] x^+. Proof. symmetry. unfold itr'. revert Hl. case is_zer_spec. intros. apply itr0. case is_top_spec. intros. apply itrtop. intros. symmetry. now apply remove_spec. Qed. Lemma remove_spec_dep' l n m (x: expr n m): forall (H: n=m) {Hl: STR+e_level x ≪ l}, (cast H eq_refl (remove x))^* ==_[l] (cast H eq_refl x)^*. Proof. induction x; cbn; trivial; intros H Hl. - subst. simpl cast. rewrite str_pls_str. rewrite <-(IHx1 eq_refl), <-(IHx2 eq_refl) by solve_lower'. simpl cast. rewrite <-str_pls_str. apply str_weq. apply pls'pls. rewrite 2remove_level; solve_lower'. - rewrite 2cast_eq. apply antisym. apply str_leq. apply itr_ext. apply str_ind_l1. apply str_refl. now rewrite itr_str_l, str_cons, str_trans. Qed. Lemma remove_spec' l n (x: expr n n) {Hl: STR+e_level x ≪ l}: (remove x)^* ==_[l] x^*. Proof. apply (remove_spec_dep' _ eq_refl). Qed. Lemma str'str l n (x: expr n n) {Hl: STR+e_level x ≪ l}: str' x ==_[l] x^*. Proof. symmetry. unfold str'. revert Hl. case is_zer_spec. intros. apply str0. case is_top_spec. intros. apply strtop. intros. symmetry. now apply remove_spec'. Qed. (** ** global normalisation function *) Fixpoint norm n m (x: expr n m): expr n m := match x with | e_zer _ _ => 0 | e_top _ _ => top | e_one _ => 1 | e_pls x y => pls' (norm x) (norm y) | e_cap x y => cap' (norm x) (norm y) | e_neg x => e_neg (norm x) | e_dot x y => dot' (norm x) (norm y) | e_ldv x y => e_ldv (norm x) (norm y) | e_rdv x y => e_rdv (norm x) (norm y) | e_itr x => itr' (norm x) | e_str x => str' (norm x) | e_cnv x => cnv' (norm x) | e_var a => e_var a end%ast. Lemma norm_level n m (x: expr n m): e_level (norm x) ≪ e_level x. Proof. induction x; simpl norm; simpl e_level; rewrite ?dot'_level, ?pls'_level, ?cap'_level, ?cnv'_level, ?itr'_level, ?str'_level, ?IHx1, ?IHx2, ?IHx; solve_lower'. Qed. Lemma norm_weq l n m (x: expr n m) {Hl: e_level x ≪ l}: norm x ==_[l] x. Proof. induction x; simpl norm; simpl e_level in Hl; try reflexivity; rewrite ?pls'pls, ?cap'cap, ?dot'dot, ?itr'itr, ?str'str, ?cnv'cnv, ?e_str' by (rewrite ?norm_level; solve_lower'). apply cup_weq; [apply IHx1|apply IHx2]; solve_lower'. apply cap_weq; [apply IHx1|apply IHx2]; solve_lower'. apply neg_weq, IHx. solve_lower'. apply dot_weq; [apply IHx1|apply IHx2]; solve_lower'. apply itr_weq, IHx. solve_lower'. apply str_weq, IHx. solve_lower'. apply cnv_weq, IHx. solve_lower'. apply ldv_weq; [apply IHx1|apply IHx2]; solve_lower'. apply rdv_weq; [apply IHx1|apply IHx2]; solve_lower'. Qed. (** * partial decision procedure for expressions containment [<==] The following function tries to prove [x ≦ y], for some expressions [x] and [y]. - this function always terminates, but using powerfix allows us to write a clean code, without bothering about termination - this algorithm is not complete, of course. - like for syntactic comparison ([syntax.expr_compare]), we need to generalise the function to work on distinct expression types *) Definition expr_leq := powerfix 100 (fun leq tt n m (x: expr n m) p q (y: expr p q) => let leq {n m} x {p q} y := leq tt n m x p q y in match x,y with | e_zer _ _, _ | _, e_top _ _ | e_one _, e_one _ | e_one _, e_str _ => true | e_one _, e_itr y => leq x y | e_var a, e_var b => eqb a b | e_pls x x', _ => leq x y &&& leq x' y | _, e_cap y y' => leq x y &&& leq x y' | e_cap x x', _ => leq x y ||| leq x' y | _, e_pls y y' => leq x y ||| leq x y' | @e_dot _ _ _ _ u _ x x', @e_dot _ _ _ _ v _ y y' (* split using one? 
*) | @e_ldv _ _ _ u _ _ y x', @e_ldv _ _ _ v _ _ x y' (* ldv_spec in the other cases? *) | @e_rdv _ _ _ u _ _ y x', @e_rdv _ _ _ v _ _ x y' => eqb u v &&& leq x y &&& leq x' y' | e_one _, e_ldv x y | e_one _, e_rdv x y | e_neg y, e_neg x | e_itr x, e_itr y | e_itr x, e_str y | e_str x, e_str y | e_cnv x, e_cnv y => leq x y | _,_ => false end) (fun _ _ _ _ _ _ _ => false) tt. Lemma expr_leq_correct_dep l: forall n m (x: expr n m) p q (y: expr p q), forall Hnp: n=p, forall Hmq: m=q, expr_leq x y = true -> e_level x + e_level y ≪ l -> cast Hnp Hmq x <==_[l] y. Proof. (* TODO: this proof could be factorised, using a more appropriate case disjunction, it's not that easy to setup, however *) unfold expr_leq. apply powerfix_invariant. 2: discriminate. intros leq IH n m x p q y Hnp Hmq H Hl. (** FIXME : subst here causes an effect leak *) destruct x; simpl e_level in Hl; repeat match goal with [ H : ?m = ?n |- _ ] => rewrite H in *; clear H end. - rewrite cast_eq. lattice. - destruct y; simpl e_level in Hl; try discriminate H. + now rewrite cast_eq. + apply leq_xcup. apply lorb_spec in H as [H|H]; apply (IH _ _ _ _ _ _ eq_refl eq_refl) in H; auto; solve_lower'. + apply landb_spec in H as [H1 H2]. apply leq_xcap; apply IH; trivial; solve_lower'. - destruct y; simpl e_level in Hl; try discriminate H; try subst. + subst. rewrite cast_eq. lattice. + now rewrite cast_eq. + subst. apply leq_xcup. apply lorb_spec in H as [H|H]; apply (IH _ _ _ _ _ _ eq_refl eq_refl) in H; auto; solve_lower'. + apply landb_spec in H as [H1 H2]. apply leq_xcap; apply IH; trivial; solve_lower'. + rewrite <-itr_ext. apply IH. assumption. solve_lower'. + rewrite cast_eq. apply str_refl. + subst. rewrite cast_eq. apply ldv_spec. rewrite dotx1. apply (IH _ _ _ _ _ _ eq_refl eq_refl H); solve_lower'. + subst. rewrite cast_eq. apply rdv_spec. rewrite dot1x. apply (IH _ _ _ _ _ _ eq_refl eq_refl H); solve_lower'. - destruct y; simpl e_level in Hl; try discriminate H; try subst; try (apply landb_spec in H as [H1 H2]; apply leq_cupx; [ apply (IH _ _ _ _ _ _ eq_refl eq_refl H1); solve_lower' | apply (IH _ _ _ _ _ _ eq_refl eq_refl H2); solve_lower' ]). lattice. - destruct y; simpl e_level in Hl; try discriminate H; try subst; try (rewrite cast_eq; apply leq_capx; apply lorb_spec in H as [H|H]; (apply (IH _ _ _ _ _ _ eq_refl eq_refl) in H; [auto |solve_lower'])). lattice. rewrite cast_eq. apply landb_spec in H as [H1 H2]. apply (IH _ _ _ _ _ _ eq_refl eq_refl) in H1; [|solve_lower']. apply (IH _ _ _ _ _ _ eq_refl eq_refl) in H2; [|solve_lower']. hlattice. - destruct y; simpl e_level in Hl; try discriminate H; try subst. + lattice. + apply leq_xcup. apply lorb_spec in H as [H|H]; apply (IH _ _ _ _ _ _ eq_refl eq_refl) in H; auto; solve_lower'. + apply landb_spec in H as [H1 H2]. apply leq_xcap; apply IH; trivial; solve_lower'. + rewrite cast_eq. apply neg_leq. apply (IH _ _ _ _ _ _ eq_refl eq_refl H). solve_lower'. - destruct y; simpl e_level in Hl; try discriminate H; try subst. + lattice. + apply leq_xcup. apply lorb_spec in H as [H|H]; apply (IH _ _ _ _ _ _ eq_refl eq_refl) in H; auto; solve_lower'. + apply landb_spec in H as [H1 H2]. apply leq_xcap; apply IH; trivial; solve_lower'. + rewrite cast_eq. apply landb_spec in H as [H' H]. apply landb_spec in H as [H1 H2]. revert H'. case eqb_spec. 2: discriminate. intros ? _. subst. apply dot_leq. apply (IH _ _ _ _ _ _ eq_refl eq_refl H1); solve_lower'. apply (IH _ _ _ _ _ _ eq_refl eq_refl H2); solve_lower'. - destruct y; simpl e_level in Hl; try discriminate H; try subst. + lattice. 
+ subst. apply leq_xcup. apply lorb_spec in H as [H|H]; apply (IH _ _ _ _ _ _ eq_refl eq_refl) in H; auto; solve_lower'. + apply landb_spec in H as [H1 H2]. apply leq_xcap; apply IH; trivial; solve_lower'. + rewrite cast_eq. apply itr_leq. apply (IH _ _ _ _ _ _ eq_refl eq_refl H); solve_lower'. + rewrite cast_eq. rewrite itr_str_l, <-(str_cons y). apply (IH _ _ _ _ _ _ eq_refl eq_refl) in H. 2: solve_lower'. simpl cast in H. now rewrite H. - destruct y; simpl e_level in Hl; try discriminate H; try subst. + lattice. + subst. apply leq_xcup. apply lorb_spec in H as [H|H]; apply (IH _ _ _ _ _ _ eq_refl eq_refl) in H; auto; solve_lower'. + apply landb_spec in H as [H1 H2]. apply leq_xcap; apply IH; trivial; solve_lower'. + rewrite cast_eq. apply str_leq. apply (IH _ _ _ _ _ _ eq_refl eq_refl H); solve_lower'. - destruct y; simpl e_level in Hl; try discriminate H; try subst. + lattice. + apply leq_xcup. apply lorb_spec in H as [H|H]; apply (IH _ _ _ _ _ _ eq_refl eq_refl) in H; auto; solve_lower'. + apply landb_spec in H as [H1 H2]. apply leq_xcap; apply IH; trivial; solve_lower'. + rewrite cast_eq. apply cnv_leq. apply (IH _ _ _ _ _ _ eq_refl eq_refl H); solve_lower'. - destruct y; simpl e_level in Hl; try discriminate H; try subst. + lattice. + apply leq_xcup. apply lorb_spec in H as [H|H]; apply (IH _ _ _ _ _ _ eq_refl eq_refl) in H; auto; solve_lower'. + apply landb_spec in H as [H1 H2]. apply leq_xcap; apply IH; trivial; solve_lower'. + rewrite cast_eq. apply landb_spec in H as [H' H]. apply landb_spec in H as [H1 H2]. revert H'. case eqb_spec. 2: discriminate. intros ? _. subst. apply ldv_leq. apply (IH _ _ _ _ _ _ eq_refl eq_refl H1); solve_lower'. apply (IH _ _ _ _ _ _ eq_refl eq_refl H2); solve_lower'. - destruct y; simpl e_level in Hl; try discriminate H; try subst. + lattice. + apply leq_xcup. apply lorb_spec in H as [H|H]; apply (IH _ _ _ _ _ _ eq_refl eq_refl) in H; auto; solve_lower'. + apply landb_spec in H as [H1 H2]. apply leq_xcap; apply IH; trivial; solve_lower'. + rewrite cast_eq. apply landb_spec in H as [H' H]. apply landb_spec in H as [H1 H2]. revert H'. case eqb_spec. 2: discriminate. intros ? _. subst. apply rdv_leq. apply (IH _ _ _ _ _ _ eq_refl eq_refl H1); solve_lower'. apply (IH _ _ _ _ _ _ eq_refl eq_refl H2); solve_lower'. - destruct y; simpl e_level in Hl; try discriminate H; subst. + lattice. + apply leq_xcup. apply lorb_spec in H as [H|H]; eapply IH in H; eauto; solve_lower'. + apply landb_spec in H as [H1 H2]. apply leq_xcap; apply IH; trivial; solve_lower'. + apply eqb_eq in H. subst. now rewrite cast_eq. Qed. (** correctness of the comparison function *) Corollary expr_leq_correct l n m (x y: expr n m): expr_leq x y = true -> e_level x + e_level y ≪ l -> x <==_[l] y. Proof. apply (expr_leq_correct_dep x y eq_refl eq_refl). Qed. (** * Associated tactics *) (** parametrised (in)equality tests, where we decide to check for an equality or an inequality depending on the Boolean [b]. This helps to factorise tactics code. *) Definition expr_leq_or_weq (b: bool) n m (x y: expr n m) := if b then expr_leq x y else eqb x y. Lemma expr_leq_or_weq_correct b l n m (x y: expr n m): e_level x + e_level y ≪ l -> expr_leq_or_weq b x y -> @leq_or_weq b (expr_lattice_ops _ _ l _ _) x y. Proof. intros Hl H. destruct b. apply expr_leq_correct; assumption. apply eqb_eq in H as <-. reflexivity. Qed. End n. 
(** normalisation lemma, suited for use with reification ; we use two "let-in" constructs to make it possible to isolate computations of the normal forms (using vm_compute) from the unfolding of the interpretation function (using "unfold" selectively) *) Lemma ra_normalise d b `{L: laws} f' (f: positive -> Pack X f') n m (x y: expr _ _ n m) (Hl: e_level x + e_level y ≪ l): (let x' := norm d x in let y' := norm d y in packed_eval f x' <=[b]= packed_eval f y') -> packed_eval f x <=[b]= packed_eval f y. Proof. unfold packed_eval. intro H. eapply leq_or_weq_weq; [symmetry.. | eassumption]; (eapply norm_weq; [ | eassumption]); solve_lower'. Qed. (** reflexivity-by-normalisation lemma, suited for use with reification *) Lemma ra b `{L: laws} f' (f: positive -> Pack X f') n m (x y: expr _ _ n m) (Hl: e_level x + e_level y ≪ l): expr_leq_or_weq b (norm true x) (norm true y) = true -> packed_eval f x <=[b]= packed_eval f y. Proof. intro H. apply (ra_normalise true b). assumption. intros x' y'. subst x' y'. eapply expr_leq_or_weq_correct in H. destruct b; apply (H _ L). now rewrite 2norm_level. Qed. (** ** [ra]: solve goals by normalisation. In case of an inequality, the above comparison function is used, so that [apply antisym; ra] might solve goals which are not solved by [ra]. (e.g., [a+a^*+b^* ≡ b+a^*+b^*]) *) Ltac ra := let go h L tac := ra_reify h; let lhs:=fresh "lhs" in let rhs:=fresh "rhs" in intros ? ? ? ? lhs rhs; apply (@ra _ _ _ L _ _ _ _ lhs rhs); [ tac || fail "RelationAlgebra: invalid reification (please report)" | close_by_reflection true || fail "not provable by relation algebra normalisation" ] in intros; catch_rel; let L:=fresh "L" in intro L; let l:=match type of L with laws ?l _ => l end in lazymatch goal with | H: ?h ≪ l |- _ => go h L ltac:(rewrite <- H; reflexivity) | _ => go l L ltac:(reflexivity || destruct l; reflexivity) end. (* (* vm_compute in value of h, even in presence of evars [DOES NOT WORK] *) *) (* Ltac evm_compute h := *) (* (has_evar h; *) (* idtac "warning: there are evar, cannot use vm_compute"; *) (* compute in (value of h)) *) (* || vm_compute in (value of h). *) (** ** [ra_normalise/simpl]: normalise the current goal *) Ltac ra_normalise_ distribute := let go h L tac := ra_reify h; let tenv:=fresh "tenv" in let env:=fresh "env" in let src:=fresh "src" in let tgt:=fresh "tgt" in let lhs:=fresh "lhs" in let rhs:=fresh "rhs" in intros ? ? ? ? lhs rhs; apply (@ra_normalise distribute _ _ _ L _ _ _ _ lhs rhs); [ tac; reflexivity || fail "RelationAlgebra: invalid reification (please report)" | let lhs':=fresh "lhs'" in let rhs':=fresh "rhs'" in intros lhs' rhs'; vm_compute in (value of lhs'), (value of rhs'); unfold sigma_add in env, tenv; unfold lhs', rhs', packed_eval, eval, val, src, tgt, env, tenv, sigma_get; unfold leq_or_weq; clear tenv env src tgt lhs rhs lhs' rhs' L ] in catch_rel; let L:=fresh "L" in intro L; let l:=match type of L with laws ?l _ => l end in lazymatch goal with | H: ?h ≪ l |- _ => go h L ltac:(rewrite <- H; reflexivity) | _ => go l L ltac:(reflexivity || destruct l; reflexivity) end. (** [ra_normalise] distribut products over sums, while [ra_simpl] does not *) Ltac ra_simpl := ra_normalise_ false. Ltac ra_normalise := ra_normalise_ true. relation-algebra-v.1.7.9/theories/ordinal.v000066400000000000000000000311651440504774100206610ustar00rootroot00000000000000(** * ordinal: finite ordinals, sets of finite ordinals *) Require Import List. Require Import PeanoNat. Require Import common comparisons. Set Implicit Arguments. 
(** * Boolean strict order on natural numbers *) Fixpoint ltb i j := match i,j with | O,S _ => true | _,O => false | S i, S j => ltb i j end. Declare Scope ltb_scope. Notation "i < j" := (ltb i j = true) : ltb_scope. Notation "i <= j" := (ltb j i = false) : ltb_scope. Delimit Scope ltb_scope with ltb. Local Open Scope ltb_scope. Lemma ltb_plus_l i m n: i i m+i n <= i -> i-n < m. Proof. revert n. induction i; destruct n; simpl; auto. discriminate. Qed. Lemma lt_n_1 a: a<1 -> a=0. Proof. destruct a as [|[|?]]; trivial; discriminate. Qed. Definition lt_ge_dec i n: {i nat -> Prop) (H0: forall n, P (S n) 0) (HS: forall n i, P n i -> P (S n) (S i)): forall n i, i P n i. Proof. induction n; intros i Hi. destruct i; discriminate. destruct i. apply H0. now apply HS, IHn. Qed. Lemma nat_ind_2: forall P: nat -> Prop, P 0 -> P 1 -> (forall n, P n -> P ((S (S n)))) -> forall n, P n. Proof. intros P H0 H1 HSS. assert (G: forall m, P m /\ P (S m)). induction m. split; assumption. destruct IHm as [IHm IHSm]. split. assumption. apply (HSS m), IHm. intro n. destruct (G n). assumption. Qed. (** * Ordinals *) (** we use a record rather than a dependent inductive in order to - get slightly more efficient computations - get simpler proofs using a Boolean strict order also simplifies proofs w.r.t. using the lt predicate from the standard library *) Record ord n := Ord {nat_of_ord:> nat; ord_lt: nat_of_ord i=j. Proof. destruct i; destruct j; simpl; intro. subst. f_equal. apply UIP_cmp. Qed. Lemma eqb_ord_spec n (i j: ord n): reflect (i=j) (eqb_ord i j). Proof. unfold eqb_ord. case eqb_spec; intro E; constructor. apply eq_ord, E. congruence. Qed. Definition ord_compare {n} (i j: ord n) := nat_compare i j. Lemma ord_compare_spec n (i j: ord n): compare_spec (i=j) (ord_compare i j). Proof. unfold ord_compare. case cmp_spec; constructor; try apply eq_ord; congruence. Qed. Canonical Structure cmp_ord n := mk_cmp _ (@eqb_ord_spec n) _ (@ord_compare_spec n). (** ** basic properties *) (** [ord 0] is empty *) Lemma ord_0_empty: ord 0 -> False. Proof. intros [[|?]]; discriminate. Qed. (** [ord 1] has only one element: 0 *) Lemma ord0_unique: forall i: ord 1, i=ord0. Proof. intros [[|i] Hi]; apply eq_ord. reflexivity. destruct i; discriminate. Qed. (** induction scheme for ordinals *) Lemma ord_ind' (P: forall n, ord n -> Prop) (H0: forall n, P (S n) ord0) (HS: forall n i, P n i -> P (S n) (ordS i)): forall n i, P n i. Proof. induction n. intro i. elim (ord_0_empty i). destruct i as [[|i] Hi]. replace (Ord 0 Hi) with (@ord0 n) by now apply eq_ord. apply H0. replace (Ord (S i) Hi) with (ordS (@Ord n i Hi)) by now apply eq_ord. apply HS, IHn. Qed. (** ** sequence of all ordinals below [n] *) Fixpoint seq n: list (ord n) := match n with | 0 => nil | S n => cons ord0 (map ordS (seq n)) end. (** completeness of the above sequence *) Lemma in_seq: forall {n} (i: ord n), In i (seq n). Proof. induction i using ord_ind'. now left. right. rewrite in_map_iff. eauto. Qed. (** ** shifting ans splitting ordinals *) (** shifting *) Definition lshift {m n} (i: ord m): ord (m+n) := Ord i (ltb_plus_l _ _ _ (ord_lt i)). Definition rshift {m n} (i: ord n): ord (m+n) := Ord (m+i) (ltb_plus_r _ _ _ (ord_lt i)). (** spliting the sequence of all ordinals *) Lemma seq_cut n m: seq (n+m) = map lshift (seq n) ++ map rshift (seq m). Proof. induction n; simpl. rewrite <-(map_id (seq m)) at 1. apply map_ext. intros [i Hi]. apply eq_ord. reflexivity. rewrite IHn, map_app. f_equal. apply eq_ord. reflexivity. rewrite 3map_map. 
f_equal; apply map_ext; intros [i Hi]; apply eq_ord; reflexivity. Qed. (** splitting an ordinal *) Definition split {n m} (i: ord (n+m)): ord n + ord m := match lt_ge_dec i n with | left Hi => inl _ (Ord _ Hi) | right Hj => inr _ (Ord _ (ltb_minus _ _ _ (ord_lt i) Hj)) end. Inductive split_case n m (i: ord (n+m)): ord n + ord m -> Set := | split_l: forall j: ord n, i=lshift j -> split_case i (inl j) | split_r: forall j: ord m, i=rshift j -> split_case i (inr j). Lemma split_spec n m (i: ord (n+m)): split_case i (split i). Proof. unfold split. case lt_ge_dec; constructor; apply eq_ord; simpl. reflexivity. destruct i as [j Hj]. simpl in *. revert n m e Hj. induction j; destruct n; simpl; auto. discriminate. intros. f_equal. eapply IHj; eassumption. Qed. (** basic properties of split and shifting *) Lemma split_lshift n m i: @split n m (lshift i) = inl i. Proof. case split_spec; intros j E. f_equal. apply eq_ord. injection E. congruence. exfalso. injection E. clear E. destruct i as [i Hi]. simpl. intros ->. rewrite leb_plus_r in Hi. discriminate. Qed. Lemma split_rshift n m i: @split n m (rshift i) = inr i. Proof. case split_spec; intros j E. exfalso. injection E. clear E. destruct j as [j Hj]. simpl. intros <-. rewrite leb_plus_r in Hj. discriminate. f_equal. apply eq_ord. symmetry. injection E. apply Nat.add_cancel_l. Qed. Lemma eqb_ord_lrshift n m i j: eqb_ord (@lshift n m i) (@rshift n m j) = false. Proof. destruct i as [i Hi]; destruct j as [j Hj]. unfold eqb_ord. simpl. case eqb_spec; trivial. intro E. rewrite E in Hi. rewrite leb_plus_r in Hi. discriminate. Qed. Lemma eqb_ord_rlshift n m i j: eqb_ord (@rshift n m i) (@lshift n m j) = false. Proof. rewrite eqb_sym. apply eqb_ord_lrshift. Qed. Lemma eqb_ord_rrshift n m i j: eqb_ord (@rshift n m i) (@rshift n m j) = eqb_ord i j. Proof. destruct i as [i Hi]; destruct j as [j Hj]. unfold eqb_ord. simpl. do 2 case eqb_spec; trivial. intros E E'. elim E. eapply Nat.add_cancel_l, E'. congruence. Qed. Lemma split_ord0 m: @split 1 m ord0 = inl ord0. Proof. reflexivity. Qed. Lemma split_ordS m i: @split 1 m (ordS i) = inr i. Proof. case split_spec; intros j Hj. rewrite (ord0_unique j) in Hj. discriminate. destruct i; destruct j. injection Hj. intros <-. f_equal. now apply eq_ord. Qed. (** * Finite sets of ordinals as ordinals *) (** we encode a finite subset of [ord n] as an element of [ord (2^n)], using the coding of the characteristic function of the set as a bitvector of length n since we need to compute a little bit with these encoded sets, we first define the bijection on natural numbers, before encapsulating into ordinals. *) Module set. (** ** on natural numbers *) (** [xO' i = 2*i], [xI' i = 2*i+1] *) Fixpoint xO' i := match i with 0 => 0 | S i => S (S (xO' i)) end. Fixpoint xI' i := match i with 0 => 1 | S i => S (S (xI' i)) end. (** from characteristic functions to natural numbers: accumulate bits until a given length [(n)] is reached *) Fixpoint of_fun' n (f: nat -> bool): nat := match n with | 0 => 0 | S n => if f 0 then xI' (of_fun' n (fun i => f (S i))) else xO' (of_fun' n (fun i => f (S i))) end. (** [od x] returns the pair [(o,y)] s.t. [x = 2*y+o] *) Fixpoint od x := match x with | O => (false,O) | S O => (true,O) | S (S x) => let (o,x) := od x in (o,S x) end. (** testing membership: read the [i]th bit, using [od] *) (** this function is presented in such a strange way to get efficiency: the partial application [mem' n x] reduces to the pattern matching function that precisely corresponds to the membership function of [x]. 
For instance, [mem' 4 {1,2}] reduces to [fun i => match i with 0 | 3 => false | 1 | 2 => true | _ => assert_false end] *) Fixpoint mem' n x := match n with | 0 => fun i => assert_false false | S n => let (o,x) := od x in let f := mem' n x in fun i => match i with O => o | S i => f i end end. (** correctness of [mem'] and [of_fun'] *) Lemma od_xO i: od (xO' i) = (false,i). Proof. induction i; simpl. reflexivity. now rewrite IHi. Qed. Lemma od_xI i: od (xI' i) = (true,i). Proof. induction i; simpl. reflexivity. now rewrite IHi. Qed. Lemma mem_of_fun' n: forall f i, i mem' n (of_fun' n f) i = f i. Proof. induction n; intros f i Hi; simpl. destruct i; discriminate. case_eq (f 0); intro H. rewrite od_xI. destruct i. congruence. now rewrite IHn. rewrite od_xO. destruct i. congruence. now rewrite IHn. Qed. (** ** encapsulation into ordinals *) (** bounds about the various operations *) Lemma xO_bound: forall n i, i xO' i < double n. Proof. now apply ltb_ind. Qed. Lemma xI_bound: forall n i, i xI' i < double n. Proof. now apply ltb_ind. Qed. Lemma of_fun_bound: forall n f, of_fun' n f < pow2 n. Proof. induction n; intro f. reflexivity. simpl. case f. now apply xI_bound. now apply xO_bound. Qed. Lemma od_bound a: forall n, a < double n -> snd (od a) < n. Proof. induction a using nat_ind_2; intros n Hn; simpl. destruct n; simpl. discriminate. reflexivity. destruct n; simpl. discriminate. reflexivity. revert IHa. case od; simpl. intros o a' IH. destruct n. discriminate. apply IH, Hn. Qed. (** extending a Boolean function on ordinals into a function on natural numbers *) Definition app' n (f: ord n -> bool) (i: nat) := match lt_ge_dec i n with | left H => f (Ord i H) | _ => false end. (** encapsulation of the various operations into ordinals *) Definition xO n (i: ord n): ord (double n) := Ord (xO' i) (xO_bound _ _ (ord_lt i)). Definition xI n (i: ord n): ord (double n) := Ord (xI' i) (xI_bound _ _ (ord_lt i)). Definition mem n (x: ord (pow2 n)) (i: ord n) := mem' n x i. Definition of_fun n (f: ord n -> bool): ord (pow2 n) := Ord (of_fun' n (app' f)) (of_fun_bound _ _). (** retraction from [ord n -> bool] into [ord (pow2 n)] *) Lemma mem_of_fun n (f: ord n -> bool) i: mem (of_fun f) i = f i. Proof. unfold mem, of_fun. simpl. rewrite mem_of_fun' by apply ord_lt. unfold app'. case lt_ge_dec. intros. f_equal. now apply eq_ord. rewrite (ord_lt i). discriminate. Qed. (** injectivity of the [od] function *) Lemma od_inj a b: od a = od b -> a = b. Proof. revert b. induction a using nat_ind_2; intros [|[|b]]; simpl; trivial; (try (case od; discriminate)); (try discriminate). intro. f_equal. f_equal. apply IHa. revert H. case od. case od. congruence. Qed. (** extensionality on natural numbers *) Lemma ext' n: forall a b, a b (forall i, i mem' n a i = mem' n b i) -> a = b. Proof. induction n; simpl; intros a b Ha Hb H. rewrite lt_n_1 by assumption. now apply lt_n_1. apply od_inj. revert H. generalize (od_bound _ _ Ha), (od_bound _ _ Hb). case od. intros oa a' Ha'. case od. intros ob b' Hb'. intro H. f_equal. apply (H 0 eq_refl). apply IHn; trivial. intros i Hi. apply (H (S i) Hi). Qed. (** extensionality on ordinals (i.e., with [mem_of_fun], mem/of_fun form a bijection) *) Lemma ext n (a b: ord (pow2 n)): (forall i, mem a i = mem b i) -> a = b. Proof. intro H. apply eq_ord. eapply ext'. apply ord_lt. apply ord_lt. intros i Hi. apply (H (Ord i Hi)). Qed. (** ** additional lemmas *) Lemma xO_0 n: xO (@ord0 n) = @ord0 (S (double n)). Proof. now apply eq_ord. Qed. 
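(** concrete illustration of the above encoding (extra sanity checks, not used
    later): the subset {1,2} of {0,1,2,3} is encoded by the number 6, whose
    bit [i] records the membership of [i] (bits 1 and 2 are set), and [mem']
    recovers its characteristic function *)
Example set_encoding_example:
  of_fun' 4 (fun i => match i with 1 | 2 => true | _ => false end) = 6.
Proof. reflexivity. Qed.
Example set_membership_example: mem' 4 6 1 = true.
Proof. reflexivity. Qed.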
Lemma xO_S n (i: ord n): xO (ordS i) = ordS (ordS (xO i)). Proof. now apply eq_ord. Qed. Lemma xI_0 n: xI (@ord0 n) = ordS (@ord0 (double n)). Proof. now apply eq_ord. Qed. Lemma xI_S n (i: ord n): xI (ordS i) = ordS (ordS (xI i)). Proof. now apply eq_ord. Qed. Lemma mem_xO_0 n (f: ord (pow2 n)): @mem (S n) (xO f) ord0 = false. Proof. unfold mem. simpl. now rewrite od_xO. Qed. Lemma mem_xO_S n (f: ord (pow2 n)) i: @mem (S n) (xO f) (ordS i) = mem f i. Proof. unfold mem. simpl. now rewrite od_xO. Qed. Lemma mem_xI_0 n (f: ord (pow2 n)): @mem (S n) (xI f) ord0 = true. Proof. unfold mem. simpl. now rewrite od_xI. Qed. Lemma mem_xI_S n (f: ord (pow2 n)) i: @mem (S n) (xI f) (ordS i) = mem f i. Proof. unfold mem. simpl. now rewrite od_xI. Qed. End set. relation-algebra-v.1.7.9/theories/pair.v000066400000000000000000000053111440504774100201560ustar00rootroot00000000000000(** * pair: encoding pairs of ordinals as ordinals *) (** more precisely, [ord n * ord m] into [ord (n*m)] *) Require Import Psatz PeanoNat Compare_dec Euclid. Require Import ordinal. Set Asymmetric Patterns. Set Implicit Arguments. Local Open Scope ltb_scope. (** equivalence between our Boolean strict order on [nat], and the standard one from the standard library *) Lemma ltb_lt x y: ltb x y = true <-> lt x y. Proof. revert y. induction x; destruct y; simpl. split. discriminate. inversion 1. split. lia. trivial. split. discriminate. inversion 1. rewrite IHx. lia. Qed. (** auxiliary lemma *) Lemma mk_lt n m x y: x y y*n+x < n*m. Proof. setoid_rewrite ltb_lt. nia. Qed. (** since [x] is bounded by [n], we encode the pair [(x,y)] as [y*n+x] *) Definition mk n m (x: ord n) (y: ord m): ord (n*m). destruct x as [x Hx]; destruct y as [y Hy]. apply Ord with (y*n+x). now apply mk_lt. Defined. Lemma ord_nm_lt_O_n {n m} (x: ord (n*m)): lt 0 n. Proof. destruct n. elim (ord_0_empty x). lia. Qed. (** first projection, by modulo *) Definition pi1 {n m} (p: ord (n*m)): ord n := let '(divex _ x Hx _) := eucl_dev n (ord_nm_lt_O_n p) p in (Ord x (proj2 (ltb_lt _ _) Hx)). (** second projection, by division *) Definition pi2 {n m} (p: ord (n*m)): ord m. destruct (eucl_dev n (ord_nm_lt_O_n p) p) as [y x Hx Hy]. apply Ord with y. unfold gt in *. destruct p as [p Hp]. simpl in Hy. rewrite Hy in Hp. clear p Hy. destruct (le_lt_dec m y) as [Hy|Hy]. 2: now apply ltb_lt. exfalso. apply ltb_lt in Hp. abstract nia. Defined. Lemma euclid_unique n: lt 0 n -> forall x y x' y', lt x n -> lt x' n -> y*n+x = y'*n+x' -> y=y' /\ x=x'. Proof. intros Hn x y x' y' Hx Hx' H. rewrite Nat.mul_comm, (Nat.mul_comm y') in H. split. erewrite Nat.div_unique. 3: eassumption. 2: assumption. rewrite H. eapply Nat.div_unique. 2: symmetry; eassumption. assumption. erewrite Nat.mod_unique. 3: eassumption. 2: assumption. rewrite H. eapply Nat.mod_unique. 2: symmetry; eassumption. assumption. Qed. (** projections behave as expected *) Lemma pi1mk n m: forall x y, pi1 (@mk n m x y) = x. Proof. intros [x Hx] [y Hy]. unfold pi1, mk. case eucl_dev. intros y' x' Hx' H. apply eq_ord. apply euclid_unique in H as [_ ?]; auto. nia. now apply ltb_lt. Qed. Lemma pi2mk n m: forall x y, pi2 (@mk n m x y) = y. Proof. intros [x Hx] [y Hy]. unfold pi2, mk. case eucl_dev. intros y' x' Hx' H. apply eq_ord. simpl. apply euclid_unique in H as [? _]; auto. nia. now apply ltb_lt. Qed. (** surjective pairing *) Lemma mkpi12 n m: forall p, @mk n m (pi1 p) (pi2 p) = p. Proof. intros [p Hp]. unfold pi1, pi2, mk. case eucl_dev. simpl. intros y x Hx Hy. apply eq_ord. simpl. now rewrite Hy. Qed. 
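(** a small illustrative sanity check (extra example, not used elsewhere):
    with [n=2] and [m=3], the pair (1,2) is encoded as the ordinal of value
    2*2+1 = 5 in [ord 6] *)
Example mk_example:
  nat_of_ord (mk (Ord 1 eq_refl: ord 2) (Ord 2 eq_refl: ord 3)) = 5.
Proof. reflexivity. Qed.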
relation-algebra-v.1.7.9/theories/positives.v000066400000000000000000000035711440504774100212560ustar00rootroot00000000000000(** * positives: basic facts about binary positive numbers *) Require Export BinNums. Require Import comparisons. (** positives as a [cmpType] *) Fixpoint eqb_pos i j := match i,j with | xH,xH => true | xI i,xI j | xO i, xO j => eqb_pos i j | _,_ => false end. Lemma eqb_pos_spec: forall i j, reflect (i=j) (eqb_pos i j). Proof. induction i; intros [j|j|]; simpl; (try case IHi); constructor; congruence. Qed. Fixpoint pos_compare i j := match i,j with | xH, xH => Eq | xO i, xO j | xI i, xI j => pos_compare i j | xH, _ => Lt | _, xH => Gt | xO _, _ => Lt | _,_ => Gt end. Lemma pos_compare_spec: forall i j, compare_spec (i=j) (pos_compare i j). Proof. induction i; destruct j; simpl; try case IHi; try constructor; congruence. Qed. Canonical Structure cmp_pos := mk_cmp _ eqb_pos_spec _ pos_compare_spec. (** positive maps (for making environments) *) (** we redefine such trees here rather than importing them from the standard library: since we do not need any proof about them, this avoids us a heavy Require Import *) Section e. Variable A: Type. Inductive sigma := sigma_empty | N(l: sigma)(o: option A)(r: sigma). Fixpoint sigma_get default m i := match m with | N l o r => match i with | xH => match o with None => default | Some a => a end | xO i => sigma_get default l i | xI i => sigma_get default r i end | _ => default end. Fixpoint sigma_add i v m := match m with | sigma_empty => match i with | xH => N sigma_empty (Some v) sigma_empty | xO i => N (sigma_add i v sigma_empty) None sigma_empty | xI i => N sigma_empty None (sigma_add i v sigma_empty) end | N l o r => match i with | xH => N l (Some v) r | xO i => N (sigma_add i v l) o r | xI i => N l o (sigma_add i v r) end end. End e. relation-algebra-v.1.7.9/theories/powerfix.v000066400000000000000000000057141440504774100210750ustar00rootroot00000000000000(** * powerfix: bounded fixpoint operator *) (** we define a fixpoint operator which recursively unfolds an open-recursive function with recursive depth at most [2^n], for arbitrary [n]. This allows us to define arbitrary recursive functions, without needing to prove their termination. The operator is defined in a computationally efficient way. (We already used such a trick in ATBR ; it's simplified here thanks to the introduction of eta in Coq v8.4 *) Require Import common. Set Implicit Arguments. Section powerfix. Variables A B: Type. Notation Fun := (A -> B). (** the three following functions "iterate" their [f] argument lazily: iteration stops whenever [f] no longer makes recursive calls. - [powerfix' n f k] iterates [f] at most [(2^n-1)] times and then yields to [k] - [powerfix n f k] iterates [f] at most [(2^n)] times and then yields to [k] - [linearfix n f k] iterates [f] at most [n] times and then yields to [k] *) Fixpoint powerfix' n (f: Fun -> Fun) (k: Fun): Fun := fun a => match n with O => k a | S n => f (powerfix' n f (powerfix' n f k)) a end. Definition powerfix n f k a := f (powerfix' n f k) a. Fixpoint linearfix n (f: Fun -> Fun) (k: Fun): Fun := fun a => match n with O => k a | S n => f (linearfix n f k) a end. (** simple lemmas about [2^n] *) Lemma pow2_S n: pow2 n = S (pred (pow2 n)). Proof. induction n. reflexivity. simpl. now rewrite IHn. Qed. Lemma pred_pow2_Sn n: pred (pow2 (S n)) = S (double (pred (pow2 n))). Proof. simpl. now rewrite pow2_S. Qed. (** characterisation of [powerfix] with [linearfix] *) Section linear_carac. Variable f: Fun -> Fun. 
Lemma linearfix_S: forall n k, f (linearfix n f k) = linearfix n f (f k). Proof. induction n; intros k; simpl. reflexivity. now rewrite IHn. Qed. Lemma linearfix_double: forall n k, linearfix n f (linearfix n f k) = linearfix (double n) f k. Proof. induction n; intros k. reflexivity. simpl linearfix. now rewrite <-IHn, <-linearfix_S. Qed. Lemma powerfix'_linearfix: forall n k, powerfix' n f k = linearfix (pred (pow2 n)) f k. Proof. induction n; intros. reflexivity. rewrite pred_pow2_Sn. simpl. now rewrite <-linearfix_double, 2IHn. Qed. Theorem powerfix_linearfix: forall n k, powerfix n f k = linearfix (pow2 n) f k. Proof. intros. unfold powerfix. now rewrite powerfix'_linearfix, pow2_S. Qed. End linear_carac. (** [powerfix_invariant] gives an induction principle for [powerfix], that does not care about the number of iterations -- in particular, the trivial "emptyfix" function : ([fun f k a => k a]) satisfies the same induction principle, so that this can only be used to reason about partial correctness. *) Section invariant. Variable P: Fun -> Prop. Lemma powerfix_invariant: forall n f g, (forall k, P k -> P (f k)) -> P g -> P (powerfix n f g). Proof. intros n f g Hf Hg. apply Hf. revert g Hg. induction n; intros g Hg; simpl; auto. Qed. End invariant. End powerfix. relation-algebra-v.1.7.9/theories/prop.v000066400000000000000000000022421440504774100202030ustar00rootroot00000000000000(** * prop: Propositions ([Prop]) as a bounded distributive lattice *) Require Import lattice. (** lattice operations *) Canonical Structure Prop_lattice_ops: lattice.ops := {| leq := impl; weq := iff; cup := or; cap := and; neg := not; bot := False; top := True |}. (** bounded distributive lattice laws (we could get a Boolean lattice by assuming excluded middle) *) #[export] Instance Prop_lattice_laws: lattice.laws (BDL+STR+CNV+DIV) Prop_lattice_ops. Proof. constructor; (try apply Build_PreOrder); simpl; repeat intro; try discriminate; tauto. Qed. (** we could also equip Prop with a flat monoid structure, but this is not useful in practice *) (* Require Import monoid. CoInductive Prop_unit := Prop_tt. Canonical Structure Prop_ops: monoid.ops := {| ob := Prop_unit; mor n m := Prop_lattice_ops; dot n m p := and; one n := True; str n x := True; cnv n m x := x; ldv n m p := impl; rdv n m p := impl |}. Notation Prop' := (Prop_ops Prop_tt Prop_tt). Instance Prop_laws: laws (BDL+STR+CNV+DIV) Prop_ops. Proof. constructor; [intros; apply Prop_lattice_laws |..]; (try now left); repeat right; simpl; try tauto. Qed. *) relation-algebra-v.1.7.9/theories/regex.v000066400000000000000000000371511440504774100203440ustar00rootroot00000000000000(** * regex: the (flat) model of regular expressions *) Require Import syntax kleene boolean lang lset sums normalisation. Set Implicit Arguments. (** We consider regular expressions over the alphabet of positive numbers (It would be nicer to keep the alphabet as a parameter, since we need to instantiate it with a more structured type in [kat_completeness]; it's however realy painful to do so, since this model is used thoroughly through several files (rmx, nfa, ka_completeness) and this would prevent us from using modules to structure the namespace. The current solution consists in retracting into positives the structured type used in [kat_completness], using [denum].) *) Definition sigma := positive. (** the inductive type of regular expressions *) Inductive regex := | r_zer | r_one | r_pls(e f: regex) | r_dot(e f: regex) | r_str(e: regex) | r_var(a: sigma). 
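(** for instance (an illustrative term only, not used elsewhere), the
    expression (a⋅b + c)^* over the letters a:=1, b:=2, c:=3 is represented
    by the following syntax tree *)
Example sample_regex_example: regex :=
  (r_str (r_pls (r_dot (r_var 1) (r_var 2)) (r_var 3)))%positive.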
(** strict iteration is a derived operation *) Definition r_itr e := r_dot e (r_str e). (** inclusion into relational expressions *) Fixpoint to_expr (e: regex): expr_(BKA) (fun _ => xH) (fun _ => xH) xH xH := match e with | r_zer => 0 | r_one => 1 | r_pls e f => to_expr e + to_expr f | r_dot e f => to_expr e ⋅ to_expr f | r_str e => (to_expr e)^* | r_var a => e_var a end. (** * Regular expressions form a Kleene algebra *) (** ** Operations *) (** we inherit the preorder and the equivalence relation from generic expressions *) Definition r_leq e f := to_expr e <==_[BKA] to_expr f. Definition r_weq e f := to_expr e ==_[BKA] to_expr f. (** structures for regular expression operations *) Canonical Structure regex_lattice_ops := {| car := regex; leq := r_leq; weq := r_weq; cup := r_pls; bot := r_zer; cap := assert_false r_pls; top := assert_false r_zer; neg := assert_false id |}. (** we use singleton type for the object(s), since this is a flat structure *) CoInductive regex_unit := regex_tt. Canonical Structure regex_ops := {| ob := regex_unit; mor n m := regex_lattice_ops; dot n m p := r_dot; one n := r_one; itr n := r_itr; str n := r_str; cnv n m := assert_false r_str; ldv n m p := assert_false r_dot; rdv n m p := assert_false r_dot |}. (** shorthand fore [regex], when a morphism is required *) Notation regex' := (regex_ops regex_tt regex_tt). Definition var a: regex' := r_var a. (** folding regular expressions to expose canonical preojections *) Ltac fold_regex := ra_fold regex_ops regex_tt. (** ** Laws *) (** laws are inherited for free, by faithful embedding into general expressions *) #[export] Instance regex_laws: laws BKA regex_ops. Proof. apply (laws_of_faithful_functor (f:=fun _ _: regex_unit => to_expr)). constructor; try discriminate; trivial. intros. constructor; try discriminate; trivial; now intros ???. intros. symmetry. apply itr_str_l. tauto. tauto. Qed. #[export] Instance regex_lattice_laws: lattice.laws BKA regex_lattice_ops. Proof. exact (@lattice_laws _ _ regex_laws regex_tt regex_tt). Qed. (** * Predicates on regular expressions: 01, simple, pure *) (** 01 regular expressions are those not containing variables, they reduce either to [0] or to [1] *) Inductive is_01: regex' -> Prop := | is_01_zer: is_01 0 | is_01_one: is_01 1 | is_01_pls: forall e f, is_01 e -> is_01 f -> is_01 (e+f) | is_01_dot: forall e f, is_01 e -> is_01 f -> is_01 (e⋅f) | is_01_str: forall e, is_01 e -> is_01 (e^*). (** simple regular expressions are those reducing to a sum of variables, possibly plus [1] (actually a bit less, since, e.g., [0⋅a*b] reduces to [0] but is forbidden) *) Inductive is_simple: regex' -> Prop := | is_simple_zer: is_simple 0 | is_simple_one: is_simple 1 | is_simple_var: forall a, is_simple (var a) | is_simple_pls: forall e f, is_simple e -> is_simple f -> is_simple (e+f) | is_simple_dot: forall e f, is_01 e -> is_simple f -> is_simple (e⋅f) | is_simple_str: forall e, is_01 e -> is_simple (e^*). (** pure regular expressions are those without star, which reduce to a sum of variables (same thing as above) *) Inductive is_pure: regex' -> Prop := | is_pure_zer: is_pure 0 | is_pure_var: forall a, is_pure (var a) | is_pure_pls: forall e f, is_pure e -> is_pure f -> is_pure (e+f) | is_pure_dot: forall e f, is_01 e -> is_pure f -> is_pure (e⋅f). (** [ofbool] produces 01 expressions *) Lemma is_01_ofbool b: is_01 (ofbool b). Proof. case b; constructor. Qed. (** any 01 expression is simple *) Lemma is_01_simple e: is_01 e -> is_simple e. Proof. induction 1; now constructor. Qed. 
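(** for instance (illustrative examples only): [1 + var a] is simple, and
    [var a + var b] is pure *)
Example is_simple_example (a: sigma): is_simple (1 + var a).
Proof. repeat constructor. Qed.
Example is_pure_example (a b: sigma): is_pure (var a + var b).
Proof. repeat constructor. Qed.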
(** * Coalgebraic structure of regular expressions *) (** ** epsilon membership *) Fixpoint epsilon (e: regex') := match e with | r_one | r_str _ => true | r_pls e f => epsilon e || epsilon f | r_dot e f => epsilon e && epsilon f | _ => false end%bool. Notation eps e := (@ofbool regex_ops regex_tt (epsilon e)). (** ** derivatives *) Fixpoint deriv a (e: regex'): regex' := match e with | r_zer | r_one => 0 | r_pls e f => deriv a e + deriv a f | r_dot e f => deriv a e ⋅ f + eps e ⋅ deriv a f | r_str e => deriv a e ⋅ (e: regex')^* | r_var b => ofbool (eqb_pos a b) end. (** word derivatives *) Fixpoint derivs w e := match w with | nil => e | cons a w => derivs w (deriv a e) end. (** set of variables appearing in a regular expression *) Fixpoint vars (e: regex') := match e with | r_zer | r_one => nil | r_pls e f | r_dot e f => app (vars e) (vars f) | r_str e => vars e | r_var a => cons a nil end. (** ** fundamental expansion theorem *) Theorem expand' (e: regex') A: vars e ≦ A -> e ≡ eps e + \sum_(a\in A) var a ⋅ deriv a e. Proof. induction e; simpl vars; intro HA; simpl deriv; fold_regex. + rewrite sup_b, cupxb; ra. + rewrite sup_b, cupxb; ra. + apply cup_spec in HA as [HA1 HA2]. simpl epsilon. setoid_rewrite orb_pls. setoid_rewrite dotxpls. rewrite supcup. rewrite IHe1 at 1 by assumption. rewrite IHe2 at 1 by assumption. simpl. fold_regex. ra. + apply cup_spec in HA as [HA1 HA2]. setoid_rewrite dotxpls. rewrite supcup. rewrite IHe1 at 1 by assumption. rewrite dotplsx. rewrite IHe2 at 1 by assumption. rewrite dotxpls. rewrite dotxsum, dotsumx. simpl epsilon. rewrite andb_dot. setoid_rewrite dot_ofboolx. setoid_rewrite dotA. ra. + specialize (IHe HA). clear HA. simpl epsilon. setoid_rewrite dotA. rewrite <-dotsumx. set (f := \sum_(i\in A) var i ⋅ deriv i e) in *. clearbody f. rewrite IHe. case epsilon; ra_normalise; rewrite ?str_pls1x; apply str_unfold_l. + setoid_rewrite cupbx. apply antisym. rewrite <- (leq_xsup _ _ a) by (apply HA; now left). rewrite eqb_refl. ra. apply leq_supx. intros b _. case eqb_spec; try intros <-; ra. Qed. Corollary expand e: e ≡ eps e + \sup_(a\in vars e) var a ⋅ deriv a e. Proof. apply expand'. reflexivity. Qed. (* not used currently ; can easily be proved directly *) Corollary deriv_cancel a e: var a ⋅ deriv a e ≦ e. Proof. rewrite (@expand' e ([a]++vars e)) at 2 by lattice. simpl. fold_regex. ra. Qed. (** ** monotonicity of [epsilon] *) (** trick to prove that epsilon is monotone: show that it's an evaluation into the boolean KA *) Lemma epsilon_eval e: epsilon e = eval (X:=bool_ops) (f':=fun _ => bool_tt) (fun _ => false) (to_expr e). Proof. induction e; simpl; trivial; now rewrite IHe1, IHe2. Qed. #[export] Instance epsilon_leq: Proper (leq ==> leq) epsilon. Proof. intros e f H. rewrite 2epsilon_eval. apply H. apply lower_laws. Qed. #[export] Instance epsilon_weq: Proper (weq ==> eq) epsilon := op_leq_weq_1. (** ** monotonicity of derivation *) (** we first need induction scheme for leq/weq *) Definition re_ind leq weq := mk_ops regex_unit (fun _ _ => lattice.mk_ops regex' leq weq r_pls r_pls id r_zer r_zer) (fun _ _ _ => r_dot) (fun _ => r_one) (fun _ => r_itr) (fun _ => r_str) (fun _ _ => assert_false r_str) (fun _ _ _ => assert_false r_dot) (fun _ _ _ => assert_false r_dot). Lemma re_ind_eval leq weq (e: regex'): eval (X:=re_ind leq weq) (f':=fun _ =>regex_tt) var (to_expr e) = e. Proof. induction e; simpl in *; reflexivity || congruence. Qed. Lemma leq_ind leq weq (L: laws BKA (re_ind leq weq)) (e f: regex'): e ≦ f -> leq e f. Proof. intro H. 
rewrite <-(re_ind_eval leq weq e), <-(re_ind_eval leq weq f). apply (H _ L). Qed. (** monotonicity of [deriv] *) #[export] Instance deriv_leq a: Proper (leq ==> leq) (deriv a). Proof. intros e f. apply (@leq_ind (fun e f => e ≦ f /\ deriv a e ≦ deriv a f) (fun e f => e ≡ f /\ deriv a e ≡ deriv a f)). constructor; simpl; (try discriminate); (repeat intros _); (repeat right); fold_regex; repeat intros _. constructor; simpl; (try discriminate); fold_regex; intros. - constructor. intro. split; reflexivity. intros ? ? ? [? ?] [? ?]; split; etransitivity; eassumption. - rewrite 2weq_spec. tauto. - rewrite 2cup_spec. tauto. - right. fold_regex. split; apply leq_bx. - split. apply dotA. rewrite andb_dot. ra. - split. apply dot1x. ra. - split. apply dotx1. ra. - intros x x' [Hx Hx'] y y' [Hy Hy']. split. now apply dot_leq. simpl; fold_regex. rewrite Hx', Hy', Hy. now rewrite Hx. - split. ra. rewrite orb_pls. ra. - split; ra. - split; ra. - split; ra. - split. apply str_refl. lattice. - split. apply str_cons. rewrite str_unfold_l. case epsilon; ra. - intros x z [H H']. split. now apply str_ind_l. apply str_ind_l in H. rewrite <-dotA, H, dot1x. hlattice. - intros x z [H H']. split. now apply str_ind_r. apply cup_spec in H' as [H1 H2]. rewrite dotA, H2. ra_normalise. now apply str_ind_r. - split. apply itr_str_l. reflexivity. Qed. #[export] Instance deriv_weq a: Proper (weq ==> weq) (deriv a) := op_leq_weq_1. #[export] Instance derivs_leq w: Proper (leq ==> leq) (derivs w). Proof. induction w; intros e f H. apply H. apply IHw, deriv_leq, H. Qed. #[export] Instance derivs_weq w: Proper (weq ==> weq) (derivs w) := op_leq_weq_1. (** ** deriving and expanding 01 regular expressions *) Lemma deriv_01 a e: is_01 e -> deriv a e ≡ 0. Proof. induction 1; simpl deriv; fold_regex. reflexivity. reflexivity. rewrite IHis_01_1, IHis_01_2. apply cupI. rewrite IHis_01_1, IHis_01_2. ra. rewrite IHis_01. apply dot0x. Qed. Lemma expand_01 e: is_01 e -> e ≡ eps e. Proof. intro H. rewrite (expand e) at 1. rewrite sup_b. apply cupxb. intros. rewrite deriv_01 by assumption. apply dotx0. Qed. (** ** `pure part' of a regular expression *) Fixpoint pure_part (e: regex'): regex' := match e with | r_pls e f => pure_part e + pure_part f | r_dot e f => eps e ⋅ pure_part f | r_str _ | r_one | r_zer => 0 | r_var _ => e end. Lemma is_pure_pure_part e: is_pure (pure_part e). Proof. induction e; simpl pure_part; constructor; trivial. apply is_01_ofbool. Qed. (** ** expanding simple regular expressions *) Lemma str_eps e: (eps e)^* ≡ 1. Proof. case epsilon. apply str1. apply str0. Qed. (* à dériver de [expand] ? *) Lemma expand_simple e: is_simple e -> e ≡ eps e + pure_part e. Proof. induction 1; simpl; fold_regex. lattice. lattice. lattice. rewrite orb_pls. rewrite IHis_simple1 at 1. rewrite IHis_simple2 at 1. lattice. rewrite andb_dot. rewrite (@expand_01 e) at 1 by assumption. rewrite IHis_simple at 1. ra. rewrite cupxb. rewrite (@expand_01 e) at 1 by assumption. apply str_eps. Qed. (** ** epsilon, derivatives, and expansion of pure regular expressions *) Lemma epsilon_pure e: is_pure e -> epsilon e = false. Proof. induction 1; trivial. simpl. now rewrite IHis_pure1. simpl. rewrite IHis_pure. now case epsilon. Qed. Lemma epsilon_deriv_pure a e: is_pure e -> eps (deriv a e) ≡ deriv a e. Proof. induction 1; simpl; fold_regex. reflexivity. rewrite <-expand_01. reflexivity. apply is_01_ofbool. rewrite orb_pls. now apply cup_weq. rewrite orb_pls, 2andb_dot, IHis_pure. rewrite deriv_01 by assumption. 
case (epsilon e); simpl; fold_regex; ra. Qed. Lemma expand_pure e A: is_pure e -> vars e ≦ A -> e ≡ \sum_(a \in A) var a ⋅ deriv a e . Proof. intros He HA. rewrite (expand' e HA) at 1. rewrite epsilon_pure by assumption. apply cupbx. Qed. (** additional properties *) Lemma deriv_sup a I J (f: I -> regex'): deriv a (\sup_(i\in J) f i) = \sup_(i\in J) deriv a (f i). Proof. apply f_sup_eq; now f_equal. Qed. Lemma epsilon_reflexive e: epsilon e -> 1 ≦ e. Proof. intro H. rewrite (expand e), H. lattice. Qed. (** * Language interpretation of regular expressions *) (** ** language of a regular expression, coalgebraically *) Definition lang e: lang' sigma := fun w => epsilon (derivs w e). #[export] Instance lang_leq: Proper (leq ==> leq) lang. Proof. intros e f H w. unfold lang. now rewrite H. Qed. #[export] Instance lang_weq: Proper (weq ==> weq) lang := op_leq_weq_1. (** (internal) characterisation of [epsilon] *) Lemma epsilon_iff_reflexive_eps (e: regex'): epsilon e <-> 1 ≦ eps e. Proof. case epsilon. intuition. intuition. discriminate. apply lang_leq in H. specialize (H [] eq_refl). discriminate. Qed. (** ** [lang] is a KA morphism *) (** to prove that [lang] is a morphism, we characterise it inductively, as the unique morphism such that [lang (e_var i)] is the language reduced to the letter [i], ([eq [i]]) *) Notation elang e := (eval (f':=fun _=>lang_tt) (fun i => eq [i]) (to_expr e)). (** language characterisation of [epsilon] *) Lemma epsilon_iff_lang_nil e: epsilon e <-> (elang e) []. Proof. induction e; simpl. firstorder discriminate. firstorder. setoid_rewrite Bool.orb_true_iff. now apply cup_weq. setoid_rewrite Bool.andb_true_iff. setoid_rewrite lang_dot_nil. now apply cap_weq. split. now exists O. reflexivity. firstorder discriminate. Qed. (** regular expression derivatives precisely correspond to language derivatives *) Lemma eval_deriv a e: elang (deriv a e) ≡ lang_deriv a (elang e). Proof. induction e; simpl deriv; simpl eval; fold_regex; fold_lang. - reflexivity. - now rewrite lang_deriv_1. - rewrite lang_deriv_pls. now apply cup_weq. - generalize (epsilon_iff_lang_nil e1). case epsilon; intro He1. setoid_rewrite dot1x. setoid_rewrite lang_deriv_dot_1. 2: now apply He1. now rewrite <- IHe1, <- IHe2. setoid_rewrite dot0x. setoid_rewrite lang_deriv_dot_2. 2: clear -He1; intuition discriminate. rewrite <- IHe1. apply cupxb. - rewrite lang_deriv_str. now rewrite <- IHe. - case eqb_spec. intros <- w. compute. split. now intros <-. now intro E; injection E. intros D w. compute. split. intros []. intro E. apply D. injection E. congruence. Qed. (** we deduce that both notions of language coincide: [lang] is the unique morphism from the coalgebra of regular expressions to the final coalgebra of languages *) Theorem lang_eval e: lang e ≡ elang e. Proof. unfold lang. intro w. revert e. induction w as [|a w IH]; intro e; simpl derivs. - apply epsilon_iff_lang_nil. - rewrite IH. apply eval_deriv. Qed. (** as a consequence, [lang] is a KA morphism *) Corollary lang_0: lang 0 ≡ bot. Proof. now rewrite lang_eval. Qed. Corollary lang_1: lang 1 ≡ 1. Proof. now rewrite lang_eval. Qed. Corollary lang_var i: lang (var i) ≡ eq [i]. Proof. now rewrite lang_eval. Qed. Corollary lang_pls e f: lang (e+f) ≡ lang e ⊔ lang f. Proof. now rewrite 3lang_eval. Qed. Corollary lang_sup I J (f: I -> _): lang (sup f J) ≡ \sup_(i\in J) lang (f i). Proof. apply f_sup_weq. apply lang_0. apply lang_pls. Qed. Corollary lang_dot e f: lang (e⋅f) ≡ lang e ⋅ lang f. Proof. now rewrite 3lang_eval. Qed. 
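(** a concrete sanity check of the coalgebraic definition (an illustrative
    example only, established by computing word derivatives): the two-letter
    word [1;2] belongs to the language of [var 1 ⋅ var 2] *)
Example lang_example:
  lang (var 1%positive ⋅ var 2%positive) (1%positive :: 2%positive :: nil).
Proof. now vm_compute. Qed.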
Corollary lang_itr e: lang (e^+) ≡ (lang e)^+. Proof. rewrite 2lang_eval. simpl (elang _). symmetry. apply itr_str_l. Qed. relation-algebra-v.1.7.9/theories/rel.v000066400000000000000000000140461440504774100200120ustar00rootroot00000000000000(** * rel: the main model of heterogeneous binary relations *) Require Bool. Require Export boolean prop. Require Import kat. Set Printing Universes. (** We fix a type universe U and show that heterogeneous relations between types in this universe form a kleene algebra. *) Universe U. Definition hrel (n m: Type@{U}) := n -> m -> Prop. Identity Coercion fun_of_hrel : hrel >-> Funclass. (** * Relations as a (bounded, distributive) lattice *) (** lattice operations and laws are obtained for free, by two successive pointwise liftings of the [Prop] lattice *) Canonical Structure hrel_lattice_ops n m := lattice.mk_ops (hrel n m) leq weq cup cap neg bot top. Global Instance hrel_lattice_laws n m: lattice.laws (BDL+STR+CNV+DIV) (hrel_lattice_ops n m) := pw_laws _. (** * Relations as a residuated Kleene allegory *) Section RepOps. Implicit Types n m p : Type@{U}. (** relational composition *) Definition hrel_dot n m p (x: hrel n m) (y: hrel m p): hrel n p := fun i j => exists2 k, x i k & y k j. (** converse (or transpose) *) Definition hrel_cnv n m (x: hrel n m): hrel m n := fun i j => x j i. (** left / right divisions *) Definition hrel_ldv n m p (x: hrel n m) (y: hrel n p): hrel m p := fun i j => forall k, x k i -> y k j. Definition hrel_rdv n m p (x: hrel m n) (y: hrel p n): hrel p m := fun j i => forall k, x i k -> y j k. Section i. Variable n: Type@{U}. Variable x: hrel n n. (** finite iterations of a relation *) Fixpoint iter u := match u with O => @eq _ | S u => hrel_dot _ _ _ x (iter u) end. (** Kleene star (reflexive transitive closure) *) Definition hrel_str: hrel n n := fun i j => exists u, iter u i j. (** strict iteration (transitive closure) *) Definition hrel_itr: hrel n n := hrel_dot n n n x hrel_str. End i. End RepOps. (** packing all operations into a monoid; note that the unit on [n] is just the equality on [n], i.e., the identity relation on [n] *) (** We need to eta-expand @eq here. This generates the universe constraint [U <= Coq.Init.Logic.8] (where the latter is the universe of the type argument to [eq]). Without the eta-expansion, the definition would yield the constraint [U = Coq.Init.Logig.8], which is too strong and leads to universe inconsistencies later on. *) Canonical Structure hrel_monoid_ops := monoid.mk_ops Type@{U} hrel_lattice_ops hrel_dot (fun n => @eq n) hrel_itr hrel_str hrel_cnv hrel_ldv hrel_rdv. (** binary relations form a residuated Kleene allegory *) #[export] Instance hrel_monoid_laws: monoid.laws (BDL+STR+CNV+DIV) hrel_monoid_ops. Proof. assert (dot_leq: forall n m p : Type@{U}, Proper (leq ==> leq ==> leq) (hrel_dot n m p)). intros n m p x y H x' y' H' i k [j Hij Hjk]. exists j. apply H, Hij. apply H', Hjk. constructor; (try now left); intros. apply hrel_lattice_laws. intros i j. firstorder. intros i j. firstorder congruence. intros i j. firstorder. intros i j. reflexivity. intros x y E i j. apply E. intros i j E. exists O. exact E. intros i k [j Hij [u Hjk]]. exists (S u). firstorder. assert (E: forall i, (iter n x i: hrel n n) ⋅ z ≦ z). induction i. simpl. firstorder now subst. rewrite <-H0 at 2. transitivity (x⋅((iter n x i: hrel n n)⋅z)). simpl. firstorder congruence. now apply dot_leq. intros i j [? [? ?] ?]. eapply E. repeat eexists; eauto. reflexivity. intros i k [[j Hij Hjk] Hik]. exists j; trivial. 
split; firstorder. split. intros E i j [k Hik Hkj]. apply E in Hkj. now apply Hkj. intros E i j Hij k Hki. apply E. firstorder. split. intros E i j [k Hik Hkj]. apply E in Hik. now apply Hik. intros E i j Hij k Hki. apply E. firstorder. Qed. (** * Relations as a Kleene algebra with tests *) (** "decidable" sets or predicates: Boolean functions Similar to [hrel_monoid_ops] we need to eta-expand the definition of dset to avoid forcing [U = pw] and obtain [U <= pw] instead *) Definition dset: ob hrel_monoid_ops -> lattice.ops := fun Y => pw_ops bool_lattice_ops Y. (** injection of decidable predicates into relations, as sub-identities *) Definition hrel_inj n (x: dset n): hrel n n := fun i j => i=j /\ x i. (** packing relations and decidable sets as a Kleene algebra with tests *) Canonical Structure hrel_kat_ops := kat.mk_ops hrel_monoid_ops dset hrel_inj. (** We need to impose the constraint [U < pw] before proving this lemma since otherwise we have [U = pw] afterwards. This leads to a universe inconsistency when trying load ugregex_dec, kat_completeness (as exported by kat_tac) and rel at the same time. *) Constraint U < pw. #[export] Instance hrel_kat_laws: kat.laws hrel_kat_ops. Proof. constructor. apply lower_laws. intro. apply (pw_laws (H:=lower_lattice_laws)). assert (inj_leq: forall n, Proper (leq ==> leq) (@hrel_inj n)). intros n e f H i j [E H']. split. assumption. revert H'. apply mm_bool_Prop, H. constructor; try discriminate. apply inj_leq. apply op_leq_weq_1. intros _ x y i j. split. intros [E H']. setoid_rewrite Bool.orb_true_iff in H'. destruct H'; [left|right]; split; assumption. intros [[E H']|[E H']]; split; trivial; setoid_rewrite Bool.orb_true_iff; now auto. intros _ i j. compute. intuition discriminate. intros ? i j. compute. tauto. intros ? p q i j. split. intros [E H']. setoid_rewrite Bool.andb_true_iff in H'. exists i; split; tauto. intros [k [ik Hi] [kj Hk]]. subst. split; trivial. setoid_rewrite Bool.andb_true_iff; now split. Qed. (** * Functional relations *) Definition frel {A B: Set} (f: A -> B): hrel A B := fun x y => y = f x. Lemma frel_comp {A B C: Set} (f: A -> B) (g: B -> C): frel f ⋅ frel g ≡ frel (fun x => g (f x)). Proof. apply antisym. intros x z [y -> ->]. reflexivity. simpl. intros x z ->. eexists; reflexivity. Qed. #[export] Instance frel_weq {A B}: Proper (pwr eq ==> weq) (@frel A B). Proof. unfold frel; split; intros ->; simpl. apply H. apply eq_sym, H. Qed. Ltac fold_hrel := ra_fold hrel_monoid_ops. Tactic Notation "fold_hrel" "in" hyp_list(H) := ra_fold hrel_monoid_ops in H. Tactic Notation "fold_hrel" "in" "*" := ra_fold hrel_monoid_ops in *. relation-algebra-v.1.7.9/theories/relalg.v000066400000000000000000000455411440504774100205020ustar00rootroot00000000000000(** * relalg: standard relation algebra facts and definitions *) Require Export common lattice monoid kleene normalisation rewriting. Set Implicit Arguments. Unset Strict Implicit. Unset Printing Implicit Defensive. (** generic lemmas *) Lemma dottx `{laws} `{TOP ≪ l} n m (x: X n m): x ≦ top⋅x. Proof. rewrite <-dot1x at 1. apply dot_leq; lattice. Qed. Lemma dotxt `{laws} `{TOP ≪ l} n m (x: X m n): x ≦ x⋅top. Proof. dual @dottx. Qed. Lemma top_nnm `{laws} `{TOP ≪ l} n m: top' n n ⋅ top' n m ≡ top' n m. Proof. apply leq_tx_iff. apply dottx. Qed. Lemma top_mnn `{laws} `{TOP ≪ l} n m: top' m n ⋅ top' n n ≡ top' m n. Proof. dual @top_nnm. Qed. Lemma disjoint_id `{laws} `{AL+BOT ≪ l} n m (p q: X n m): p ⊓ q ≦ 0 -> 1 ⊓ (p⋅q°) ≡ 0. Proof. intro Hpq. apply leq_xb_iff. rewrite capC, capxdot. 
ra_normalise. rewrite Hpq. ra. Qed. Lemma dedekind `{laws} `{AL ≪ l} n m p (x : X n m) (y : X m p) (z : X n p): x⋅y ⊓ z ≦ (x ⊓ (z⋅y°))⋅(y ⊓ (x°⋅z)). Proof. rewrite <-(capI z) at 1. rewrite capA, capdotx, capxdot. ra. Qed. (** algebraic properties of relations we use typeclasses to infer those properties automatically whenever possible typically, [rewrite transitive] will rewrite the first occurrence of a pattern [x⋅x] such that [x] is provably transitive. *) Ltac tc := solve [eauto with typeclass_instances]. Class is_nonempty {X: ops} n m (x: X n m) := nonempty: forall p q, top' p q ≦ top ⋅ x ⋅ top. Notation is_nonempty' m := (is_nonempty (one m)). Lemma nonempty_dom `{laws} `{TOP ≪ l} n m {x: X n m} {Hx: is_nonempty x}: is_nonempty' n. Proof. intros i j. rewrite nonempty. mrewrite (leq_xt (x⋅top' _ j)). ra. Qed. Lemma nonempty_cod `{laws} `{TOP ≪ l} n m {x: X n m} {Hx: is_nonempty x}: is_nonempty' m. Proof. intros i j. rewrite nonempty. rewrite (leq_xt (top' i _⋅x)). ra. Qed. Section props. Context {l: level} {X: ops}. Class is_reflexive n (x: X n n) := reflexive: 1 ≦ x. Class is_irreflexive n (x: X n n) := irreflexive: x ⊓ 1 ≡ 0. Class is_transitive n (x: X n n) := transitive: x ⋅ x ≦ x. Class is_linear n (x: X n n) := linear: x ⊔ x° ≡ top. Class is_symmetric n (x: X n n) := symmetric_: x° ≦ x. (* see below for [symmetric] *) Class is_antisymmetric n (x: X n n) := antisymmetric: x° ⊓ x ≦ 1. Class is_univalent n m (x: X n m) := univalent: x° ⋅ x ≦ 1. Class is_injective n m (x: X n m) := injective: x ⋅ x° ≦ 1. Class is_surjective n m (x: X n m) := surjective: 1 ≦ x° ⋅ x. Class is_total n m (x: X n m) := total: 1 ≦ x ⋅ x°. Class is_vector n m (v: X n m) := vector: v⋅top ≡ v. Class is_point n m (p: X n m) := { point_vector:> is_vector p; point_injective:> is_injective p; point_nonempty:> is_nonempty p}. Class is_atom n m (a: X n m) := { a_top_a': a⋅top⋅a° ≦ 1; a'_top_a: a°⋅top⋅a ≦ 1; atom_nonempty:> is_nonempty a}. Class is_mapping n m (f: X n m) := { mapping_univalent:> is_univalent f; mapping_total:> is_total f}. Class is_per n (e: X n n) := { per_symmetric:> is_symmetric e; per_transitive:> is_transitive e}. Class is_preorder n (p: X n n) := { pre_reflexive:> is_reflexive p; pre_transitive:> is_transitive p}. Class is_order n (p: X n n) := { ord_preorder:> is_preorder p; ord_antisymmetric:> is_antisymmetric p}. Context {L: laws l X}. (** all properties are compatible with equality *) Global Instance is_reflexive_weq {n}: Proper (weq ==> iff) (@is_reflexive n). Proof. intros ? ? E. unfold is_reflexive. now rewrite E. Qed. Global Instance is_irreflexive_weq {n} `{CAP ≪ l}: Proper (weq ==> iff) (@is_irreflexive n). Proof. intros ? ? E. unfold is_irreflexive. now rewrite E. Qed. Global Instance is_transitive_weq {n}: Proper (weq ==> iff) (@is_transitive n). Proof. intros ? ? E. unfold is_transitive. now rewrite E. Qed. Global Instance is_linear_weq `{CUP+CNV ≪ l} {n}: Proper (weq ==> iff) (@is_linear n). Proof. intros ? ? E. unfold is_linear. now rewrite E. Qed. Global Instance is_symmetric_weq `{CNV ≪ l} {n}: Proper (weq ==> iff) (@is_symmetric n). Proof. intros ? ? E. unfold is_symmetric. now rewrite E. Qed. Global Instance is_antisymmetric_weq `{AL ≪ l} {n}: Proper (weq ==> iff) (@is_antisymmetric n). Proof. intros ? ? E. unfold is_antisymmetric. now rewrite E. Qed. Global Instance is_univalent_weq `{CNV ≪ l} {n m}: Proper (weq ==> iff) (@is_univalent n m). Proof. intros ? ? E. unfold is_univalent. now rewrite E. Qed. 
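(* Hedged usage sketch for the typeclass-driven rewriting pattern described
   at the top of this file; it is kept inside a comment so that it is not
   part of the compiled development, and the goal below is an illustration
   of ours, not a lemma of the library.

   Goal forall n (x y: X n n) (Hx: is_transitive x), x⋅x⋅y ≦ x⋅y.
   Proof.
     intros.
     (* [rewrite transitive] finds the [is_transitive x] instance by
        typeclass search and rewrites the first occurrence of [x⋅x] *)
     rewrite transitive. reflexivity.
   Qed. *)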
Global Instance is_injective_weq `{CNV ≪ l} {n m}: Proper (weq ==> iff) (@is_injective n m). Proof. intros ? ? E. unfold is_injective. now rewrite E. Qed. Global Instance is_surjective_weq `{CNV ≪ l} {n m}: Proper (weq ==> iff) (@is_surjective n m). Proof. intros ? ? E. unfold is_surjective. now rewrite E. Qed. Global Instance is_total_weq `{CNV ≪ l} {n m}: Proper (weq ==> iff) (@is_total n m). Proof. intros ? ? E. unfold is_total. now rewrite E. Qed. Global Instance is_vector_weq {n m}: Proper (weq ==> iff) (@is_vector n m). Proof. intros ? ? E. unfold is_vector. now rewrite E. Qed. Global Instance is_nonempty_weq {n m}: Proper (weq ==> iff) (@is_nonempty X n m). Proof. intros ? ? E. unfold is_nonempty. now setoid_rewrite E. Qed. Lemma proper_weq_leq_iff n m (P: X n m -> Prop): Proper (weq ==> impl) P -> Proper (weq ==> iff) P. Proof. intros HP ? ? E. split; now apply HP. Qed. Global Instance is_point_weq `{CNV ≪ l} {n m}: Proper (weq ==> iff) (@is_point n m). Proof. apply proper_weq_leq_iff. intros ? ? E [? ? ?]. split; now rewrite <-E. Qed. Global Instance is_atom_weq `{CNV ≪ l} {n m}: Proper (weq ==> iff) (@is_atom n m). Proof. apply proper_weq_leq_iff. intros ? ? E [? ? ?]. split; now rewrite <-E. Qed. Global Instance is_mapping_weq `{CNV ≪ l} {n m}: Proper (weq ==> iff) (@is_mapping n m). Proof. apply proper_weq_leq_iff. intros ? ? E [? ?]. split; now rewrite <-E. Qed. Global Instance is_per_weq `{CNV ≪ l} {n}: Proper (weq ==> iff) (@is_per n). Proof. apply proper_weq_leq_iff. intros ? ? E [? ?]. split; now rewrite <-E. Qed. Global Instance is_preorder_weq {n}: Proper (weq ==> iff) (@is_preorder n). Proof. apply proper_weq_leq_iff. intros ? ? E [? ?]. split; now rewrite <-E. Qed. Global Instance is_order_weq `{AL ≪ l} {n}: Proper (weq ==> iff) (@is_order n). Proof. apply proper_weq_leq_iff. intros ? ? E [? ?]. split; now rewrite <-E. Qed. (** alternative characterisation of [is_surjective] *) Lemma surjective_tx `{TOP ≪ l} {n m} {x: X n m} {Hx: is_surjective x} p: top' p _ ⋅ x ≡ top. Proof. apply leq_tx_iff. transitivity (top' p _ ⋅ (x° ⋅ x)). rewrite <-surjective. ra. rewrite dotA. apply dot_leq; lattice. Qed. Lemma tx_surjective `{AL+TOP ≪ l} n m (x: X m n): top' n n ≦ top⋅x -> is_surjective x. Proof. intro E. unfold is_surjective. transitivity (1 ∩ (top' n m ⋅ x)). rewrite <-E. lattice. rewrite capC, capxdot. ra. Qed. (** basic properties *) Lemma symmetric `{CNV ≪ l} {n} {x: X n n} {Hx: is_symmetric x}: x° ≡ x. Proof. apply antisym. assumption. now cnv_switch. Qed. Lemma irreflexive' `{BL ≪ l} {n} {x: X n n} {Hx: is_irreflexive x}: x ≦ !1. Proof. now rewrite <-leq_cap_neg', Hx. Qed. Lemma vector' `{TOP ≪ l} {n m} {v: X n m} {Hv: is_vector v} x: v ⋅ x ≦ v. Proof. rewrite <-vector at 2. ra. Qed. Lemma top_nonempty `{TOP ≪ l} {n m p} {Hm: is_nonempty' m}: top' n m ⋅ top' m p ≡ top. Proof. apply leq_tx_iff. rewrite nonempty. ra. Qed. (** instances for proof search *) Global Instance point_surjective `{AL+TOP ≪ l} {n m} {p: X n m} {Hp: is_point p}: is_surjective p | 1. Proof. apply tx_surjective. rewrite nonempty at 1. now mrewrite vector. Qed. Global Instance atom_injective `{TOP ≪ l} {n m} {a: X n m} {Ha: is_atom a}: is_injective a. Proof. unfold is_injective. rewrite <-a_top_a'. rewrite <-(leq_xt 1). ra. Qed. Global Instance atom_univalent `{TOP ≪ l} {n m} {a: X n m} {Ha: is_atom a}: is_univalent a. Proof. unfold is_univalent. rewrite <-a'_top_a. rewrite <-(leq_xt 1). ra. Qed. Global Instance is_symmetric_neg1 `{BL+CNV ≪ l} {n}: is_symmetric (!one n). Proof. unfold is_symmetric. 
rewrite <-dotx1. apply Schroeder_. rewrite negneg. ra. Qed. Global Instance irreflexive_cnv `{AL+BOT ≪ l} {n} {x: X n n} {Hx: is_irreflexive x}: is_irreflexive (x°). Proof. unfold is_irreflexive. cnv_switch. now ra_normalise. Qed. Global Instance reflexive_cnv `{CNV ≪ l} {n} {x: X n n} {Hx: is_reflexive x}: is_reflexive (x°). Proof. unfold is_reflexive. cnv_switch. now ra_normalise. Qed. Global Instance transitive_cnv `{CNV ≪ l} {n} {x: X n n} {Hx: is_transitive x}: is_transitive (x°). Proof. unfold is_transitive. cnv_switch. now ra_normalise. Qed. Global Instance symmetric_cnv `{CNV ≪ l} {n} {x: X n n} {Hx: is_symmetric x}: is_symmetric (x°). Proof. unfold is_symmetric. now cnv_switch. Qed. Global Instance antisymmetric_cnv `{AL ≪ l} {n} {x: X n n} {Hx: is_antisymmetric x}: is_antisymmetric (x°). Proof. unfold is_antisymmetric. now rewrite cnv_invol, capC. Qed. Global Instance injective_cnv `{CNV ≪ l} {n m} {x: X n m} {Hx: is_univalent x}: is_injective (x°). Proof. unfold is_injective. now rewrite cnv_invol. Qed. Global Instance univalent_cnv `{CNV ≪ l} {n m} {x: X n m} {Hx: is_injective x}: is_univalent (x°). Proof. unfold is_univalent. now rewrite cnv_invol. Qed. Global Instance surjective_cnv `{CNV ≪ l} {n m} {x: X n m} {Hx: is_total x}: is_surjective (x°). Proof. unfold is_surjective. now rewrite cnv_invol. Qed. Global Instance total_cnv `{CNV ≪ l} {n m} {x: X n m} {Hx: is_surjective x}: is_total (x°). Proof. unfold is_total. now rewrite cnv_invol. Qed. Global Instance preorder_cnv `{CNV ≪ l} {n} {x: X n n} {Hx: is_preorder x}: is_preorder (x°). Proof. constructor; tc. Qed. Global Instance nonempty_cnv `{CNV+TOP ≪ l} {n m} {x: X n m} {Hx: is_nonempty x}: is_nonempty (x°). Proof. intros i j. cnv_switch. ra_normalise. apply Hx. Qed. Global Instance atom_cnv `{CNV+TOP ≪ l} {n m} {x: X n m} {Hx: is_atom x}: is_atom (x°). Proof. split. rewrite cnv_invol. apply a'_top_a. rewrite cnv_invol. apply a_top_a'. apply nonempty_cnv. Qed. Global Instance mapping_cnv `{AL+TOP ≪ l} {n m} {x: X n m} {Hx: is_point x}: is_mapping (x°). Proof. split; tc. Qed. (* actually just need x to be injective and surjective *) Global Instance order_cnv `{AL ≪ l} {n} {x: X n n} {Hx: is_order x}: is_order (x°). Proof. constructor; tc. Qed. Global Instance vector_cap `{CAP+TOP ≪ l} {n m} {v w: X n m} {Hv: is_vector v} {Hw: is_vector w}: is_vector (v ⊓ w). Proof. unfold is_vector. apply antisym. 2: apply dotxt. now rewrite dotcapx, 2vector. Qed. Global Instance preorder_str `{STR ≪ l} n (x: X n n): is_preorder (x^*). Proof. split. apply str_refl. apply weq_leq, str_trans. Qed. Global Instance symmetric_str `{STR+CNV ≪ l} {n} {x: X n n} {Hx: is_symmetric x}: is_symmetric (x^*). Proof. unfold is_symmetric. now rewrite cnvstr, symmetric. Qed. Global Instance reflexive_itr `{STR ≪ l} {n} {x: X n n} {Hx: is_reflexive x}: is_reflexive (x^+). Proof. unfold is_reflexive. rewrite reflexive. apply itr_ext. Qed. Global Instance transitive_itr `{STR ≪ l} n (x: X n n): is_transitive (x^+). Proof. apply itr_trans. Qed. Global Instance symmetric_itr `{STR+CNV ≪ l} {n} {x: X n n} {Hx: is_symmetric x}: is_symmetric (x^+). Proof. unfold is_symmetric. now rewrite cnvitr, symmetric. Qed. (** lemmas about relations of a specific shape *) Lemma itr_transitive `{STR ≪ l} n (x: X n n): is_transitive x -> x^+ ≡ x. Proof. intro. apply antisym. now apply itr_ind_l1. apply itr_ext. Qed. Lemma str_transitive `{KA ≪ l} n (x: X n n): is_transitive x -> x^* ≡ 1+x. Proof. intro. now rewrite str_itr, itr_transitive. Qed. 
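(* Hedged example of how the two collapsing lemmas above are typically
   used (ours, kept as a comment and thus not compiled):

   Goal forall `{STR ≪ l} n (x: X n n), is_transitive x -> x^+ ⋅ x ≦ x.
   Proof.
     intros.
     rewrite itr_transitive by assumption.  (* [x^+] collapses to [x] *)
     apply transitive.
   Qed. *)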
Lemma dot_mono `{AL ≪ l} n (x y: X n n): x ≦ 1 -> y ≦ 1 -> x⋅y ≡ x ⊓ y. Proof. intros Hx Hy. apply antisym. apply leq_xcap. rewrite Hy; ra. rewrite Hx; ra. transitivity (x⋅1 ⊓ y). ra. rewrite capdotx. rewrite Hx at 2. ra_normalise. apply dot_leq; lattice. Qed. Lemma kernel_refl_antisym `{laws} `{CAP+CNV ≪ l} {n} {x: X n n} {Hr: is_reflexive x} {Ha: is_antisymmetric x}: x° ⊓ x ≡ 1. Proof. apply antisym. assumption. apply cap_spec; split; trivial. now rewrite <-reflexive. Qed. Lemma dot_univalent_cap `{AL ≪ l} {n m p} {x: X n m} {y z: X m p} {E: is_univalent x}: x ⋅ (y ∩ z) ≡ (x⋅y) ∩ (x⋅z). Proof. apply antisym. ra. rewrite capdotx. mrewrite univalent. ra. Qed. Lemma univalent_antisym `{AL+TOP ≪ l} {n m} {x y: X n m} {Hy: is_univalent y}: y⋅top' m m ≦ x⋅top -> x ≦ y -> x ≡ y. Proof. intros Ht Hxy. apply antisym. assumption. transitivity (y ⊓ (x⋅(top' m m))). rewrite <- Ht, <- dotxt. lattice. rewrite capC, capdotx. ra_normalise. rewrite Hxy at 2. mrewrite univalent. ra. Qed. Lemma surjective_injective_antisym `{AL+TOP ≪ l} {n m} {p q: X n m} {Hp: is_surjective p} {Hq: is_injective q}: p ≦ q -> p ≡ q. Proof. intros Hpq. cnv_switch. apply univalent_antisym. cnv_switch; ra_normalise. rewrite surjective_tx. lattice. now cnv_switch. Qed. Lemma disjoint_vect_iff `{BL+CNV ≪ l} {n m} {p q: X n m} {Hq: is_vector q}: p ⊓ q ≦ 0 <-> q°⋅p ≦ 0. Proof. rewrite Schroeder_l, cnv_invol, negbot. rewrite vector, capC. apply leq_cap_neg'. Qed. Lemma disjoint_vect_iff' `{AL+DIV+BOT+TOP ≪ l} {n m} {p q: X n m} {Hq: is_vector q}: p ⊓ q ≦ 0 <-> q°⋅p ≦ 0. Proof. split; intro Hpq. rewrite <-capxt, capdotx, cnv_invol, vector, Hpq. ra. rewrite <-ldv_spec in Hpq. rewrite capC, Hpq. rewrite <-vector at 1. rewrite capdotx. rewrite ldv_cancel. ra. Qed. Lemma gen_point {Hl: CNV+TOP ≪ l} n m k (p: X n m): is_nonempty' k -> is_point p -> is_point (p⋅top' m k). Proof. intros Hk Hp. split. unfold is_vector. now mrewrite top_mnn. unfold is_injective. ra_normalise. mrewrite (leq_xt (top' m k ⋅ top' k m)). mrewrite vector. apply injective. intros i j. mrewrite (top_nonempty (n:=m) (m:=k) (p:=j)). apply nonempty. Qed. Lemma leq_xyp `{AL+TOP ≪ l} {n m k} {p: X m k} {x: X n k} {y: X n m} {Hp: is_point p}: x ≦ y⋅p <-> x⋅p° ≦ y. Proof. split; intro E. rewrite <-(dotx1 y). rewrite <-injective. now mrewrite <-E. rewrite <-(dotx1 x). rewrite surjective. now mrewrite E. Qed. Lemma leq_pxq `{AL+TOP ≪ l} {n m k} {p: X n k} {q: X m k} {x: X n m} {Hp: is_point p} {Hq: is_point q}: p ≦ x⋅q <-> q ≦x°⋅p. Proof. rewrite 2leq_xyp. now rewrite cnv_leq_iff', cnvdot, cnv_invol. Qed. Lemma point_lattice_atom {Hl: AL+TOP ≪ l} {n m} {p v: X n m} {Hp: is_point p} {Hv: is_vector v}: is_nonempty v -> v ≦ p -> v ≡ p. Proof. intros Hv' Hvp. apply antisym. assumption. assert (is_nonempty (p ∩ v)). rewrite leq_iff_cap in Hvp. now rewrite capC, Hvp. apply leq_iff_cap. cnv_switch. ra_normalise. apply univalent_antisym. 2: lattice. cnv_switch. ra_normalise. rewrite (leq_xt (top⋅p)). rewrite nonempty. now rewrite <-dotA, vector. Qed. Lemma dot_neg_inj {Hl: BL+CNV ≪ l} {n m p} {x: X n m} {y: X m p} {Hy: is_injective y}: !x ⋅ y ≦ !(x⋅y). Proof. rewrite Schroeder_r, 2negneg. mrewrite injective. ra. Qed. Lemma dot_neg_surj {Hl: BL+CNV ≪ l} {n m p} {x: X n m} {y: X m p} {Hy: is_surjective y}: !(x⋅y) ≦ !x ⋅ y. Proof. rewrite leq_cap_neg. rewrite <-negcup, <-dotplsx, cupneg. now rewrite surjective_tx, negtop. Qed. Lemma dot_neg_point {Hl: BL+CNV ≪ l} {n m k} {x: X n m} {p: X m k} {Hp: is_point p}: !x ⋅ p ≡ !(x⋅p). Proof. apply antisym. apply dot_neg_inj. apply dot_neg_surj. 
Qed. Lemma disjoint_vect_ext {Hl: BL+CNV ≪ l} {n m k} {x y: X n m} {x' y': X m k} {Hx: is_vector x}: x ⊓ y ≦ 0 -> (x ⋅ x') ⊓ (y ⋅ y') ≦ 0. Proof. rewrite capC. intro H. apply disjoint_vect_iff in H. rewrite capdotx. mrewrite H. ra. Qed. Lemma atom_of_points_aux `{AL+TOP ≪ l} {n m k} {p: X n m} {q: X k m} {Hp: is_point p} {Hq: is_point q}: p ⋅ q° ⋅ top ⋅ q ⋅ p° ≦ 1. Proof. mrewrite surjective_tx. transitivity (p⋅(top⋅q)°⋅p°). ra. mrewrite surjective_tx. ra_normalise. rewrite vector. now apply injective. Qed. Lemma atom_of_points `{AL+TOP ≪ l} {n m k} {p: X n m} {q: X k m} {Hp: is_point p} {Hq: is_point q}: is_atom (p⋅q°). Proof. split. ra_normalise. apply atom_of_points_aux. ra_normalise. apply atom_of_points_aux. intros i j. mrewrite (surjective_tx (x:=p)). now apply nonempty_cnv. Qed. Lemma point_a_top `{CNV+TOP ≪ l} {n m} {a: X n m} {Ha: is_atom a}: is_point (a⋅top' m m). Proof. split. unfold is_vector. now mrewrite top_nnm. unfold is_injective. ra_normalise. mrewrite top_nnm. apply a_top_a'. intros i j. mrewrite top_nnm. apply atom_nonempty. Qed. Lemma point_a'_top `{CNV+TOP ≪ l} {n m} {a: X n m} {Ha: is_atom a}: is_point (a°⋅top' n n). Proof. apply point_a_top. Qed. Lemma a_top_a_aux `{AL+TOP ≪ l} {n m} {a: X n m} {Ha: is_atom a}: (a ⋅ top) ∩ (top ⋅ a) ≡ a. Proof. apply antisym. rewrite capdotx. mrewrite a'_top_a. ra. rewrite <-2(leq_xt 1). ra. Qed. Lemma a_top_a `{AL+TOP ≪ l} {n m} {a: X n m} {Ha: is_atom a}: a ⋅ top ⋅ a ≡ a. Proof. rewrite <-a_top_a_aux at 3. apply antisym. rewrite <-(leq_xt (top⋅a)), <-(leq_xt (a⋅top)). ra. rewrite capdotx. mrewrite <-(leq_xt (a°⋅top' n n)). ra. Qed. Global Instance atom_transitive `{AL+TOP ≪ l} {n} {a: X n n} {Ha: is_atom a}: is_transitive a. Proof. unfold is_transitive. rewrite <-a_top_a at 3. rewrite <-(leq_xt 1). ra. Qed. (* TOTHINK: transitivity should follow from mono *) Lemma atom_mono `{AL+TOP ≪ l} {n} {a: X n n} {Ha: is_atom a}: a⋅a ≦ 1. Proof. transitivity (a⋅a ⊓ a). apply leq_xcap. reflexivity. apply atom_transitive. rewrite dedekind. transitivity ((a⋅a°)⋅(a°⋅a)). apply dot_leq; lattice. mrewrite (injective (x:=a)). mrewrite (univalent (x:=a)). ra. Qed. Lemma atom_points `{AL+TOP ≪ l} {n m k} {a: X n m} {Ha: is_atom a} {Hk: is_nonempty' k}: exists p q: X _ k, is_point p /\ is_point q /\ a ≡ p⋅q°. Proof. exists (a⋅top). exists (a°⋅top). split. rewrite <-top_nnm, dotA. apply gen_point. assumption. apply point_a_top. split. rewrite <-top_nnm, dotA. apply gen_point. assumption. apply point_a'_top. ra_normalise. mrewrite (top_nonempty (n:=m) (p:=n)). now rewrite a_top_a. Qed. Lemma atom_lattice_atom {Hl: AL+TOP ≪ l} {n m} {a x: X n m} {Ha: is_atom a}: is_nonempty x -> x ≦ a -> x ≡ a. Proof. intros Hx Hax. pose proof point_a_top as Hat. assert(Hxt: is_vector (x ⋅ top' m m)). unfold is_vector. now mrewrite top_nnm. apply univalent_antisym. apply weq_geq. apply point_lattice_atom. intros i j. mrewrite top_nnm. apply nonempty. now rewrite Hax. assumption. Qed. End props. (** lemmas obtained by duality *) Lemma total_xt `{laws} `{TOP ≪ l} {n m} {x: X n m} {Hx: is_total x} p: x⋅top' _ p ≡ top. Proof. now dual @surjective_tx. Qed. Lemma xt_total `{laws} `{AL+TOP ≪ l} n m (x: X n m): top' n n ≦ x⋅top -> is_total x. Proof. now dual @tx_surjective. Qed. Lemma dot_cap_injective `{laws} `{AL ≪ l} {n m p} {x: X m n} {y z: X p m} {Hx: is_injective x}: (y ∩ z) ⋅ x ≡ (y⋅x) ∩ (z⋅x). Proof. revert Hx. dual @dot_univalent_cap. Qed. 
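(* Closing usage sketch (ours, commented out): the duality lemmas above
   combine with the instances of this file, e.g. a mapping sends [top] to
   [top] because totality is inferred from [is_mapping] by proof search.

   Goal forall `{laws} `{TOP ≪ l} n m (f: X n m),
     is_mapping f -> f ⋅ top' m m ≡ top.
   Proof. intros. apply total_xt. Qed. *)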
relation-algebra-v.1.7.9/theories/rewriting.v000066400000000000000000000106361440504774100212430ustar00rootroot00000000000000(** * rewriting: additional rewriting support *) Require Import monoid. (** * rewriting modulo associativity of [dot] *) (** We notice that to rewrite modulo A, it suffices to normalise associativity, and to use an extended lemma: for instance, if one wants to rewrite using a closed hypothesis [H: a_1⋅...⋅a_n ≡ c] in a goal including a subterm like [d⋅e*a_1⋅...⋅a_n⋅f], then one can simply rewrite using [(ext_weq_n H): forall x, x⋅a_1⋅...⋅a_n ≡ x⋅c] where ext_weq_n is the appropriate lemma (see rewriting.v). Such a lemma could be generated by hand, but it's a bit heavy, so that we simply hardwire it for n=2,3,4 This trick generalises to "open" equations, like [H: forall x y, P x y -> forall z, y⋅(x+z)⋅y ≡ y] where one wants to rewrite using [(fun x y Hxy z => ext_weq_3 (H x y Hxy z))] The ML plugin [mrewrite] generates such abstractions in the appropriate way, taking care efficiently of the order in which one wants to rewrite, and whether we have an equation or an inequation. (Doing so in Ltac is both painful and inefficient.) Of course the method is incomplete (e.g., if [y] has to be instantiated by a product), but it seems enough for most common situations. The advantage over using the AAC_tactics library is that it's much faster since reification is "syntactic", and that it works for typed structures and heterogeneous terms, which are not supported in AAC_tactics. *) Lemma ext_leq_2 `{laws} {n m p} (x: X n m) (y: X m p) v: x⋅y ≦ v -> forall o (u: X o n), u⋅x⋅y ≦ u⋅v. Proof. intros E ? ?. now rewrite <-E, !dotA. Qed. Lemma ext_leq_3 `{laws} {n m p q} (x: X n m) (y: X m p) (z: X p q) v: x⋅y⋅z ≦ v -> forall o (u: X o n), u⋅x⋅y⋅z ≦ u⋅v. Proof. intros E ? ?. now rewrite <-E, !dotA. Qed. Lemma ext_leq_4 `{laws} {n m p q r} (x: X n m) (y: X m p) (z: X p q) (t: X q r) v: x⋅y⋅z⋅t ≦ v -> forall o (u: X o n), u⋅x⋅y⋅z⋅t ≦ u⋅v. Proof. intros E ? ?. now rewrite <-E, !dotA. Qed. Lemma ext_weq_2 `{laws} {n m p} (x: X n m) (y: X m p) v: x⋅y ≡ v -> forall o (u: X o n), u⋅x⋅y ≡ u⋅v. Proof. intros E ? ?. now rewrite <-E, !dotA. Qed. Lemma ext_weq_3 `{laws} {n m p q} (x: X n m) (y: X m p) (z: X p q) v: x⋅y⋅z ≡ v -> forall o (u: X o n), u⋅x⋅y⋅z ≡ u⋅v. Proof. intros E ? ?. now rewrite <-E, !dotA. Qed. Lemma ext_weq_4 `{laws} {n m p q r} (x: X n m) (y: X m p) (z: X p q) (t: X q r) v: x⋅y⋅z⋅t ≡ v -> forall o (u: X o n), u⋅x⋅y⋅z⋅t ≡ u⋅v. Proof. intros E ? ?. now rewrite <-E, !dotA. Qed. Lemma ext_leq_2' `{laws} {n m p} (x: X n m) (y: X m p) v: v ≦ x⋅y -> forall o (u: X o n), u⋅v ≦ u⋅x⋅y. Proof. intros E ? ?. now rewrite E, !dotA. Qed. Lemma ext_leq_3' `{laws} {n m p q} (x: X n m) (y: X m p) (z: X p q) v: v ≦ x⋅y⋅z -> forall o (u: X o n), u⋅v ≦ u⋅x⋅y⋅z. Proof. intros E ? ?. now rewrite E, !dotA. Qed. Lemma ext_leq_4' `{laws} {n m p q r} (x: X n m) (y: X m p) (z: X p q) (t: X q r) v: v ≦ x⋅y⋅z⋅t -> forall o (u: X o n), u⋅v ≦ u⋅x⋅y⋅z⋅t. Proof. intros E ? ?. now rewrite E, !dotA. Qed. Lemma ext_weq_2' `{laws} {n m p} (x: X n m) (y: X m p) v: v ≡ x⋅y -> forall o (u: X o n), u⋅v ≡ u⋅x⋅y. Proof. intros E ? ?. now rewrite E, !dotA. Qed. Lemma ext_weq_3' `{laws} {n m p q} (x: X n m) (y: X m p) (z: X p q) v: v ≡ x⋅y⋅z -> forall o (u: X o n), u⋅v ≡ u⋅x⋅y⋅z. Proof. intros E ? ?. now rewrite E, !dotA. Qed. Lemma ext_weq_4' `{laws} {n m p q r} (x: X n m) (y: X m p) (z: X p q) (t: X q r) v: v ≡ x⋅y⋅z⋅t -> forall o (u: X o n), u⋅v ≡ u⋅x⋅y⋅z⋅t. Proof. intros E ? ?. now rewrite E, !dotA. Qed. 
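(* A small hedged illustration of the scheme described above (the goal is
   ours and kept as a comment, not part of the library): with a closed
   hypothesis [H: a⋅b ≡ c], the redex may be hidden by associativity, so
   one first normalises with [dotA] and then rewrites with [ext_weq_2 H].

   Goal forall `{laws} n (a b c d e: X n n), a⋅b ≡ c -> d⋅a⋅b⋅e ≡ d⋅c⋅e.
   Proof.
     intros ? ? ? n a b c d e H.
     rewrite ?dotA.                 (* normalise associativity *)
     rewrite (ext_weq_2 H).         (* rewrites [u⋅a⋅b] into [u⋅c] *)
     reflexivity.
   Qed. *)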
Declare ML Module "coq-relation-algebra.mrewrite". (** User-end rewriting tactics *) Tactic Notation "mrewrite" constr(H) := rewrite ?dotA; (rewrite H || ra_extend (fun H => rewrite H) ->H); [rewrite ?dotA|..]. Tactic Notation "mrewrite" "<-" constr(H) := rewrite ?dotA; (rewrite <-H || ra_extend (fun H => rewrite <-H) <-H); [rewrite ?dotA|..]. Tactic Notation "mrewrite" constr(H) "in" hyp(H') := rewrite ?dotA in H'; (rewrite H in H' || ra_extend (fun H => rewrite H in H') ->H); rewrite ?dotA in H'. Tactic Notation "mrewrite" "<-" constr(H) "in" hyp(H') := rewrite ?dotA in H'; (rewrite <-H in H' || ra_extend (fun H => rewrite <-H in H') <-H); rewrite ?dotA in H'. relation-algebra-v.1.7.9/theories/rewriting_aac.v000066400000000000000000000033671440504774100220520ustar00rootroot00000000000000(** * rewriting_aac: bridge with AAC_tactics *) Require Import monoid. From AAC_tactics Require Export AAC. Section lattice. Context `{lattice.laws}. Global Instance aac_cupA `{CUP ≪ l} : Associative weq cup := cupA. Global Instance aac_cupC `{CUP ≪ l} : Commutative weq cup := cupC. Global Instance aac_cupI `{CUP ≪ l} : Idempotent weq cup := cupI. Global Instance aac_cupU `{BOT+CUP ≪ l} : Unit weq cup bot := Build_Unit _ _ _ cupbx cupxb. Global Instance aac_capA `{CAP ≪ l} : Associative weq cap := capA. Global Instance aac_capC `{CAP ≪ l} : Commutative weq cap := capC. Global Instance aac_capI `{CAP ≪ l} : Idempotent weq cap := capI. Global Instance aac_capU `{TOP+CAP ≪ l} : Unit weq cap top := Build_Unit _ _ _ captx capxt. Global Instance aac_lift_leq_weq : AAC_lift leq weq. Proof. constructor; eauto with typeclass_instances. Qed. End lattice. Section monoid. Context `{monoid.laws} {n: ob X}. Global Instance aac_dotA: Associative weq (dot n n n) := (@dotA _ _ _ n n n n). Global Instance aac_dotU: Unit weq (dot n n n) (one n). Proof. constructor; intro. apply dot1x. apply dotx1. Qed. End monoid. (* (* tests *) Require Import kleene. Goal forall `{laws} `{BKA ≪ l} n (a b c: X n n), a+b ≡ c -> (forall x: X n n, x⋅x ≡ x) -> a⋅b+b+1⋅a+(b+0)^* ≡ a⋅b⋅c⋅b⋅c⋅a+0. Proof. intros. aac_normalise. aac_rewrite H1. aac_rewrite H2 in_right. Abort. Require Import rel. Goal forall (a b c: hrel nat nat), a+b ≡ c -> (forall x: hrel nat nat, x⋅x ≡ x) -> a⋅b+b+1⋅a+(b+0)^* ≡ a⋅b⋅c⋅b⋅c⋅a+0. Proof. intros. aac_rewrite H. aac_rewrite H0 in_right. aac_normalise. (* TOFIX: can we prevent the unfoldings? *) ra_fold hrel_monoid_ops nat. (* TOFIX: incomplete folding *) Abort. *) relation-algebra-v.1.7.9/theories/rmx.v000066400000000000000000000257021440504774100200370ustar00rootroot00000000000000(** * rmx: matrices of regular expressions *) Require Import monoid regex lset boolean sups matrix_ext normalisation. Set Implicit Arguments. Unset Printing Implicit Defensive. (** [rmx n m] denote the set of [(n,m)]-matrices of regular expressions *) Notation rmx n m := (mx regex' n m). (** they form a Kleene algebra (with bottom element) *) #[export] Instance rmx_lattice_laws n m: lattice.laws BKA (mx_lattice_ops regex_lattice_ops n m) := mx_lattice_laws n m. #[export] Instance rmx_laws: laws BKA (mx_ops regex_ops regex_tt) := mx_laws regex_tt. (** * Set of variables occurring in a matrix *) Definition mx_vars n m (M: rmx n m) := \sup_(i eps e)). Lemma epsilon_mx_pls n m (M N: rmx n m): epsilon_mx (M+N) ≡ epsilon_mx M + epsilon_mx N. Proof. intros i j. apply orb_pls. Qed. Lemma epsilon_sup I J (f: I -> regex'): eps (\sup_(i\in J) f i) ≡ \sup_(i\in J) eps (f i). Proof. induction J. reflexivity. simpl. fold_regex. now rewrite <-IHJ, orb_pls. 
Qed. Lemma epsilon_mx_dot n m p (M: rmx n m) (N: rmx m p): epsilon_mx (M⋅N) ≡ epsilon_mx M ⋅ epsilon_mx N. Proof. intros i j. simpl. unfold epsilon_mx, mx_map, mx_dot. rewrite epsilon_sup. now setoid_rewrite andb_dot. Qed. #[export] Instance epsilon_mx_weq n m: Proper (weq ==> weq) (@epsilon_mx n m). Proof. intros M N H i j. unfold epsilon_mx, mx_map. now rewrite (H i j). Qed. (** [epsilon_mx] commutes with Kleene star on matrices *) Lemma epsilon_mx_str: forall n (M: rmx n n), epsilon_mx (M^*) ≡ (epsilon_mx M)^*. Proof. apply (mx_str_ind' (fun n M sM => epsilon_mx sM ≡ (epsilon_mx M)^*)). - intros n ? ? H ? ? H'. now rewrite H, H'. - intros ? i. elim (ord_0_empty i). - intro M. rewrite mx_str_1. intros i j. unfold epsilon_mx, scal_mx, mx_scal, mx_map. now rewrite str_eps. - intros n m a b c d e be ec f fbe ecf He Hf. unfold epsilon_mx. rewrite 2mx_map_blk, mx_str_blk. fold (epsilon_mx). assert (H1: (epsilon_mx (a+be⋅c))^* ≡ (epsilon_mx a + epsilon_mx b ⋅ (epsilon_mx d)^* ⋅ epsilon_mx c) ^*) by (unfold be; rewrite epsilon_mx_pls, 2epsilon_mx_dot, He; reflexivity). apply blk_mx_weq. + rewrite Hf. exact H1. + unfold fbe, be. now rewrite 2epsilon_mx_dot, He, Hf, H1. + unfold ecf, ec. now rewrite 2epsilon_mx_dot, He, Hf, H1. + unfold ecf, be, ec. now rewrite epsilon_mx_pls, 4epsilon_mx_dot, He, Hf, H1. Qed. (** * Pointwise extension of derivatives to matrices *) Notation deriv_mx a M := (mx_map (deriv a) M). Lemma deriv_mx_pls a n m (M N: rmx n m): deriv_mx a (M+N) ≡ deriv_mx a M + deriv_mx a N. Proof. reflexivity. Qed. Lemma deriv_mx_dot a n m p (M: rmx n m) (N: rmx m p): deriv_mx a (M⋅N) ≡ deriv_mx a M ⋅ N + epsilon_mx M ⋅ deriv_mx a N. Proof. intros i j. setoid_rewrite deriv_sup. simpl deriv; fold_regex. apply supcup. Qed. #[export] Instance deriv_mx_weq a n m: Proper (weq ==> weq) (@mx_map _ _ (deriv a) n m). Proof. apply mx_map_weq. Qed. (** [deriv_mx] commutes with Kleene star on "strict" matrices, those whose epsilon matrix is empty *) Lemma deriv_mx_str_strict a: forall n (M: rmx n n), epsilon_mx M ≡ 0 -> deriv_mx a (M^*) ≡ deriv_mx a M ⋅ M^*. Proof. refine (mx_str_ind' (fun n M sM => epsilon_mx M ≡ 0 -> deriv_mx a sM ≡ deriv_mx a M ⋅ sM) _ _ _ _). intros n ? ? H ? ? H'. now rewrite H, H'. intros M _ i. elim (ord_0_empty i). intros M _ i j. setoid_rewrite ord0_unique. symmetry. apply cupxb. rename a into l. intros n m a b c d e be ec f fbe ecf He Hf HM. rewrite 2mx_map_blk, mx_dot_blk. unfold epsilon_mx in HM. rewrite mx_map_blk in HM. apply blk_mx_0 in HM as (Ha&Hb&Hc&Hd). assert (Hf': epsilon_mx (a+be⋅c) ≡ 0). rewrite epsilon_mx_pls, epsilon_mx_dot, Ha, Hc; ra. specialize (He Hd). specialize (Hf Hf'). unfold be in Hf. rewrite deriv_mx_pls, <-dotA, deriv_mx_dot, Hb, dot0x, cupxb in Hf. assert (Hecf: deriv_mx l ecf ≡ deriv_mx l c ⋅ f + deriv_mx l d ⋅ ecf). unfold ecf at 1, ec. rewrite <-dotA, deriv_mx_dot. rewrite He. unfold e at 2. rewrite epsilon_mx_str, Hd. rewrite deriv_mx_dot, Hc. unfold ecf, ec. ra. apply blk_mx_weq. - rewrite Hf. unfold ecf, ec. ra. - unfold fbe. rewrite deriv_mx_dot. unfold f at 2; rewrite epsilon_mx_str, Hf'. unfold be at 2. rewrite deriv_mx_dot. rewrite Hb. rewrite Hf. unfold ecf, be, ec. ra. - apply Hecf. - rewrite deriv_mx_pls, deriv_mx_dot, He, Hecf. unfold ecf at 2, ec. rewrite 2epsilon_mx_dot, Hc. unfold fbe. ra. Qed. (** * Pointwise predicates and operations on matrices *) (** generic definitions ans lemmas *) Definition mx_forall (f: _ -> Prop) n m (M: rmx n m) := forall i j, f (M i j). 
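(* Hedged remark (comment only): [mx_forall] is the pointwise lifting of a
   predicate to matrices, so entrywise facts transfer directly; e.g. the
   following trivial goal, stated with [mx_forall is_01], which the
   [is_01_mx] notation below abbreviates.

   Goal forall n m (M: rmx n m), mx_forall is_01 M -> forall i j, is_01 (M i j).
   Proof. intros n m M HM i j. exact (HM i j). Qed. *)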
Lemma mx_forall_row f n m1 m2 M11 M21: mx_forall f M11 -> mx_forall f M21 -> mx_forall f (@row_mx _ n m1 m2 M11 M21). Proof. repeat intro. unfold row_mx. now case ordinal.split. Qed. Lemma mx_forall_col f n1 n2 m M11 M21: mx_forall f M11 -> mx_forall f M21 -> mx_forall f (@col_mx _ n1 n2 m M11 M21). Proof. repeat intro. unfold col_mx. now case ordinal.split. Qed. Lemma mx_forall_blk f n1 n2 m1 m2 M11 M12 M21 M22: mx_forall f M11 -> mx_forall f M12 -> mx_forall f M21 -> mx_forall f M22 -> mx_forall f (@blk_mx _ n1 n2 m1 m2 M11 M12 M21 M22). Proof. intros. now apply mx_forall_col; apply mx_forall_row. Qed. #[export] Hint Resolve mx_forall_blk mx_forall_row mx_forall_col: mx_predicates. (** 01, simple, and pure matrices *) Notation is_01_mx := (mx_forall is_01). Notation is_simple_mx := (mx_forall is_simple). Notation is_pure_mx := (mx_forall is_pure). (** taking the pure part of a matrix *) Definition pure_part_mx := (mx_map pure_part). (** the above classes are preserved by supremums *) Lemma is_01_sup I J (f: I -> regex'): (forall i, List.In i J -> is_01 (f i)) -> is_01 (\sup_(i \in J) f i). Proof. apply P_sup; now constructor. Qed. Lemma is_simple_sup I J (f: I -> regex'): (forall i, List.In i J -> is_simple (f i)) -> is_simple (\sup_(i\in J) f i). Proof. apply P_sup; now constructor. Qed. Lemma is_pure_sup I J (f: I -> regex'): (forall i, List.In i J -> is_pure (f i)) -> is_pure (\sup_(i\in J) f i). Proof. apply P_sup; now constructor. Qed. (** ** 01 matrices *) Lemma is_01_mx_zer n m: is_01_mx (0: rmx n m). Proof. constructor. Qed. Lemma is_01_mx_one n: is_01_mx (1: rmx n n). Proof. repeat intro. simpl. unfold mx_one. case ordinal.eqb_ord; constructor. Qed. Lemma is_01_mx_cup n m (M N: rmx n m): is_01_mx M -> is_01_mx N -> is_01_mx (M+N). Proof. repeat intro. now constructor. Qed. Lemma is_01_mx_dot n m p (M: rmx n m) (N: rmx m p): is_01_mx M -> is_01_mx N -> is_01_mx (M⋅N). Proof. repeat intro. apply is_01_sup. now constructor. Qed. Lemma is_01_mx_scal e: is_01 e -> is_01_mx (scal_mx e). Proof. now repeat intro. Qed. Lemma is_01_scal_mx M: is_01_mx M -> is_01 (mx_scal M). Proof. intro. apply H. Qed. Lemma is_01_mx_sub00 n1 n2 m1 m2 M: is_01_mx M -> is_01_mx (@sub00_mx _ n1 n2 m1 m2 M). Proof. intros H i j. apply H. Qed. Lemma is_01_mx_sub01 n1 n2 m1 m2 M: is_01_mx M -> is_01_mx (@sub01_mx _ n1 n2 m1 m2 M). Proof. intros H i j. apply H. Qed. Lemma is_01_mx_sub10 n1 n2 m1 m2 M: is_01_mx M -> is_01_mx (@sub10_mx _ n1 n2 m1 m2 M). Proof. intros H i j. apply H. Qed. Lemma is_01_mx_sub11 n1 n2 m1 m2 M: is_01_mx M -> is_01_mx (@sub11_mx _ n1 n2 m1 m2 M). Proof. intros H i j. apply H. Qed. #[export] Hint Resolve is_01_mx_zer is_01_mx_one is_01_mx_cup is_01_mx_dot is_01_mx_scal is_01_scal_mx is_01_mx_sub00 is_01_mx_sub01 is_01_mx_sub10 is_01_mx_sub11: mx_predicates. Lemma is_01_mx_str n (M: rmx n n): is_01_mx M -> is_01_mx (M^*). Proof. revert M. induction n; intros M HM. assumption. simpl. unfold mx_str_build. change (S n) with (1+n)%nat. ra_fold (mx_ops regex_ops regex_tt). auto 13 using is_01_str with mx_predicates. Qed. Lemma is_01_mx_epsilon n m (M: rmx n m): is_01_mx (epsilon_mx M). Proof. repeat intro. apply is_01_ofbool. Qed. #[export] Hint Resolve is_01_mx_str is_01_mx_epsilon: mx_predicates. (** ** simple matrices *) (** any 01 matrix is simple *) Lemma is_01_simple_mx n m (M: rmx n m): is_01_mx M -> is_simple_mx M. Proof. repeat intro. now apply is_01_simple. Qed. #[export] Hint Resolve is_01_simple_mx: mx_predicates. Lemma is_simple_mx_var v: is_simple_mx (scal_mx (var v)). Proof. 
constructor. Qed. Lemma is_simple_mx_pls n m (M N: rmx n m): is_simple_mx M -> is_simple_mx N -> is_simple_mx (M+N). Proof. now constructor. Qed. Lemma is_simple_mx_dot n m p (M: rmx n m) (N: rmx m p): is_01_mx M -> is_simple_mx N -> is_simple_mx (M⋅N). Proof. repeat intro. apply is_simple_sup. now constructor. Qed. #[export] Hint Resolve is_simple_mx_var is_simple_mx_pls is_simple_mx_dot: mx_predicates. (** ** pure matrices *) Lemma is_pure_mx_zer n m: is_pure_mx (0: rmx n m). Proof. constructor. Qed. Lemma is_pure_mx_var v: is_pure_mx (scal_mx (var v)). Proof. constructor. Qed. Lemma is_pure_mx_pls n m (M N: rmx n m): is_pure_mx M -> is_pure_mx N -> is_pure_mx (M+N). Proof. now constructor. Qed. Lemma is_pure_mx_dot n m p (M: rmx n m) (N: rmx m p): is_01_mx M -> is_pure_mx N -> is_pure_mx (M⋅N). Proof. repeat intro. apply is_pure_sup. now constructor. Qed. Lemma is_pure_pure_part_mx n m (M: rmx n m): is_pure_mx (pure_part_mx M). Proof. repeat intro. apply is_pure_pure_part. Qed. #[export] Hint Resolve is_pure_mx_zer is_pure_mx_var is_pure_mx_pls is_pure_mx_dot is_pure_pure_part_mx: mx_predicates. (** ** deriving and expanding various classes of matrices *) Lemma deriv_01_mx a n m (M: rmx n m): is_01_mx M -> deriv_mx a M ≡ 0. Proof. intros HM i j. apply deriv_01, HM. Qed. Lemma expand_01_mx n m (M: rmx n m): is_01_mx M -> M ≡ epsilon_mx M. Proof. intros H i j. now apply expand_01. Qed. Lemma expand_simple_mx n m (M: rmx n m): is_simple_mx M -> M ≡ epsilon_mx M + pure_part_mx M. Proof. intros H i j. now apply expand_simple. Qed. Lemma epsilon_mx_pure n m (M: rmx n m): is_pure_mx M -> epsilon_mx M ≡ 0. Proof. intros HM i j. unfold epsilon_mx, mx_map. rewrite epsilon_pure. reflexivity. apply HM. Qed. Lemma epsilon_deriv_pure_mx a n m (M: rmx n m): is_pure_mx M -> epsilon_mx (deriv_mx a M) ≡ deriv_mx a M. Proof. intros H i j. apply epsilon_deriv_pure, H. Qed. (** * From 01 row matrices to sets of ordinals, and back *) Definition of_row n (u: rmx 1 n): ord (pow2 n) := set.of_fun (fun i => epsilon (u ord0 i)). Definition to_row n (x: ord (pow2 n)): rmx 1 n := fun _ i => ofbool (set.mem x i). Lemma mem_of_row n i (u: rmx 1 n): is_01_mx u -> forall j, ofbool (set.mem (of_row u) j) ≡ u i j. Proof. intros Hu j. unfold of_row. rewrite set.mem_of_fun. symmetry. setoid_rewrite ord0_unique. apply expand_01, Hu. Qed. Lemma is_01_mx_to_row n x: is_01_mx (@to_row n x). Proof. intros ? ?. apply is_01_ofbool. Qed. #[export] Hint Resolve is_01_mx_to_row: mx_predicates. relation-algebra-v.1.7.9/theories/srel.v000066400000000000000000000234751440504774100202030ustar00rootroot00000000000000(** * srel: heterogeneous binary relations between setoids *) (** counterpart to module [rel], where we provide a KAT model of relations on setoids relations have to be invariant under the setoid equalities, the identity relation is defined as the setoid equality rather than [Logic.eq], Kleene star is defined accordingly. *) Require Bool. Require Export boolean prop. Require Import kat rel. (** base type for setoids ; if there is standard one in the standard library, we might want to switch to it it could also be nice to have [lattice.weq] set up using such a structure, to share notation and lemmas *) Record EqType := { type_of:> Type@{U}; Eq: relation type_of; Equivalence_Eq: Equivalence Eq }. #[export] Existing Instance Equivalence_Eq. 
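(* Hedged example (comment only, the name [eq_EqType] is ours): any type
   in the universe [U] can be seen as an [EqType] by taking Leibniz
   equality as the setoid equality.

   Definition eq_EqType (A: Type@{U}): EqType :=
     {| type_of := A; Eq := @eq A; Equivalence_Eq := eq_equivalence |}.
*)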
(* note: would be natural to try to define a counterpart to [lattice.pw_ops] for equality preserving funtions from A to X (using the following record type as carrier) *) Record spw (A: EqType) (X: lattice.ops) := { spw_bod:> A->X; spw_Eq: Proper (Eq A ==> weq) spw_bod }. (* Definition spw_bod' A X := (spw_bod A X: spw A X -> lattice.pw_ops X A). Coercion spw_bod': spw >-> car. #[export] Existing Instance spw_Eq. *) (* but: - this requires laws on X (preservation of weq) to lift the operations, e.g., the lift of [cup] requires [cup_weq] in order to preserve [Eq A] - this would force us to use [Proper (Eq A ==> weq)] rather than [pwr A weq] as equality in order to obtain [Proper (Eq A ==> Eq A ==> weq)] for relations, which is possibly not so convenient *) (** * setoid-preserving relations as a lattice *) (** setoid-preserving relations *) Record srel (n m: EqType) := { hrel_of:> hrel n m; srel_Eq: Proper (Eq n ==> Eq m ==> iff) hrel_of }. Arguments hrel_of {_ _}. (* TOTHINK: only useful locally? *) Coercion car_of n m: srel n m -> hrel_lattice_ops n m := hrel_of. #[export] Existing Instance srel_Eq. (** setoid-preserving relations as a lattice; actually a sublattice of that of plain relations *) Section s. Variables n m: EqType. Definition srel_leq (R S: srel n m): Prop := R ≦ S. Definition srel_weq (R S: srel n m): Prop := R ≡ S. Program Definition srel_cup (R S: srel n m): srel n m := {| hrel_of := R ⊔ S |}. Next Obligation. now rewrite H, H0. Qed. Program Definition srel_cap (R S: srel n m): srel n m := {| hrel_of := R ⊓ S |}. Next Obligation. now rewrite H, H0. Qed. Program Definition srel_neg (R: srel n m): srel n m := {| hrel_of := !R |}. Next Obligation. now rewrite H, H0. Qed. Program Definition srel_bot: srel n m := {| hrel_of := bot |}. Next Obligation. tauto. Qed. Program Definition srel_top: srel n m := {| hrel_of := top |}. Next Obligation. tauto. Qed. Canonical Structure srel_lattice_ops: lattice.ops := {| car := srel n m; leq := srel_leq; weq := srel_weq; cup := srel_cup; cap := srel_cap; neg := srel_neg; bot := srel_bot; top := srel_top |}. Arguments srel_leq _ _ /. Arguments srel_weq _ _ /. Arguments srel_cup _ _ /. Arguments srel_cap _ _ /. Arguments srel_neg _ /. Arguments srel_bot /. Arguments srel_top /. (** lattices laws follow from the fact that we have a sublattice of plain relations *) #[export] Instance srel_lattice_laws: lattice.laws (BDL+STR+CNV+DIV) srel_lattice_ops. Proof. apply (laws_of_injective_morphism hrel_of); trivial. now split. Qed. End s. (** * setoid-preserving relations as a Kleene category *) (** not a subcategory of plain relations: we have to modify [1] and [x^*] *) Section RepOps. Implicit Types n m p : EqType. Program Definition srel_one n: srel n n := {| hrel_of := Eq n |}. Program Definition srel_dot n m p (x: srel n m) (y: srel m p): srel n p := {| hrel_of := x⋅y |}. Next Obligation. split; intros [z H1 H2]. rewrite H in H1. rewrite H0 in H2. now exists z. rewrite <-H in H1. rewrite <-H0 in H2. now exists z. Qed. Program Definition srel_cnv n m (x: srel n m): srel m n := {| hrel_of := x° |}. Next Obligation. unfold hrel_cnv. now rewrite H, H0. Qed. Program Definition srel_ldv n m p (x: srel n m) (y: srel n p): srel m p := {| hrel_of := x -o y |}. Next Obligation. unfold hrel_ldv. setoid_rewrite H. setoid_rewrite H0. reflexivity. Qed. Program Definition srel_rdv n m p (x: srel m n) (y: srel p n): srel p m := {| hrel_of := y o- x |}. Next Obligation. unfold hrel_rdv. setoid_rewrite H. setoid_rewrite H0. reflexivity. Qed. Section i. 
Variable n: EqType. Variable x: srel n n. (** finite iterations of a relation *) Fixpoint iter u: srel n n := match u with O => srel_one n | S u => srel_dot _ _ _ x (iter u) end. Program Definition srel_str: srel n n := {| hrel_of i j := exists u, iter u i j |}. Next Obligation. setoid_rewrite H. setoid_rewrite H0. reflexivity. Qed. Definition srel_itr: srel n n := srel_dot n n n x srel_str. End i. Arguments srel_dot [_ _ _] _ _/. Arguments srel_ldv [_ _ _] _ _/. Arguments srel_rdv [_ _ _] _ _/. Arguments srel_cnv [_ _] _ /. Arguments srel_str [_] _ /. Arguments srel_itr [_] _ /. Arguments srel_one {_} /. End RepOps. Canonical Structure srel_monoid_ops := monoid.mk_ops EqType srel_lattice_ops srel_dot srel_one srel_itr srel_str srel_cnv srel_ldv srel_rdv. (** we cannot use [monoid.laws_of_faithful_functor]: [1] and [_^*] are not preserved by [rel_of]; nevertheless, we can reuse [hrel_monoid_laws] for quite a few axioms; and we just reprove the remaining ones *) #[export] Instance srel_monoid_laws: monoid.laws (BDL+STR+CNV+DIV) srel_monoid_ops. Proof. constructor; (try now left); intros. apply srel_lattice_laws. apply (dotA (laws:=hrel_monoid_laws)). intros i j. split. intros [k ik kj]. simpl in ik; now rewrite ik. intro. exists i. simpl; reflexivity. assumption. apply (cnvdot_ (laws:=hrel_monoid_laws)). apply (cnv_invol (laws:=hrel_monoid_laws)). intros i j ij. now apply (cnv_leq (laws:=hrel_monoid_laws)). intros i j E. exists O. exact E. intros i k [j Hij [u Hjk]]. exists (S u). now exists j. assert (E: forall i, (iter n x i: srel n n) ⋅ z ≦ z). induction i. simpl. intros h k [l hl lk]. simpl in hl. now rewrite hl. rewrite <-H0 at 2. transitivity (x⋅((iter n x i: srel n n)⋅z)). cbn. firstorder congruence. now apply (dot_leq (H:=hrel_monoid_laws)). intros i j [? [? ?] ?]. eapply E. repeat eexists; eauto. reflexivity. apply (capdotx (laws:=hrel_monoid_laws)). apply (ldv_spec (laws:=hrel_monoid_laws)). apply (rdv_spec (laws:=hrel_monoid_laws)). Qed. #[export] Instance Equivalence_srel_one A: Equivalence (@one srel_monoid_ops A). Proof. cbn. typeclasses eauto. Qed. (** * setoid-preserving relations as a Kleene category with tests *) (** setoid-preserving tests as a Boolean lattice; we redo everything, for lack of nice generic operations on [spw] *) Section tests. Variable A: ob srel_monoid_ops. Record dset := { dset_bod:> A -> bool; dset_Eq: Proper (Eq A ==> eq) dset_bod; }. (* TOTHINK: only useful locally? *) Coercion car_of_dset f: lattice.pw_ops bool_lattice_ops A := dset_bod f. #[export] Existing Instance dset_Eq. (** all operations ar imported from [lattice.pw_ops] *) Definition dset_leq (R S: dset): Prop := R ≦ S. Definition dset_weq (R S: dset): Prop := R ≡ S. Program Definition dset_cup (R S: dset): dset := {| dset_bod := R ⊔ S |}. Next Obligation. now rewrite H. Qed. Program Definition dset_cap (R S: dset): dset := {| dset_bod := R ⊓ S |}. Next Obligation. now rewrite H. Qed. Program Definition dset_neg (R: dset): dset := {| dset_bod := !R |}. Next Obligation. now rewrite H. Qed. Program Definition dset_bot: dset := {| dset_bod := bot |}. Program Definition dset_top: dset := {| dset_bod := top |}. Canonical Structure dset_lattice_ops: lattice.ops := {| car := dset; leq := dset_leq; weq := dset_weq; cup := dset_cup; cap := dset_cap; neg := dset_neg; bot := dset_bot; top := dset_top |}. Arguments dset_leq _ _ /. Arguments dset_weq _ _ /. Arguments dset_cup _ _ /. Arguments dset_cap _ _ /. Arguments dset_neg _ /. Arguments dset_bot /. Arguments dset_top /. 
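(* Hedged sketch (comment only, the name [dset_const] is ours): a constant
   Boolean predicate is trivially invariant under [Eq A], so it yields a
   [dset]; [dset_bot] and [dset_top] above are its two instances, and the
   [Proper] obligation should be discharged by [Program] as it is for them.

   Program Definition dset_const (b: bool): dset := {| dset_bod := fun _ => b |}.
*)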
(** lattices laws follow from the fact that we have a sublattice of plain tests *) #[export] Instance dset_lattice_laws: lattice.laws (BL+STR+CNV+DIV) dset_lattice_ops. Proof. apply (laws_of_injective_morphism dset_bod); trivial. now split. Qed. Program Definition srel_inj (x: dset): srel A A := {|hrel_of i j := Eq A i j /\ x i|}. Next Obligation. now rewrite H,H0. Qed. End tests. (** packing everything as a Kleene category with tests *) Canonical Structure srel_kat_ops := kat.mk_ops srel_monoid_ops dset_lattice_ops srel_inj. (** like for monoid laws, we have to reprove everything since we don't have a sub-KAT of that of plain relations *) #[export] Instance srel_kat_laws: kat.laws srel_kat_ops. Proof. constructor. apply lower_laws. intro. simpl. apply lower_lattice_laws. assert (inj_leq: forall n, Proper (leq ==> leq) (@srel_inj n)). intros n e f H i j [E H']. split. assumption. revert H'. apply mm_bool_Prop, H. constructor; try discriminate. apply inj_leq. apply op_leq_weq_1. intros _ x y i j. split. intros [E H']. setoid_rewrite Bool.orb_true_iff in H'. destruct H'; [left|right]; split; assumption. intros [[E H']|[E H']]; split; trivial; setoid_rewrite Bool.orb_true_iff; now auto. intros _ i j. compute. intuition discriminate. intros ? i j. compute. tauto. intros ? p q i j. split. intros [E H']. setoid_rewrite Bool.andb_true_iff in H'. exists i; split; try tauto. reflexivity. intros [k [ik Hi] [kj Hk]]. subst. split; trivial. now transitivity k. setoid_rewrite Bool.andb_true_iff; split; trivial. now rewrite ik. Qed. Ltac fold_srel := ra_fold srel_monoid_ops. Tactic Notation "fold_srel" "in" hyp_list(H) := ra_fold srel_monoid_ops in H. Tactic Notation "fold_srel" "in" "*" := ra_fold srel_monoid_ops in *. relation-algebra-v.1.7.9/theories/sums.v000066400000000000000000000023221440504774100202110ustar00rootroot00000000000000(** * sums: finite sums, a la ssreflect *) (** this tiny module extends [sups] to the setting of monoids (it is a separate module just to avoid [sups] to depend on [monoid]) *) Require Import monoid. Require Export sups. (** in the setting of monoids, we prefer the "sum" notation *) Notation "\sum_ ( i \in l ) f" := (@sup (mor _ _) _ (fun i => f) l) (at level 41, f at level 41, i, l at level 50, format "'[' \sum_ ( i \in l ) '/ ' f ']'"): ra_terms. Notation "\sum_ ( i < n ) f" := (\sum_(i \in seq n) f) (at level 41, f at level 41, i, n at level 50, format "'[' \sum_ ( i < n ) '/ ' f ']'"): ra_terms. (** [dot] distributes over sums *) Lemma dotxsum `{laws} `{BSL ≪ l} I J n m p (f: I -> X m n) (x: X p m): x ⋅ (\sum_(i\in J) f i) ≡ \sum_(i\in J) (x ⋅ f i). Proof. apply f_sup_weq. apply dotx0. intros; apply dotxpls. Qed. Lemma dotsumx `{laws} `{BSL ≪ l} I J n m p (f: I -> X n m) (x: X m p): (\sum_(i\in J) f i) ⋅ x ≡ \sum_(i\in J) (f i ⋅ x). Proof. dual @dotxsum. Qed. (** converse commutes with sums *) Lemma cnvsum `{laws} `{BSL+CNV ≪ l} I J n m (f: I -> X n m): (\sum_(i\in J) f i)° ≡ \sum_(i \in J) (f i)°. Proof. apply f_sup_weq. apply cnv0. apply cnvpls. Qed. relation-algebra-v.1.7.9/theories/sups.v000066400000000000000000000173751440504774100202320ustar00rootroot00000000000000(** * sups: finite joins (or supremums), a la ssreflect *) (** We define a few operations for manipulating finite supremums or intersections. We basically follow the scheme proposed for "bigops" in ssreflect, but we simplify it as much as possible since we do not need the whole machinery. 
The two main simplifications are: - the fact that we restrict ourselves to the associative, commutative, and idempotent operation [cup] of lattices (intersections being obtained by working in the dual lattices) - the fact that we do not include a "selection" operator *) Require Import lset lattice. Require Export ordinal. Section s. Context `{L:laws} `{Hl:BSL ≪ l}. Universe S. Section i. Context {I: Type@{S}}. (** * Supremums *) (** the unique operator which we define is the following one, which intuitively corresponds to [fold_right cup (map f J) bot], we redefine it to get a better behaviour with [simpl] *) (** sup f [j1;...;jn] = f j1 ⊔ ... ⊔ f jn *) Fixpoint sup (f: I -> X) J := match J with | nil => bot | cons i J => f i ⊔ sup f J end. (** sup specification *) Lemma sup_spec f J x: sup f J ≦ x <-> forall i, In i J -> f i ≦ x. Proof. induction J; simpl. split. tauto. intro. lattice. rewrite cup_spec, IHJ. clear IHJ. intuition. now subst. Qed. (** ** basic facts about [sup] *) Lemma sup_app f h k: sup f (h++k) ≡ sup f h ⊔ sup f k. Proof. induction h; simpl. lattice. rewrite IHh. hlattice. Qed. Lemma sup_singleton f i: sup f (i::nil) ≡ f i. Proof. simpl. lattice. Qed. Lemma leq_supx f J x: (forall i, In i J -> f i ≦ x) -> sup f J ≦ x. Proof. apply sup_spec. Qed. Lemma leq_xsup f J i: In i J -> f i ≦ sup f J. Proof. now apply sup_spec. Qed. Lemma leq_xsup' f J i x: In i J -> x ≦ f i -> x ≦ sup f J. Proof. intros ? E. rewrite E. now apply leq_xsup. Qed. (** [sup] is monotone, w.r.t, both the function [f] and the set [J] *) Global Instance sup_leq: Proper (pwr leq ==> leq ==> leq) sup. Proof. intros f f' Hf J J' HJ. induction J. apply leq_bx. simpl. apply leq_cupx. rewrite Hf. apply leq_xsup. apply HJ. now left. apply IHJ. intros j ?. apply HJ. now right. Qed. Global Instance sup_weq: Proper (pwr weq ==> weq ==> weq) sup. Proof. simpl. setoid_rewrite weq_spec. split; apply sup_leq; firstorder. Qed. Lemma supcup f g J: sup (fun i => f i ⊔ g i) J ≡ sup f J ⊔ sup g J. Proof. induction J; simpl. lattice. rewrite IHJ. lattice. Qed. (** refined monotonicity result: the functions have to be pointwise comparable only on the elements of [J] *) Lemma sup_leq' J J' (f f': I -> X): J ≦J' -> (forall i, In i J -> f i ≦ f' i) -> sup f J ≦ sup f' J'. Proof. induction J; intros HJ Hf. apply leq_bx. simpl. apply leq_cupx. rewrite Hf. apply leq_xsup. apply HJ. now left. now left. apply IHJ. rewrite <- HJ. clear; firstorder. clear -Hf; firstorder. Qed. Lemma sup_weq' J J' (f f': I -> X): J ≡J' -> (forall i, In i J -> f i ≡ f' i) -> sup f J ≡ sup f' J'. Proof. setoid_rewrite weq_spec. split; apply sup_leq'; firstorder. Qed. (** the sup of empty elements is still empty *) Lemma sup_b J (f: I -> X) (Hf: forall i, In i J -> f i ≡ bot): sup f J ≡ bot. Proof. apply antisym. 2: apply leq_bx. apply leq_supx. intros. now rewrite Hf. Qed. End i. (** ** swapping and reindexing indices *) Theorem sup_swap I J (f: I -> J -> X) I' J': sup (fun i => sup (fun j => f i j) J') I' ≡ sup (fun j => sup (fun i => f i j) I') J'. Proof. induction I'; simpl. apply antisym. apply leq_bx. apply leq_supx; trivial. now rewrite IHI', supcup. Qed. Lemma sup_map I J (f: J -> X) (m: I -> J) I': sup f (map m I') = sup (fun i => f (m i)) I'. Proof. induction I'; simpl; congruence. Qed. End s. (** ** notations *) (** we use "\sup_(i\in l) f" in the general case *) Notation "\sup_ ( i \in l ) f" := (sup (fun i => f) l) (at level 41, f at level 41, i, l at level 50, format "'[' \sup_ ( i \in l ) '/ ' f ']'"): ra_terms. 
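(* Hedged unfolding example (ours, kept as a comment): on an explicit list
   the supremum is just an iterated join ending in [bot], e.g.
   [\sup_(i \in (a::b::nil)) f i] computes to [f a ⊔ (f b ⊔ bot)], and
   [sup_singleton] removes the trailing [bot]:

   Goal forall `{laws} `{BSL ≪ l} (I: Type) (f: I -> X) (a: I),
     \sup_(i \in (a::nil)) f i ≡ f a.
   Proof. intros. apply sup_singleton. Qed. *)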
(** and "\sup_(i Y): (f bot ≡ bot) -> (forall x y, f (x ⊔ y) ≡ f x ⊔ f y) -> forall I J (g: I -> X), f (sup g J) ≡ \sup_(i\in J) f (g i). Proof. intros Hbot Hcup I J g. induction J. apply Hbot. simpl. rewrite Hcup. now apply cup_weq. Qed. Lemma f_sup_eq {X Y: ops} (f: X -> Y): (f bot = bot) -> (forall x y, f (x ⊔ y) = f x ⊔ f y) -> forall I J (g: I -> X), f (sup g J) = \sup_(i\in J) f (g i). Proof. intros Hbot Hcup I J g. induction J. apply Hbot. simpl. rewrite Hcup. congruence. Qed. (** same thing, to prove that a predicate is preserved under supremums *) Lemma P_sup {X: ops} {P: X -> Prop} I J (f: I -> X): P bot -> (forall x y, P x -> P y -> P (x ⊔ y)) -> (forall i, In i J -> P (f i)) -> P (sup f J). Proof. intros Hbot Hcup. induction J; intro H; simpl. apply Hbot. apply Hcup. apply H; now left. apply IHJ. intros. apply H. now right. Qed. (** cutting a supremum over ordinals of size [n+m] *) Lemma sup_cut `{L:laws} `{BSL ≪ l} n m f: \sup_(i X) A (J: A -> list I) h: sup f (sup J h) ≡ sup (fun a => sup f (J a)) h. Proof. induction h. reflexivity. simpl. now rewrite sup_app, IHh. Qed. (** belonging to a finite union *) Lemma in_sup A I J (f: I -> list A) a: In a (sup f J) <-> exists i, In i J /\ In a (f i). Proof. induction J; simpl. firstorder. rewrite in_app_iff, IHJ. clear. firstorder congruence. Qed. (** link between [map] and [sup] *) Lemma map_sup A I J (f: I -> A): map f J = \sup_(i\in J) [f i]. Proof. induction J; simpl; congruence. Qed. (** distribution of meets over supremums *) Lemma capxsup `{laws} `{BSL+CAP ≪ l} I J (f: I -> X) (x: X): x ⊓ (\sup_(i\in J) f i) ≡ \sup_(i\in J) (x ⊓ f i). Proof. apply f_sup_weq. apply capxb. intros; apply capcup. Qed. Lemma capsupx `{laws} `{BSL+CAP ≪ l} I J (f: I -> X) (x: X): (\sup_(i\in J) f i) ⊓ x ≡ \sup_(i\in J) (f i ⊓ x). Proof. rewrite capC, capxsup. now setoid_rewrite capC at 1. Qed. (** * Infimum (or intersections) *) (** obtained for free, by duality *) Notation inf f l := (@sup (dual _) _ f l). Notation "\inf_ ( i \in l ) f" := (inf (fun i => f) l) (at level 41, f at level 41, i, l at level 50, format "'[' \inf_ ( i \in l ) '/ ' f ']'"): ra_terms. Notation "\inf_ ( i < n ) f" := (\inf_(i \in seq n) f) (at level 41, f at level 41, i, n at level 50, format "'[' \inf_ ( i < n ) '/ ' f ']'"): ra_terms. Section inf. Context `{laws} `{CAP+TOP ≪ l} {I: Type}. Global Instance inf_leq: Proper (pwr (@leq X) ==> leq --> @leq X) (@sup (dual X) I). Proof. intros ? ? ? ? ?. now dual @sup_leq. Qed. Lemma inf_spec (f: I -> X) J (x: X): x ≦ \inf_(i\in J) f i <-> forall i, In i J -> x ≦ f i. Proof. dual @sup_spec. Qed. Lemma inf_singleton (f: I -> X) i: inf f (i::nil) ≡ f i. Proof. dual @sup_singleton. Qed. Lemma leq_xinf (f: I -> X) J x: (forall i, In i J -> x ≦ f i) -> x ≦ inf f J. Proof. dual @leq_supx. Qed. Lemma leq_infx (f: I -> X) J i: In i J -> @leq X (inf f J) (f i). Proof. dual @leq_xsup. Qed. Lemma leq_infx' (f: I -> X) J i x: In i J -> f i ≦ x -> @leq X (inf f J) x. Proof. dual @leq_xsup'. Qed. End inf. relation-algebra-v.1.7.9/theories/syntax.v000066400000000000000000000343711440504774100205610ustar00rootroot00000000000000(** * syntax: syntactic model for types structures (monoid operations) *) Require Export positives comparisons. Require Import Eqdep. Require Import monoid. Set Implicit Arguments. Set Asymmetric Patterns. (** * Free syntactic model *) Section s. Notation I := positive. Variable A : Set. 
(* [A=positive] for normalisation tactics [A=positive+lsyntax.expr (ord n)] for KAT proofs [A=positive+lsyntax.expr positive] for KAT computations (not yet) *) Variables s t: A -> I. (** residuated Kleene allegory expressions over a set [A] of variables, the variables being typed according to the functions [s] (source) and [t] (target). The indexing types ([I]) are fixed to be [positive] numbers: - this is convenient and efficient in practice, for reification - we need a decidable type to get the untyping theorems without axioms Note that we include constructors for flat operations (cup,cap,bot,top,neg): it is not convenient to reuse [lsyntax.expr] here since we would need to restrict the alphabet under those nodes to something like [A_(n,m) = { a: A / s a = n /\ t a = m }]. *) (* TOTHINK: add a phantom/strong dependency on [l] on the expr type ? *) Inductive expr: I -> I -> Type := | e_zer: forall n m, expr n m | e_top: forall n m, expr n m | e_one: forall n, expr n n | e_pls: forall n m, expr n m -> expr n m -> expr n m | e_cap: forall n m, expr n m -> expr n m -> expr n m | e_neg: forall n m, expr n m -> expr n m (* | e_not: forall n, expr n n -> expr n n *) | e_dot: forall n m p, expr n m -> expr m p -> expr n p | e_itr: forall n, expr n n -> expr n n | e_str: forall n, expr n n -> expr n n | e_cnv: forall n m, expr n m -> expr m n | e_ldv: forall n m p, expr n m -> expr n p -> expr m p | e_rdv: forall n m p, expr m n -> expr p n -> expr p m | e_var: forall a, expr (s a) (t a). (** level of an expression: the set of operations that appear in that expression *) Fixpoint e_level n m (x: expr n m): level := match x with | e_zer _ _ => BOT | e_top _ _ => TOP | e_one _ => MIN | e_pls _ _ x y => CUP + e_level x + e_level y | e_cap _ _ x y => CAP + e_level x + e_level y | e_neg _ _ x => BL + e_level x (* negation is ill-defined without the other Boolean operations, whence the [BL] rather than [NEG] *) | e_dot _ _ _ x y => e_level x + e_level y | e_itr _ x => STR + e_level x | e_str _ x => STR + e_level x | e_cnv _ _ x => CNV + e_level x | e_ldv _ _ _ x y | e_rdv _ _ _ x y => DIV + e_level x + e_level y | e_var a => MIN end%level. Section e. Context {X: ops} {f': I -> ob X}. Variable f: forall a, X (f' (s a)) (f' (t a)). (** interpretation of an expression into an arbitray structure, given an assignation [f] of the variables (and a interpretation function [f'] for the types) *) Fixpoint eval n m (x: expr n m): X (f' n) (f' m) := match x with | e_zer _ _ => 0 | e_top _ _ => top | e_one _ => 1 | e_pls _ _ x y => eval x + eval y | e_cap _ _ x y => eval x ∩ eval y | e_neg _ _ x => ! eval x (* | e_not _ x => eval x ^~ *) | e_dot _ _ _ x y => eval x ⋅ eval y | e_itr _ x => (eval x)^+ | e_str _ x => (eval x)^* | e_cnv _ _ x => (eval x)° | e_ldv _ _ _ x y => eval x -o eval y | e_rdv _ _ _ x y => eval y o- eval x | e_var a => f a end. End e. Section l. Variable l: level. (** * (In)equality of syntactic expressions. Like in [lsyntax], we use an impredicative encoding to define (in)equality in the free syntactic model, and we parametrise all definition with the level [l] at which we want to interpret the given expressions. *) Definition e_leq n m (x y: expr n m) := forall X (L:laws l X) f' (f: forall a, X (f' (s a)) (f' (t a))), eval f x ≦ eval f y. Definition e_weq n m (x y: expr n m) := forall X (L:laws l X) f' (f: forall a, X (f' (s a)) (f' (t a))), eval f x ≡ eval f y. 
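(* Hedged reading of the two impredicative definitions above (ours, kept
   as a comment): a syntactic (in)equation holds at level [l] precisely
   when it holds in every model whose laws contain [l]. For instance,
   idempotence of [e_pls] only requires [CUP]:

   Goal forall `{CUP ≪ l} n m (x: expr n m), e_weq (e_pls x x) x.
   Proof. intros Hl n m x X L f' f. simpl. apply cupI. Qed. *)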
(** by packing syntactic expressions and the above predicates into a canonical structure for flat operations, and another one for the other operations, we get all notations for free *) Canonical Structure expr_lattice_ops n m := {| car := expr n m; leq := @e_leq n m; weq := @e_weq n m; cup := @e_pls n m; cap := @e_cap n m; neg := @e_neg n m; bot := @e_zer n m; top := @e_top n m |}. Canonical Structure expr_ops := {| ob := I; mor := expr_lattice_ops; dot := e_dot; one := e_one; itr := e_itr; str := e_str; cnv := e_cnv; ldv := e_ldv; rdv := e_rdv |}. (** we easily show that we get a model so that we immediately benefit from all lemmas about the various structures *) Global Instance expr_lattice_laws n m: lattice.laws l (expr_lattice_ops n m). Proof. constructor; try right. constructor. intros x X L u f. reflexivity. intros x y z H H' X L u f. transitivity (eval f y); auto. intros x y. split. intro H. split; intros X L u f. now apply weq_leq, H. now apply weq_geq, H. intros [H H'] X L u f. apply antisym; auto. intros Hl x y z. split. intro H. split; intros X L u f; specialize (H X L u f); simpl in H; hlattice. intros [H H'] X L u f. simpl. apply cup_spec; auto. intros Hl x y z. split. intro H. split; intros X L u f; specialize (H X L u f); simpl in H; hlattice. intros [H H'] X L u f. simpl. apply cap_spec; auto. intros x X L u f. apply leq_bx. intros x X L u f. apply leq_xt. intros Hl x y z X L u f. apply cupcap_. intros Hl x X L u f. apply capneg. intros Hl x X L u f. apply cupneg. Qed. Global Instance expr_laws: laws l expr_ops. Proof. constructor; repeat right; repeat intro; simpl. apply expr_lattice_laws. apply dotA. apply dot1x. apply dotx1. apply dot_leq; auto. now rewrite dotplsx. now rewrite dotxpls. now rewrite dot0x. now rewrite dotx0. apply cnvdot_. apply cnv_invol. apply cnv_leq. now refine (H0 _ _ _ _). apply cnv_ext. apply str_refl. apply str_cons. apply str_ind_l. now refine (H0 _ _ _ _). apply str_ind_r. now refine (H0 _ _ _ _). apply itr_str_l. apply capdotx. split; intros E X L f' f; intros; simpl; apply ldv_spec, (E X L f' f). split; intros E X L f' f; intros; simpl; apply rdv_spec, (E X L f' f). Qed. End l. (** * Testing for particular constants *) Definition is_zer n m (x: expr n m) := match x with e_zer _ _ => true | _ => false end. Definition is_top n m (x: expr n m) := match x with e_top _ _ => true | _ => false end. Inductive is_case n m (k x: expr n m): expr n m -> bool -> Prop := | is_case_true: x=k -> is_case k x k true | is_case_false: x<>k -> is_case k x x false. Lemma is_zer_spec n m (x: expr n m): is_case (e_zer n m) x x (is_zer x). Proof. destruct x; constructor; discriminate || reflexivity. Qed. Lemma is_top_spec n m (x: expr n m): is_case (e_top n m) x x (is_top x). Proof. destruct x; constructor; discriminate || reflexivity. Qed. (** casting the type of an expression *) Definition cast n m n' m' (Hn: n=n') (Hm: m=m') (x: expr n m): expr n' m' := eq_rect n (fun n => expr n m') (eq_rect m (expr n) x _ Hm) _ Hn. End s. Arguments e_var [A s t] a. Arguments e_one [A s t] n. Arguments e_zer [A s t] n m. Arguments e_top [A s t] n m. Declare Scope ast_scope. Bind Scope ast_scope with expr. Delimit Scope ast_scope with ast. (** additional notations, to specify explicitly at which level expressions are considered, or to work directly with the bare constructors (by opposition with the encapsulated ones, through monoid.ops) *) Notation expr_ l s t n m := (expr_ops s t l n m). Notation "x <==_[ l ] y" := (@leq (expr_ops _ _ l _ _) x y) (at level 79): ra_scope. 
Notation "x ==_[ l ] y" := (@weq (expr_ops _ _ l _ _) x y) (at level 79): ra_scope. Infix "+" := e_pls: ast_scope. Infix "∩" := e_cap: ast_scope. Infix "⋅" := e_dot: ast_scope. Notation "1" := (e_one _): ast_scope. Notation "0" := (e_zer _ _): ast_scope. Notation top := (e_top _ _). Notation "x ^+" := (e_itr x): ast_scope. Notation "x °" := (e_cnv x): ast_scope. Notation "x ^*" := (e_str x): ast_scope. Notation "! x" := (e_neg x): ast_scope. Notation "x -o y" := (e_ldv x y) (right associativity, at level 60): ast_scope. Notation "y o- x" := (e_rdv x y) (left associativity, at level 61): ast_scope. (** * weakening (in)equations *) (** any equation holding at some level holds at all higher levels *) Lemma e_leq_weaken {h k} {Hl: h ≪ k} A s t n m (x y: @expr A s t n m): x <==_[h] y -> x <==_[k] y. Proof. intros H X L f' f. eapply @H, lower_laws. Qed. Lemma e_weq_weaken {h k} {Hl: h ≪ k} A s t n m (x y: @expr A s t n m): x ==_[h] y -> x ==_[k] y. Proof. intros H X L f' f. eapply @H, lower_laws. Qed. (** * comparing expressions syntactically *) Section expr_cmp. Notation I := positive. Context {A : cmpType}. Variables s t: A -> I. Notation expr := (expr s t). (** we need to generalise the comparison function to expressions of distinct types because of Coq's dependent types *) Fixpoint expr_compare n m (x: expr n m) p q (y: expr p q) := match x,y with | e_zer _ _, e_zer _ _ | e_top _ _, e_top _ _ | e_one _, e_one _ => Eq | e_var a, e_var b => cmp a b | e_pls _ _ x x', e_pls _ _ y y' | e_cap _ _ x x', e_cap _ _ y y' => lex (expr_compare x y) (expr_compare x' y') | e_dot _ u _ x x', e_dot _ v _ y y' | e_ldv u _ _ x x', e_ldv v _ _ y y' | e_rdv u _ _ x x', e_rdv v _ _ y y' => lex (cmp u v) (lex (expr_compare x y) (expr_compare x' y')) | e_neg _ _ x, e_neg _ _ y | e_itr _ x, e_itr _ y | e_str _ x, e_str _ y | e_cnv _ _ x, e_cnv _ _ y => expr_compare x y | e_zer _ _, _ => Lt | _, e_zer _ _ => Gt | e_one _, _ => Lt | _, e_one _ => Gt | e_top _ _, _ => Lt | _, e_top _ _ => Gt | e_var _, _ => Lt | _, e_var _ => Gt | e_itr _ _, _ => Lt | _, e_itr _ _ => Gt | e_str _ _, _ => Lt | _, e_str _ _ => Gt | e_cnv _ _ _, _ => Lt | _, e_cnv _ _ _ => Gt | e_ldv _ _ _ _ _, _ => Lt | _, e_ldv _ _ _ _ _ => Gt | e_rdv _ _ _ _ _, _ => Lt | _, e_rdv _ _ _ _ _ => Gt | e_pls _ _ _ _, _ => Lt | _, e_pls _ _ _ _ => Gt | e_cap _ _ _ _, _ => Lt | _, e_cap _ _ _ _ => Gt | e_neg _ _ _, _ => Lt | _, e_neg _ _ _ => Gt end. (** since [A] is decidable, [cast] provably acts as an identity *) Lemma cast_eq n m (x: expr n m) (H: n=n) (H': m=m): cast H H' x = x. Proof. unfold cast. now rewrite 2 cmp_eq_rect_eq. Qed. (** auxiliary lemma for [expr_compare_spec] below, using dependent equality *) Lemma expr_compare_eq_dep n m (x: expr n m): forall p q (y: expr p q), expr_compare x y = Eq -> n=p -> m=q -> eq_dep (I*I) (fun x => expr (fst x) (snd x)) (n,m) x (p,q) y. Proof. induction x; intros ? ? z C; destruct z; try discriminate C; try (intros <- <- || intros <- _); try reflexivity; simpl expr_compare in C. apply compare_lex_eq in C as [C1 C2]. apply IHx1, cmp_eq_dep_eq in C1; trivial. apply IHx2, cmp_eq_dep_eq in C2; trivial. subst. reflexivity. apply compare_lex_eq in C as [C1 C2]. apply IHx1, cmp_eq_dep_eq in C1; trivial. apply IHx2, cmp_eq_dep_eq in C2; trivial. subst. reflexivity. apply IHx, cmp_eq_dep_eq in C; trivial. subst. reflexivity. apply compare_lex_eq in C as [C C']. apply cmp_eq in C. subst. apply compare_lex_eq in C' as [C1 C2]. apply IHx1, cmp_eq_dep_eq in C1; trivial. apply IHx2, cmp_eq_dep_eq in C2; trivial. subst. 
reflexivity. apply IHx, cmp_eq_dep_eq in C; trivial. subst. reflexivity. apply IHx, cmp_eq_dep_eq in C; trivial. subst. reflexivity. apply IHx, cmp_eq_dep_eq in C; trivial. subst. reflexivity. apply compare_lex_eq in C as [C C']. apply cmp_eq in C. subst. apply compare_lex_eq in C' as [C1 C2]. apply IHx1, cmp_eq_dep_eq in C1; trivial. apply IHx2, cmp_eq_dep_eq in C2; trivial. subst. reflexivity. apply compare_lex_eq in C as [C C']. apply cmp_eq in C. subst. apply compare_lex_eq in C' as [C1 C2]. apply IHx1, cmp_eq_dep_eq in C1; trivial. apply IHx2, cmp_eq_dep_eq in C2; trivial. subst. reflexivity. apply cmp_eq in C. subst. reflexivity. Qed. Lemma expr_compare_eq n m (x y: expr n m): x=y -> expr_compare x y = Eq. Proof. intros <-. induction x; simpl expr_compare; trivial; rewrite ?compare_lex_eq, ?cmp_refl; tauto. Qed. (** final specification of the comparison function *) Lemma expr_compare_spec n m (x y: expr n m): compare_spec (x=y) (expr_compare x y). Proof. generalize (fun H => expr_compare_eq_dep x y H eq_refl eq_refl) as H. generalize (@expr_compare_eq _ _ x y) as H'. case (expr_compare x y); intros H' H; constructor. now apply cmp_eq_dep_eq in H. intro E. now discriminate H'. intro E. now discriminate H'. Qed. (** packaging as a [cmpType] *) Definition expr_compare_ n m (x y: expr n m) := expr_compare x y. Canonical Structure cmp_expr n m := mk_simple_cmp (@expr_compare_ n m) (@expr_compare_spec n m). End expr_cmp. (** * Packages for typed reification *) (** according to the interpretation function [eval], (typed) reification has to provide a set [A] for indexing variables, and four maps: - [f': I -> ob X], to interpret types as written in the expressions ([I=positive]) - [s] and [t]: A -> I, to specify the source and target types of each variable - [f: forall a: A, expr s t (s a) (t a)], to interpret each variable [A] is also fixed to be [positive], so that we can use simple and efficient positive maps, but the fact that [f] has a dependent type is not convenient. This is why we introduce the additionnal layer, to ease the definition of such maps: thanks to the definitions below, it suffices to provide the map [f'], and a map [f: A -> Pack X f']. *) Record Pack (X: ops) (f': positive -> ob X) := pack { src: positive; tgt: positive; val: X (f' src) (f' tgt) }. Definition packed_eval (X: ops) (f': positive -> ob X) (f: positive -> Pack X f') := eval (fun a => val (f a)) : forall n m, expr _ _ n m -> X (f' n) (f' m) . (** these projections are used to build the reified expressions *) Definition src_ (X: ops) (f': positive -> ob X) (f: positive -> Pack X f') a := src (f a). Definition tgt_ (X: ops) (f': positive -> ob X) (f: positive -> Pack X f') a := tgt (f a). (** loading ML reification module *) Declare ML Module "coq-relation-algebra.reification". relation-algebra-v.1.7.9/theories/traces.v000066400000000000000000000536711440504774100205200ustar00rootroot00000000000000(** * traces: the model of finite traces *) (** A trace is a word in where each letter is preceded and followed by a "state". Two traces compose iff the first one ends in the state where the second one starts. The traces model just consists in considering sets of traces. It encompasses the model of languages by taking the set of states to be a singleton set. When the set of states are the atoms of a Boolean algebra, we get so called "guarded strings", w.r.t. which KAT is complete. We define both the flat model of untyped traces, and a typed version of it, where we restrict to "well-typed" traces. 
Indeed, we shall prove KAT completeness for typed models, and then reduce the
    problem to untyped traces for computations. *)

Require Export prop.
Require Import monoid positives comparisons.
Set Implicit Arguments.

Section l.
Variable State: Type.
Notation Sigma := positive.

(** * Untyped traces *)

(** a trace alternates between letters and states; it starts and ends with a state *)
Inductive trace :=
| tnil (a: State)
| tcons (a: State) (i: Sigma) (w: trace).

(** starting/ending state of a trace *)
Definition thead w := match w with tnil a | tcons a _ _ => a end.
Fixpoint ttail w := match w with tnil a => a | tcons _ _ w => ttail w end.

(** predicate for trace concatenation: [tapp u v w] holds iff traces [u] and [v]
    can be concatenated into [w] (a concrete example is given below, after the
    trace operations) *)
Inductive tapp: trace -> trace -> trace -> Prop :=
| tapp_nil_nil: forall a, tapp (tnil a) (tnil a) (tnil a)
| tapp_nil_cons: forall a i u, tapp (tnil a) (tcons a i u) (tcons a i u)
| tapp_cons_x: forall a i u v w, tapp u v w -> tapp (tcons a i u) v (tcons a i w).

(** appending a letter and a state to a trace (not used) *)
Fixpoint tsnoc u i b :=
  match u with
  | tnil a => tcons a i (tnil b)
  | tcons a j u => tcons a j (tsnoc u i b)
  end.

(** reversing a trace (not used) *)
Fixpoint rev u :=
  match u with
  | tnil a => tnil a
  | tcons a j u => tsnoc (rev u) j a
  end.

(** we consider sets of traces *)
Definition traces := trace -> Prop.

(** ** Untyped traces as a lattice *)

(** lattice operations and laws are obtained for free, by pointwise lifting of the [Prop] lattice *)
Canonical Structure traces_lattice_ops := lattice.mk_ops traces leq weq cup cap neg bot top.
Global Instance traces_lattice_laws: lattice.laws (BDL+STR+DIV) traces_lattice_ops :=
  lower_lattice_laws (H:=pw_laws _).

(** * Untyped traces as a residuated Kleene lattice *)

(** singleton type for the objects of this flat structure *)
CoInductive traces_unit := traces_tt.
Notation tt := traces_tt.

(** ** traces operations *)

(** concatenation *)
Definition traces_dot (n m p: traces_unit) (x y: traces): traces :=
  fun w => exists2 u, x u & exists2 v, y v & tapp u v w.

(** left and right residuals *)
Definition traces_ldv (n m p: traces_unit) (x y: traces): traces :=
  fun w => forall u uw, x u -> tapp u w uw -> y uw.
Definition traces_rdv (n m p: traces_unit) (x y: traces): traces :=
  fun w => forall u wu, x u -> tapp w u wu -> y wu.

(** unit: the set of "empty" traces, those consisting of just one state *)
Definition traces_one (n: traces_unit): traces :=
  fun w => match w with tnil _ => True | _ => False end.

(** reversed traces *)
Definition traces_cnv (n m: traces_unit) (x: traces): traces := fun w => x (rev w).

(** finite iterations (with a slight generalisation: [y⋅x^n]) *)
Fixpoint traces_iter (y x: traces) i :=
  match i with O => y | S i => traces_dot tt tt tt x (traces_iter y x i) end.

(** strict iteration: union of finite iterations, starting with [x] *)
Definition traces_itr (n: traces_unit) (x: traces): traces :=
  fun w => exists i, traces_iter x x i w.

(** Kleene star: union of finite iterations, starting with [1] *)
Definition traces_str (n: traces_unit) (x: traces): traces :=
  fun w => exists i, traces_iter (traces_one n) x i w.

(** packing all operations in a canonical structure *)
Canonical Structure traces_monoid_ops :=
  monoid.mk_ops _ (fun _ _ => traces_lattice_ops)
    traces_dot traces_one traces_itr traces_str traces_cnv traces_ldv traces_rdv.

(** shorthand for [traces], when a morphism is expected *)
Notation traces' := (traces_monoid_ops tt tt).
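(** Worked example (informal): with states [a b c] and letters [i j], the traces
    [tcons a i (tnil b)] ("a -i-> b") and [tcons b j (tnil c)] ("b -j-> c") are
    composable, since the first ends in the state [b] in which the second starts:
    by [tapp_cons_x] and [tapp_nil_cons], they concatenate to
    [tcons a i (tcons b j (tnil c))]. Accordingly, [traces_dot] only glues such
    compatible pairs of traces, and [traces_one] acts as a unit for it. *)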
(** ** traces form a residuated Kleene lattice *) (** associativity of single traces concatenations *) Lemma tapp_ass u v w uv uvw: tapp u v uv -> tapp uv w uvw -> exists vw, tapp v w vw /\ tapp u vw uvw. Proof. intro H. revert uvw. induction H. inversion_clear 1; eexists; split; constructor. inversion_clear 1. eexists; split; constructor; assumption. inversion_clear 1. edestruct IHtapp as (?&?&?). eassumption. eexists; split. eassumption. constructor. assumption. Qed. Lemma tapp_ass' u v w vw uvw: tapp v w vw -> tapp u vw uvw -> exists uv, tapp u v uv /\ tapp uv w uvw. Proof. intros H' H. induction H. inversion_clear H'; eexists; split; constructor. inversion_clear H'. eexists; split; constructor. eexists; split; constructor; eassumption. destruct (IHtapp H') as (?&?&?). eexists (tcons _ _ _). split; constructor; eassumption. Qed. (** administrative lemmas (to be reworked) *) (* TODO: rework these lemmas *) Lemma tapp_bounds u v w: tapp u v w -> ttail u = thead v /\ thead w = thead u /\ ttail w = ttail v. Proof. induction 1; simpl; intuition. Qed. Lemma tapp_tail_head u v w: tapp u v w -> ttail u = thead v. Proof. now induction 1. Qed. Lemma tapp_head u v w: tapp u v w -> thead u = thead w. Proof. now destruct 1. Qed. Lemma tapp_tail u v w: tapp u v w -> ttail v = ttail w. Proof. now induction 1. Qed. Lemma tapp_nil_x u: tapp (tnil (thead u)) u u. Proof. destruct u; constructor. Qed. Lemma tapp_x_nil u: tapp u (tnil (ttail u)) u. Proof. induction u; constructor. assumption. Qed. Lemma tapp_cat u v: ttail u = thead v -> exists w, tapp u v w. Proof. induction u; simpl; intros Hu. rewrite Hu. eexists. apply tapp_nil_x. destruct IHu as [w ?]; trivial. eexists. constructor. eassumption. Qed. Lemma tapp_nil_x_eq a u v: tapp (tnil a) u v -> a = thead u /\ v = u. Proof. now inversion_clear 1. Qed. Lemma tapp_x_nil_eq a u v: tapp u (tnil a) v -> a = ttail u /\ v = u. Proof. revert v; induction u. inversion_clear 1. now idtac. inversion_clear 1. edestruct IHu as (?&?). eassumption. subst. now idtac. Qed. (* Lemma rev_tsnoc u i a: rev (tsnoc u i a) = tcons a i (rev u). Proof. induction u; simpl. reflexivity. now rewrite IHu. Qed. Lemma rev_invol u: rev (rev u) = u. Proof. induction u; simpl. reflexivity. now rewrite rev_tsnoc, IHu. Qed. *) (** auxiliary lemmas, to establish that traces form a residuated Kleene lattice *) Lemma traces_dotA n m p q x y z: traces_dot n m q x (traces_dot m p q y z) ≡ traces_dot n p q (traces_dot n m p x y) z. Proof. intro w. simpl. split. intros [? ? [? [? ? [? ? E]] E']]. destruct (tapp_ass' E E') as (?&?&?). repeat eexists; eassumption. intros [? [? ? [? ? E]] [? ? E']]. destruct (tapp_ass E E') as (?&?&?). repeat eexists; eassumption. Qed. Lemma traces_dot_leq n m p: Proper (leq ==> leq ==> leq) (@traces_dot n m p). Proof. intros x y H x' y' H' w [u Hu [v Hv Hw]]. exists u. apply H, Hu. exists v. apply H', Hv. assumption. Qed. Lemma traces_dotx1 x: traces_dot tt tt tt x (traces_one tt) ≡ x. Proof. intro w. split. intros [u Hu [[|] Hv E]]. 2: elim Hv. now apply tapp_x_nil_eq in E as [? ->]. intro Hw. exists w. assumption. exists (tnil (ttail w)). reflexivity. apply tapp_x_nil. Qed. Lemma traces_dot1x x: traces_dot tt tt tt (traces_one tt) x ≡ x. Proof. intro w. split. intros [u Hu [v Hv E]]. destruct u as [?|]. 2: elim Hu. inversion E; subst; assumption. intro Hw. exists (tnil (thead w)). reflexivity. exists w. assumption. apply tapp_nil_x. Qed. Lemma traces_iter_S i x: traces_iter x x i ≡ traces_iter (traces_one tt) x (S i). Proof. induction i; simpl traces_iter. 
symmetry. apply traces_dotx1. now apply (op_leq_weq_2 (Hf:=@traces_dot_leq _ _ _)). Qed. (** traces form a residuated Kleene lattice (we do not have an allegory, since the converse operation does not satisfy the law [x ≦x⋅x°⋅x]) *) (* TODO: include a weak converse? *) Global Instance traces_monoid_laws: monoid.laws (BDL+STR+DIV) traces_monoid_ops. Proof. constructor; try (intro; discriminate); try now left. intros. apply traces_lattice_laws. exact traces_dotA. intros. apply traces_dot1x. right. intros. apply traces_dotx1. intros. intros w Hw. now exists O. intros. intros w [u Hu [v [i Hv] Hw]]. exists (S i). repeat eexists; eassumption. intros _ ? ? x z H w [u [i Hu] [v Hv Hw]]. revert u v w Hu Hv Hw; induction i; intros u v w Hu Hv Hw. destruct u. 2: elim Hu. now apply tapp_nil_x_eq in Hw as [_ <-]. destruct Hu as [u1 Hu1 [u2 Hu2 Hu]]. destruct (tapp_ass Hu Hw) as (?&?&?). apply H. repeat eexists. eassumption. eapply IHi; eauto. eassumption. intros _ ? x w. split. intros [i H]. apply traces_iter_S in H as [? ? [? ? ?]]. repeat eexists; eauto. intros [? ? [? [i H] ?]]. exists i. apply traces_iter_S. repeat eexists; eauto. intros _ n m p x y z. split. intros Hz w [u Hu [v Hv E]]. eapply Hz; eauto. intros Hy v Hv u uv Hu Huv. apply Hy. repeat eexists; eauto. intros _ n m p x y z. split. intros Hz w [u Hu [v Hv E]]. eapply Hz; eauto. intros Hy v Hv u uv Hu Huv. apply Hy. repeat eexists; eauto. Qed. (** empty trace property for concatenated languages of traces *) Lemma traces_dot_nil (L L': traces') a: (L⋅L')%ra (tnil a) <-> L (tnil a) /\ L' (tnil a). Proof. split. intros [h H [k K E]]. inversion E. subst. clear E. now split. intros [H H']. repeat eexists; eauto using tapp_nil_nil. Qed. (** ** untyped traces derivatives *) Definition traces_deriv a i (L: traces'): traces' := fun w => L (tcons a i w). Lemma traces_deriv_0 a i: traces_deriv a i 0 ≡ 0. Proof. firstorder. Qed. Lemma traces_deriv_1 a i: traces_deriv a i 1 ≡ 0. Proof. compute. intuition discriminate. Qed. Lemma traces_deriv_pls a i (H K: traces'): traces_deriv a i (H+K) ≡ traces_deriv a i H + traces_deriv a i K. Proof. intro. now apply cup_weq. Qed. Lemma traces_deriv_dot_1 a i (H K: traces'): H (tnil a) -> traces_deriv a i (H⋅K) ≡ traces_deriv a i H ⋅ K + traces_deriv a i K. Proof. intros Hnil w; simpl; unfold traces_deriv, traces_dot, pw2. split. intros [[b|b j u] Hu [v Kv E]]. right. now apply tapp_nil_x_eq in E as [-> ->]. inversion E. subst. clear E. left. repeat eexists; eauto. intros [[u Hu [v Kv E]]|Ka]; repeat eexists; eauto using tapp_cons_x, tapp_nil_cons. Qed. Lemma traces_deriv_dot_2 a i (H K: traces'): ~ (H (tnil a)) -> traces_deriv a i (H⋅K) ≡ traces_deriv a i H ⋅ K. Proof. intros Hnil w; simpl; unfold traces_deriv, traces_dot, pw2. split. intros [[b|b j u] Hu [v Kv E]]. apply tapp_nil_x_eq in E as [-> <-]. now elim Hnil. inversion E. subst. clear E. repeat eexists; eauto. intros [u Hu [v Kv E]]; repeat eexists; eauto using tapp_cons_x. Qed. Lemma traces_deriv_itr a i (H: traces'): traces_deriv a i (H^+) ≡ traces_deriv a i H ⋅ H^*. Proof. rewrite str_itr. apply antisym. - intro w. intros [n Hn]. induction n in a, i, w, Hn; simpl in Hn. repeat eexists; eauto. 2: apply tapp_x_nil. now left. destruct Hn as [[b|b j v] Hv [u Hu Hw]]. apply tapp_nil_x_eq in Hw as [-> <-]. apply IHn, Hu. inversion Hw. subst. clear Hw. repeat eexists; eauto. right. eexists; eauto. - intros w [u Hu [v [Hv|[n Hv]] Hw]]. destruct v as [b|]. 2: elim Hv. apply tapp_x_nil_eq in Hw as [-> <-]. now exists O. exists (S n). 
repeat eexists; eauto using tapp_cons_x. Qed. (** ** particular elements *) (** sub-identities *) Definition tinj (p: State -> Prop): traces := fun w => match w with tnil a => p a | _ => False end. (** all traces consisting of the single letter [i] *) Definition tsingle (i: Sigma): traces := fun w => match w with tcons _ j (tnil _) => i=j | _ => False end. Lemma traces_deriv_inj a i p: traces_deriv a i (tinj p) ≡ 0. Proof. now intro. Qed. Lemma traces_deriv_single a i j: traces_deriv a i (tsingle j) ≡ ofbool (eqb i j). Proof. intros [b|???]. unfold traces_deriv. simpl. case eqb_spec; simpl; intuition; discriminate. unfold traces_deriv. simpl. case eqb_spec; reflexivity. Qed. (** * Typed traces *) (** we assume two functions [src] (source) and [tgt] (target), to type letters *) Variables src tgt: Sigma -> positive. (** a trace is well typed if consecutive letters can be composed according to their types (states are completely ignored) *) Inductive typed: positive -> positive -> trace -> Prop := | ttnil: forall n a, typed n n (tnil a) | ttcons: forall a i m w, typed (tgt i) m w -> typed (src i) m (tcons a i w). (** a set of traces is well typed if all its elements are *) Definition typed' n m (L: trace -> Prop) := forall u, L u -> typed n m u. (** administrative lemmas for constructing well-typed sets of traces *) Lemma typed'_bot n m: typed' n m bot. Proof. intros ? []. Qed. Lemma typed'_cup n m (x y: traces): typed' n m x -> typed' n m y -> typed' n m (x ⊔ y). Proof. intros ? ? ? [|]; eauto. Qed. Lemma typed'_cap n m (x y: traces): typed' n m x -> typed' n m y -> typed' n m (x ⊓ y). Proof. intros ? ? ? []; eauto. Qed. Lemma tapp_typed (u v w: trace): tapp u v w -> forall n m p, typed n m u -> typed m p v -> typed n p w. Proof. induction 1; intros n m p Hu Hv. now inversion_clear Hu. now inversion_clear Hu. inversion_clear Hu. constructor. eapply IHtapp; eassumption. Qed. Lemma typed'_dot n m p (x y: traces'): typed' n m x -> typed' m p y -> typed' n p (x⋅y). Proof. intros Hx Hy w [u Hu [v Hv H]]. eapply tapp_typed; eauto. Qed. Lemma typed'_one n: typed' n n (one tt). Proof. intros [|] []. constructor. Qed. Lemma typed'_inj n p: typed' n n (tinj p). Proof. intros [|? ? ?]. constructor. intros []. Qed. Lemma typed'_iter n (x y: traces') i: typed' n n x -> typed' n n y -> typed' n n (traces_iter y x i). Proof. intros Hx Hy. induction i. assumption. eapply typed'_dot; eassumption. Qed. Lemma typed'_itr n (x: traces'): typed' n n x -> typed' n n (x^+). Proof. intros Hx w [i Hw]. eapply typed'_iter; eassumption. Qed. Lemma typed'_str n (x: traces'): typed' n n x -> typed' n n (x^*). Proof. intros Hx w [i Hw]. eapply typed'_iter in Hw; eauto using typed'_one. Qed. Lemma typed'_single i: typed' (src i) (tgt i) (tsingle i). Proof. red. intros [?|? ? [|]] []. do 2 constructor. Qed. (** ** packing typed traces as a residuated Kleene lattice *) (** we encapsulate traces into well-typed ones using a sig-type *) Definition ttraces n m := sig (typed' n m). (** restriction of an arbitrary set of traces to those that are well-typed *) Program Definition restrict n m (x: traces): ttraces n m := fun w => typed n m w /\ x w. Next Obligation. now intros w [? _]. Qed. Lemma is_typed n m (x: ttraces n m) w: proj1_sig x w -> typed n m w. Proof. destruct x as [x Hx]. apply Hx. Qed. Section l. Variables n m: positive. Implicit Types x y: ttraces n m. (** we directly embed all operations from untyped traces, except for top elements and residuals, which have to be restricted. 
(So that typed traces are not entirely a sub-algebra of traces.) *) (** *** lattice structure *) Definition ttraces_leq x y := proj1_sig x ≦ proj1_sig y. Definition ttraces_weq x y := proj1_sig x ≡ proj1_sig y. Program Definition ttraces_cup x y: ttraces n m := proj1_sig x ⊔ proj1_sig y. Next Obligation. apply typed'_cup; apply proj2_sig. Qed. Program Definition ttraces_cap x y: ttraces n m := proj1_sig x ⊓ proj1_sig y. Next Obligation. apply typed'_cap; apply proj2_sig. Qed. Program Definition ttraces_neg x: ttraces n m := restrict n m (! proj1_sig x). Program Definition ttraces_bot: ttraces n m := fun _ => False. Next Obligation. apply typed'_bot. Qed. Program Definition ttraces_top: ttraces n m := typed n m. Next Obligation. now intros ? ?. Qed. Canonical Structure ttraces_lattice_ops := lattice.mk_ops (ttraces n m) ttraces_leq ttraces_weq ttraces_cup ttraces_cap ttraces_neg ttraces_bot ttraces_top. Global Instance ttraces_lattice_laws: lattice.laws (BDL+STR+DIV) ttraces_lattice_ops. Proof. constructor; simpl; unfold ttraces_leq, ttraces_weq; intros. constructor. intro. apply pw_laws. intros x y z. apply pw_laws. apply weq_spec. apply cup_spec. apply cap_spec. right. intros ? ? []. right. intros x u. refine (is_typed _ _). apply cupcap_. discriminate. discriminate. Qed. End l. (** *** monoid structure *) Program Definition ttraces_dot n m p (x: ttraces n m) (y: ttraces m p): ttraces n p := traces_dot tt tt tt x y. Next Obligation. eapply typed'_dot; eapply proj2_sig. Qed. Program Definition ttraces_ldv n m p (x: ttraces n m) (y: ttraces n p): ttraces m p := restrict m p (traces_ldv tt tt tt x y). Program Definition ttraces_rdv n m p (x: ttraces m n) (y: ttraces p n): ttraces p m := restrict p m (traces_rdv tt tt tt x y). Program Definition ttraces_one n: ttraces n n := traces_one tt. Next Obligation. apply typed'_one. Qed. Program Definition ttraces_cnv n m (x: ttraces n m): ttraces m n := fun w => False. Next Obligation. intros ? []. Qed. Program Definition ttraces_iter n (y x: ttraces n n) i: ttraces n n := traces_iter y x i. Next Obligation. apply typed'_iter; apply proj2_sig. Qed. Program Definition ttraces_itr n (x: ttraces n n): ttraces n n := traces_itr tt x. Next Obligation. apply typed'_itr, proj2_sig. Qed. Program Definition ttraces_str n (x: ttraces n n): ttraces n n := traces_str tt x. Next Obligation. apply typed'_str, proj2_sig. Qed. Canonical Structure ttraces_monoid_ops := monoid.mk_ops _ ttraces_lattice_ops ttraces_dot ttraces_one ttraces_itr ttraces_str ttraces_cnv ttraces_ldv ttraces_rdv. Notation ttraces' n m := (ttraces_monoid_ops n m). Global Instance ttraces_monoid_laws: monoid.laws (BDL+STR+DIV) ttraces_monoid_ops. Proof. assert(H: monoid.laws (DL+STR) ttraces_monoid_ops). apply (laws_of_faithful_functor (f':=fun _ => tt) (f:=fun n m => @proj1_sig _ _: ttraces' n m -> traces)); trivial. constructor; try (discriminate || reflexivity). constructor; trivial; try discriminate; now intros ? ? ?. constructor; try (intro; discriminate); (try now left); intros. apply ttraces_lattice_laws. apply dotA. apply dot1x. right. apply dotx1. apply str_refl. apply str_cons. now apply str_ind_l. apply itr_str_l. simpl. setoid_rewrite <-(ldv_spec (n:=tt) (m:=tt) (p:=tt)). split. intros H' w Hw. apply H', Hw. intros H' w Hw. split. eapply is_typed, Hw. apply H', Hw. simpl. setoid_rewrite <-(rdv_spec (n:=tt) (m:=tt) (p:=tt)). split. intros H' w Hw. apply H', Hw. intros H' w Hw. split. eapply is_typed, Hw. apply H', Hw. Qed. 
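(** Note that the typed structure declares a dummy converse operation
    ([ttraces_cnv] always returns the empty set of traces); this is harmless since
    the laws above are only claimed at level [BDL+STR+DIV], a level that does not
    involve converse. *)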
(** ** particular elements *) Program Definition ttinj n p: ttraces n n := tinj p. Next Obligation. apply typed'_inj. Qed. Program Definition tatom n a: ttraces n n := eq (tnil a). Next Obligation. intros ? <-. constructor. Qed. Program Definition ttsingle (i: Sigma): ttraces (src i) (tgt i) := tsingle i. Next Obligation. apply typed'_single. Qed. Program Definition tsingle' a i b: ttraces (src i) (tgt i) := eq (tcons a i (tnil b)). Next Obligation. intros ? <-. do 2 constructor. Qed. Lemma atom_single_atom a i b: tatom _ a ⋅ ttsingle i ⋅ tatom _ b ≡ tsingle' a i b. Proof. intro x. split. - intros [uv [u <- [v Hv Huvw]] [w <- Huvw']]. destruct v as [|c j [d|]]; try elim Hv. destruct Hv. inversion Huvw. simpl. subst. setoid_rewrite <- (tapp_tail_head Huvw'). now apply tapp_x_nil_eq in Huvw' as [-> ->]. - intros <-. exists (tcons a i (tnil b)). eexists. reflexivity. exists (tcons a i (tnil b)). now idtac. constructor. eexists. reflexivity. constructor; constructor. Qed. (** ** properties of the embedding of untyped sets of traces into typed ones, through restriction *) Global Instance restrict_leq n m: Proper (leq ==> leq) (restrict n m). Proof. intros x y H w [? ?]. split. assumption. now apply H. Qed. Global Instance restrict_weq n m: Proper (weq ==> weq) (restrict n m) := op_leq_weq_1. Lemma restrict_0 n m: restrict n m (zer tt tt) ≡ 0. Proof. intros [|]; simpl; tauto. Qed. Lemma restrict_1 n: restrict n n (one tt) ≡ 1. Proof. intros [b|]; simpl. 2: tauto. intuition constructor. Qed. Lemma restrict_pls n m (x y: traces'): restrict n m (x + y) ≡ restrict n m x + restrict n m y. Proof. intros w; simpl; unfold pw2. tauto. Qed. Lemma restrict_dot n m p (x y: traces'): typed' n m x -> typed' m p y -> restrict n p (x ⋅ y) ≡ restrict n m x ⋅ restrict m p y. Proof. intros Hx Hy w; simpl. split. intros [Hw [u Hu [v Hv H]]]. repeat eexists; eauto. intros [u [Hu Hu'] [v [Hv Hv'] H]]. split. eapply tapp_typed; eassumption. repeat eexists; eassumption. Qed. Lemma restrict_iter n (x y: traces') i: typed' n n x -> typed' n n y -> restrict n n (traces_iter y x i) ≡ ttraces_iter (restrict n n y) (restrict n n x) i. Proof. intros Hx Hy. induction i. simpl. now unfold ttraces_weq. simpl. setoid_rewrite restrict_dot. now setoid_rewrite IHi. assumption. now apply typed'_iter. Qed. Lemma restrict_itr n (x: traces'): typed' n n x -> restrict n n (x^+) ≡ (restrict n n x)^+. Proof. intros Hx w. simpl. split. intros [H [i Hw]]. exists i. apply restrict_iter; trivial. now split. intros [i Hw]. split. eapply typed'_iter. 3: eassumption. now intros ? []. now intros ? []. exists i. eapply restrict_iter; eassumption. Qed. Lemma restrict_inj n p: restrict n n (tinj p) ≡ ttinj n p. Proof. intros [b|b i w]; simpl. 2: simpl; lattice. split. now intros [_ H]. intros ?. split. constructor. assumption. Qed. Lemma restrict_single i: restrict _ _ (tsingle i) ≡ ttsingle i. Proof. intros [a|a j [b|???]]; simpl; try lattice. split. now intros [_ H]. intros <-. repeat constructor. Qed. End l. Arguments tnil {State} a. Arguments tcons {State} a i w. Arguments tsingle {State} i _. Arguments tatom {State src tgt} n a. Arguments ttsingle {State src tgt} i. Arguments tsingle' {State src tgt} a i b. Notation traces' State := (traces_monoid_ops State traces_tt traces_tt). 
relation-algebra-v.1.7.9/theories/ugregex.v000066400000000000000000000200361440504774100206720ustar00rootroot00000000000000(** * ugregex: untyped generalised regular expressions *) (** we define here the syntax for untyped generalised regular expressions which we will actually use in computations *) Require Import kat lsyntax positives sups glang comparisons boolean. Set Implicit Arguments. Section l. Variable Pred: nat. Notation Sigma := positive. Notation Atom := (ord (pow2 Pred)). Notation uglang := (traces_monoid_ops Atom traces_tt traces_tt). (** * Syntax *) (** we declare strict iteration as primitive for efficiency reasons: Kleene star is derived from strict iteration in a linear way, while deriving strict iteration out of Kleene star requires duplication. *) Inductive ugregex := | u_var(i: Sigma) | u_prd(p: expr (ord Pred)) | u_pls(e f: ugregex) | u_dot(e f: ugregex) | u_itr(e: ugregex). (** [zer] and [one] are also derived operations *) Definition u_zer := u_prd e_bot. Definition u_one := u_prd e_top. Definition u_str e := u_pls u_one (u_itr e). (** * Language *) (** we define the untyped guarded string language of an expression algebraically (by induction on the expression) ; below we characterise it coalgebraically *) Fixpoint lang (e: ugregex): uglang := match e with | u_var a => tsingle a | u_prd p => tinj (fun i => eval (set.mem i) p) | u_pls e f => lang e + lang f | u_dot e f => lang e ⋅ lang f | u_itr e => (lang e)^+ end. (** we get a KA structure, by interpretation into languages *) Definition u_leq e f := lang e ≦ lang f. Definition u_weq e f := lang e ≡ lang f. Canonical Structure ugregex_lattice_ops := lattice.mk_ops _ u_leq u_weq u_pls u_pls id u_zer u_zer. CoInductive ugregex_unit := ugregex_tt. Canonical Structure ugregex_monoid_ops := monoid.mk_ops ugregex_unit _ (fun _ _ _ => u_dot) (fun _ => u_one) (fun _ => u_itr) (fun _ => u_str) (fun _ _ _ => u_zer) (fun _ _ _ _ _ => u_zer) (fun _ _ _ _ _ => u_zer). (* Canonical Structure ugregex_kat_ops := *) (* kat.mk_ops ugregex_monoid_ops (fun _ => lsyntax.expr_ops _ BL) (fun _ => u_prd). *) Notation tt := ugregex_tt. Notation ugregex' := (ugregex_monoid_ops tt tt). Global Instance ugregex_monoid_laws: monoid.laws BKA ugregex_monoid_ops. Proof. apply (laws_of_faithful_functor (f:=fun _ _: ugregex_unit => lang)). constructor; simpl ob. intros ? ?. constructor. now intros ? ? ?. now intros ? ? ?. now intros _ ? ?. discriminate. intros _ [a|]; simpl; intuition; discriminate. discriminate. discriminate. reflexivity. intros ? [a|]; simpl; intuition. reflexivity. intros _ ? x. rewrite str_itr. simpl (lang _). apply cup_weq. 2: reflexivity. intros [|]; simpl. intuition. tauto. discriminate. discriminate. discriminate. now intros ? ? ?. now intros ? ? ?. Qed. Global Instance ugregex_lattice_laws: lattice.laws BKA ugregex_lattice_ops. Proof. apply (@lattice_laws _ _ ugregex_monoid_laws tt tt). Qed. (** Note that [ugregex] actually comes with a KAT structure, but we do not need it *) (* Global Instance ugregex_kat_laws: kat.laws ugregex_kat_ops. *) (** folding expressions *) Ltac fold_ugregex := ra_fold ugregex_monoid_ops tt. (** * Coalgebraic characterisation of the language recognised by an expression *) (** epsilon (optimised since it's quite simple) *) Fixpoint epsilon_pred a (e: expr_ops (ord Pred) BL) := match e with | e_bot => false | e_top => true | e_var i => a i | e_cup e f => epsilon_pred a e ||| epsilon_pred a f | e_cap e f => epsilon_pred a e &&& epsilon_pred a f | e_neg e => negb (epsilon_pred a e) end. 
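(* For instance, given a valuation [a] of the primitive tests (typically [set.mem]
   of an atom), [epsilon_pred a (e_cup (e_var p) (e_neg (e_var q)))] computes to
   [a p ||| negb (a q)]: the Boolean expression is simply evaluated at [a]. *)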
Fixpoint epsilon a (e: ugregex) := match e with | u_var _ => false | u_prd p => epsilon_pred a p | u_pls e f => epsilon a e ||| epsilon a f | u_dot e f => epsilon a e &&& epsilon a f | u_itr e => epsilon a e end. (** derivatives (specification, unoptimised) *) Fixpoint deriv a i (e: ugregex'): ugregex' := match e with | u_prd _ => 0 | u_var j => ofbool (eqb_pos i j) | u_pls e f => deriv a i e + deriv a i f | u_dot e f => deriv a i e ⋅ f + ofbool (epsilon (set.mem a) e) ⋅ deriv a i f | u_itr e => deriv a i e ⋅ (e: ugregex')^* end. (** corresponding coalgebraic notion of language *) Fixpoint lang' (e: ugregex') (w: trace Atom) := match w with | tnil a => is_true (epsilon (set.mem a) e) | tcons a i w => lang' (deriv a i e) w end. (** characterisation of [epsilon] through languages *) Lemma epsilon_iff_lang_nil a e: epsilon (set.mem a) e <-> (lang e) (tnil a). Proof. induction e; simpl. intuition discriminate. apply eq_bool_iff. simpl epsilon. induction p; simpl. reflexivity. reflexivity. rewrite <-Bool.orb_lazy_alt. congruence. rewrite <-Bool.andb_lazy_alt. congruence. congruence. reflexivity. setoid_rewrite Bool.orb_true_iff. now apply cup_weq. setoid_rewrite Bool.andb_true_iff. setoid_rewrite traces_dot_nil. now apply cap_weq. split. exists O. simpl. tauto. intros [i Hi]. rewrite IHe. clear IHe. induction i. assumption. apply IHi. setoid_rewrite traces_dot_nil in Hi. apply Hi. Qed. Lemma lang_0: lang u_zer ≡ 0. Proof. intros [?|???]; simpl; intuition; discriminate. Qed. Lemma lang_1: lang u_one ≡ 1. Proof. intros [a|???]; simpl. intuition. reflexivity. Qed. Lemma lang_ofbool b: lang (ofbool b: ugregex') ≡ ofbool b. Proof. case b. apply lang_1. apply lang_0. Qed. Global Instance lang_leq: Proper (leq ==> leq) lang. Proof. now intros ? ?. Qed. Global Instance lang_weq: Proper (weq ==> weq) lang. Proof. now intros ? ?. Qed. Lemma lang_sup J: lang (sup id J) ≡ sup lang J. Proof. apply f_sup_weq. apply lang_0. reflexivity. Qed. Lemma deriv_sup a i J: deriv a i (sup id J) = sup (deriv a i) J. Proof. apply f_sup_eq; now f_equal. Qed. (** characterisation of derivatives through languages *) Lemma deriv_traces a i e: lang (deriv a i e) ≡ traces_deriv a i (lang e). Proof. symmetry. induction e; simpl deriv. simpl lang. rewrite lang_ofbool. apply traces_deriv_single. rewrite lang_0. apply traces_deriv_inj. setoid_rewrite traces_deriv_pls. now apply cup_weq. generalize (epsilon_iff_lang_nil a e1). case epsilon; intro He1. fold_ugregex. setoid_rewrite dot1x. simpl lang. setoid_rewrite traces_deriv_dot_1. now rewrite IHe1, IHe2. now rewrite <-He1. fold_ugregex. setoid_rewrite dot0x. rewrite cupxb. simpl lang. setoid_rewrite traces_deriv_dot_2. now rewrite IHe1. rewrite <-He1. discriminate. simpl lang. setoid_rewrite traces_deriv_itr. now rewrite IHe, str_itr, <-inj_top. Qed. (** the two definitions of languages (algebraic and coalgebraic) coincide, by unicity of the coalgebra morphism from expressions to languages *) Theorem lang_lang' e: lang e ≡ lang' e. Proof. symmetry. intro w. revert e. induction w as [a|a i w IH]; simpl lang'; intro e. - apply epsilon_iff_lang_nil. - rewrite IH. apply deriv_traces. Qed. (** as a consequence, [lang'] preserves equality *) Corollary lang'_weq: Proper (weq ==> weq) lang'. Proof. intros ? ? H. apply lang_weq in H. now rewrite 2 lang_lang' in H. Qed. 
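(** Informal reading of the coalgebraic language: a guarded string is consumed
    letter by letter through derivatives, Brzozowski-style. For instance,
    [lang' e (tcons a i (tnil b))] holds iff [epsilon (set.mem b) (deriv a i e)]
    is [true]: the pair (atom [a], letter [i]) is consumed by [deriv], and
    [epsilon] decides acceptance at the final atom [b]. By [lang_lang'] above,
    this coincides with the algebraic language [lang]. *)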
(** * Comparing expressions *) Fixpoint ugregex_compare (x y: ugregex) := match x,y with | u_prd a, u_prd b | u_var a, u_var b => cmp a b | u_pls x x', u_pls y y' | u_dot x x', u_dot y y' => lex (ugregex_compare x y) (ugregex_compare x' y') | u_itr x, u_itr y => ugregex_compare x y | u_var _, _ => Lt | _, u_var _ => Gt | u_prd _, _ => Lt | _, u_prd _ => Gt | u_itr _ , _ => Lt | _, u_itr _ => Gt | u_pls _ _ , _ => Lt | _, u_pls _ _ => Gt end. Lemma ugregex_compare_spec x y: compare_spec (x=y) (ugregex_compare x y). Proof. revert y; induction x; destruct y; try (constructor; congruence); simpl. case cmp_spec; constructor; congruence. case cmp_spec; constructor; congruence. eapply lex_spec; eauto. intuition congruence. eapply lex_spec; eauto. intuition congruence. case IHx; constructor; congruence. Qed. Canonical Structure ugregex_cmp := mk_simple_cmp _ ugregex_compare_spec. End l. relation-algebra-v.1.7.9/theories/ugregex_dec.v000066400000000000000000000340141440504774100215060ustar00rootroot00000000000000(** * ugregex_dec: simple decision procedure for untyped generalised regular expressions *) (** We implement a rather basic algorithm consisting in trying to build a bisimulation on-the-fly, using partial derivatives. We prove the correctness of this algorithm, but not completeness ("it merely let you sleep better" according to Krauss and Nipkow). This very simple algorithm seems to be sufficient for reasonable expressions; we plan to improve it to be able to handle larger ones. *) Require Import lset kat positives sums glang boolean comparisons powerfix. Require Export ugregex. Set Implicit Arguments. Section l. Variable Pred: nat. Notation Sigma := positive. Notation Atom := (ord (pow2 Pred)). Notation tt := ugregex_tt. Notation ugregex := (ugregex_monoid_ops Pred tt tt). Notation uglang := (glang_kat_ops Pred Sigma traces_tt traces_tt). Notation lang := (@lang Pred). Ltac fold_ugregex_type := change (@ugregex.ugregex Pred) with (@car ugregex) in *. Ltac fold_ugregex := ra_fold ugregex_monoid_ops tt; fold_ugregex_type. (** * Partial derivatives *) (** reversed product *) Notation tod e := (fun f => u_dot f e) (only parsing). (** [pderiv a i e] returns the set of partial derivatives of [e] along transition [(a,i)] (since we work with KAT regular expressions, labels are composed of an atom together with a letter) *) Fixpoint pderiv a i (e: ugregex): list ugregex := match e with | u_prd _ => [] | u_var _ j => if eqb_pos i j then [u_one _] else [] | u_pls e f => union (pderiv a i e) (pderiv a i f) | u_dot e f => if epsilon a e then union (map (tod f) (pderiv a i e)) (pderiv a i f) else map (tod f) (pderiv a i e) | u_itr e => map (tod (u_str e)) (pderiv a i e) end. (** [epsilon] was defined in [ugregex], we now to extend both notions to sets of expressions, homomorphically: *) Definition epsilon' a (l: list ugregex): bool := fold_right (fun e b => b ||| epsilon a e) false l. Definition pderiv' a i (l: list ugregex): list ugregex := fold_right (fun e => union (pderiv a i e)) [] l. (** specification of [epsilon'] *) Lemma epsilon'_eq a l: epsilon a (sup id l) ≡ epsilon' a l. Proof. induction l. reflexivity. simpl. rewrite <- IHl. unfold id. rewrite <-2Bool.orb_lazy_alt. apply Bool.orb_comm. Qed. (** correctness of partial derivatives *) Lemma deriv_eq a i e: deriv a i e ≡ sup id (pderiv (set.mem a) i e). Proof. induction e; simpl; fold_ugregex. case eqb_pos. 2: reflexivity. now rewrite sup_singleton. reflexivity. rewrite union_app, sup_app. now apply cup_weq. 
assert (H: deriv a i e1 ⋅ e2 ≡ sup id (map (tod e2) (pderiv (set.mem a) i e1))). rewrite sup_map. setoid_rewrite <-(dotsumx (X:=ugregex_monoid_ops _)). now apply dot_weq. case epsilon. rewrite union_app, sup_app. setoid_rewrite dot1x. now apply cup_weq. setoid_rewrite dot0x. now rewrite cupxb. rewrite sup_map. setoid_rewrite <-(dotsumx (X:=ugregex_monoid_ops _)). now apply dot_weq. Qed. Lemma deriv'_eq a i l: deriv a i (sup id l) ≡ sup id (pderiv' (set.mem a) i l). Proof. induction l. reflexivity. simpl (sup _ _). rewrite union_app, sup_app. apply cup_weq. apply deriv_eq. assumption. Qed. (** Kleene variables of an expression *) Fixpoint vars (e: ugregex): list Sigma := match e with | u_prd _ => [] | u_var _ i => [i] | u_pls e f | u_dot e f => union (vars e) (vars f) | u_itr e => vars e end. (** partial derivatives do not increase the set of Kleene variables *) Lemma deriv_vars a i (e: ugregex): \sup_(x\in pderiv a i e) vars x ≦ vars e. Proof. induction e; simpl pderiv; simpl vars. case eqb_pos; apply leq_bx. apply leq_bx. rewrite 2union_app, sup_app. now apply cup_leq. setoid_rewrite union_app at 2. assert (H: \sup_(x\in map (tod e2) (pderiv a i e1)) vars x ≦ vars e1 ++ vars e2). rewrite sup_map. simpl vars. setoid_rewrite union_app. rewrite supcup. apply cup_leq. assumption. now apply leq_supx. case epsilon. rewrite union_app, sup_app, H. hlattice. assumption. rewrite sup_map. simpl vars. setoid_rewrite union_app. rewrite supcup. apply leq_cupx. assumption. now apply leq_supx. Qed. Lemma deriv'_vars a i l: \sup_(x\in pderiv' a i l) vars x ≦ sup vars l. Proof. induction l. reflexivity. setoid_rewrite union_app. rewrite sup_app. apply cup_leq. apply deriv_vars. assumption. Qed. (** deriving an expression w.r.t. a letter it does not contain necessarily gives [0] *) Lemma deriv_out a i e I: vars e ≦ I -> ~In i I -> deriv a i e ≡ 0. Proof. intros He Hi. induction e; simpl deriv; simpl vars in He; fold_ugregex. case eqb_spec. 2: reflexivity. intros <-. apply Hi in He as []. now left. reflexivity. rewrite union_app in He. rewrite IHe1, IHe2 by (rewrite <-He; lattice). apply cupI. rewrite union_app in He. rewrite IHe1, IHe2 by (rewrite <-He; lattice). rewrite dot0x, dotx0. apply cupI. rewrite IHe by assumption. apply dot0x. Qed. (** we need binary relations on sets of expressions, we represent them as lists of pairs (this could easily be optimised) *) Definition rel_mem (p: list ugregex * list ugregex) := existsb (eqb p). Notation rel_insert p rel := (p::rel). Notation rel_empty := []. (* OPT *) (* Definition rel_mem := trees.mem (pair_compare (list_compare compare)). *) (* Definition rel_insert := trees.insert (pair_compare (list_compare compare)). *) (* Notation rel_empty := (@trees.L _) *) Lemma rel_mem_spec p rel: reflect (In p rel) (rel_mem p rel). Proof. induction rel. constructor. tauto. simpl rel_mem. case eqb_spec. intros <-. constructor. now left. case IHrel; constructor. now right. intros [?|?]; congruence. Qed. (** * Main loop for the on-the-fly bisimulation algorithm *) (** [epsilon'] and [deriv'] provide us with a (generalised) DFA whose states are sets of generalised expressions ([list ugregex]). We simply try compute bisimulations in this DFA. *) Section a. (** we assume a set of Kleene variable, and a set of atoms; the following algorithm tries to compute bisimulations w.r.t. those sets. *) Variable I: list positive. Variable A: list (ord Pred -> bool). Definition obind X Y (f: X -> option Y) (x: option X): option Y := match x with Some x => f x | _ => None end. 
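(* [obind] is the bind of the option monad: [obind f (Some x) = f x] and
   [obind f None = None]. The fold [ofold] below threads it along a list, so that
   the whole computation aborts (returns [None]) as soon as one step fails. *)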
Fixpoint ofold X Y (f: X -> Y -> option Y) (l: list X) (y: Y): option Y := match l with | [] => Some y | x::q => obind (f x) (ofold f q y) end. (** [loop_aux e f a todo] checks the accepting status of [e] and [f] along [a], - if a mismatch is found, we can stop (a counter example has bee found) - otherwise, it inserts all derivatives of the pair [(e,f)] along [{a}⋅I] into [todo] *) Definition loop_aux e f := fun a todo => if eqb_bool (epsilon' a e) (epsilon' a f) then Some (fold_right (fun i => cons (pderiv' a i e, pderiv' a i f)) todo I) else None. (** [ofold (loop_aux e f) A todo] does the same, for all [a\in A] *) (** [loop n rel todo] is the main loop of the algorithm: it tries to prove that all pairs in [todo] are bisimilar, assuming that those in [rel] are bisimilar. - if a pair of [todo] was already in [rel], it can be skipped; - otherwise, its accepting status is checked, all derivatives are inserted in [todo], and the pair is added to [rel] The number of iterations is bounded by [2^n], using the [powerfix] operator. *) Definition loop n := powerfix n (fun loop rel todo => match todo with | [] => Some true | (e,f)::todo => if rel_mem (e,f) rel then loop rel todo else match ofold (loop_aux e f) A todo with | Some todo => loop (rel_insert (e,f) rel) todo | None => Some false end end ) (fun _ _ => None). (** * Correctness of the main loop *) (** [prog] is a predicate on binary relations: [prog rel (rel++todo)] is the invariant of the main loop *) Definition prog R S := forall e f, In (e,f) R -> sup vars (e++f) ≦ I /\ forall a, In a A -> epsilon' a e = epsilon' a f /\ forall i, In i I -> In (pderiv' a i e, pderiv' a i f) S. Lemma prog_cup_x R R' S: prog R S -> prog R' S -> prog (R++R') S. Proof. intros H H' e f Hef. apply in_app_iff in Hef as [?|?]. now apply H. now apply H'. Qed. Lemma prog_x_leq R S S': prog R S -> S ≦ S' -> prog R S'. Proof. intros H H' e f Hef. apply H in Hef as [? Hef]. split. assumption. split. now apply Hef. intros. now apply H', Hef. Qed. Definition below_I todo := forall e f, In (e,f) todo -> sup vars (e++f) ≦ I. (** specification of the inner loop *) Lemma loop_aux_spec e f a todo todo': below_I ((e,f)::todo) -> loop_aux e f a todo = Some todo' -> epsilon' a e = epsilon' a f /\ todo ≦ todo' /\ below_I todo' /\ forall i, In i I -> In (pderiv' a i e, pderiv' a i f) todo'. Proof. unfold loop_aux. case eqb_bool_spec. 2: discriminate. intros Heps Hvars E. split. assumption. injection E. clear E Heps. revert todo'. induction I as [|i J IH]; simpl fold_right; intro todo'. intros <-. split. reflexivity. split. intros ? ? ?. apply Hvars; now right. intros _ []. intro E. destruct todo' as [|p todo']. discriminate. injection E. intros H <-. clear E. apply IH in H as [H1 [H2 H3]]. clear IH. split. fold_cons. rewrite <- H1. lattice. split. intros ? ? [E|H]. injection E; intros <- <-. rewrite sup_app, 2deriv'_vars, <-sup_app. apply Hvars. now left. now apply H2. intros b [<-|Hb]. now left. right. now apply H3. Qed. Lemma fold_loop_aux_spec e f todo: forall todo', below_I ((e,f)::todo) -> ofold (loop_aux e f) A todo = Some todo' -> todo ≦ todo' /\ below_I todo' /\ forall a, In a A -> epsilon' a e = epsilon' a f /\ forall i, In i I -> In (pderiv' a i e, pderiv' a i f) todo'. Proof. induction A as [|b B IH]; simpl ofold; intros todo'. intros Hvars H. injection H. intros <-. split. reflexivity. split. intros ? ? ?. apply Hvars. now right. intros _ []. unfold obind. fold_ugregex_type. case_eq (ofold (X:=ord Pred -> bool) (loop_aux e f) B todo). 2: discriminate. 
intros todo'' Htodo'' Hvars Htodo'. apply IH in Htodo'' as [Htodo''_leq [Hvars' Htodo'']]. 2: assumption. clear IH. apply loop_aux_spec in Htodo' as (Heps&Htodo'_leq&Hvars''&Htodo'). split. etransitivity; eassumption. split. assumption. intros a [<-|Ha]. now split. apply Htodo'' in Ha as [Haeps Ha]. split. assumption. intros. now apply Htodo'_leq, Ha. intros ? ? [E|?]. injection E; intros <- <-. apply Hvars; now left. now apply Hvars'. Qed. Lemma In_cons X (a: X) l: In a l -> [a]++l ≦ l. Proof. now intros ? ? [<-|?]. Qed. (** specification of the outer loop *) Lemma prog_loop n: forall rel todo, loop n rel todo = Some true -> prog rel (rel++todo) -> below_I todo -> exists rel', rel++todo ≦ rel' /\ prog rel' rel'. Proof. (* TODO: use powerfix_invariant *) unfold loop. rewrite powerfix_linearfix. generalize (pow2 n). clear n. intro n. induction n; intros rel todo Hloop Hrel Hvars. discriminate. simpl in Hloop. destruct todo as [|[e f] todo]. exists rel. split. now rewrite <- app_nil_end. now rewrite <-app_nil_end in Hrel. revert Hloop. case rel_mem_spec. intros Hef Hloop. apply IHn in Hloop as (rel'&H1&H2). eexists. split. 2: eassumption. rewrite <- H1. rewrite <-(In_cons Hef) at 2. fold_cons. lattice. eapply prog_x_leq. apply Hrel. rewrite <-(In_cons Hef) at 2. fold_cons. lattice. intros ? ? ?. apply Hvars. now right. intros _. fold_ugregex_type. case_eq (ofold (X:=ord Pred -> bool) (loop_aux e f) A todo). 2: discriminate. intros todo' Htodo' Hloop. apply fold_loop_aux_spec in Htodo' as [Htodo' [Hvars' Hef]]. 2: assumption. destruct (IHn _ _ Hloop) as (rel'&Hrel'&Hrel''). 2: assumption. clear - Hef Hvars Hvars' Hrel Htodo'. apply (@prog_cup_x [_]). eapply prog_x_leq. intros ? ? [E|[]]. injection E; intros <- <-; clear E. split. apply Hvars. now left. apply Hef. lattice. eapply prog_x_leq. apply Hrel. rewrite <- Htodo'. fold_cons. lattice. eexists. split. 2: eassumption. rewrite <-Hrel', <-Htodo'. fold_cons. lattice. Qed. End a. Existing Instance lang'_weq. (** correctness of the bisimulation proof method, at the abstract level *) Lemma prog_correct I l rel: (forall a, In (set.mem a) l) -> prog I l rel rel -> below_I I rel -> forall e f, In (e,f) rel -> sup lang e ≡ sup lang f. Proof. intros Hl Hrel Hvars e f Hef. rewrite <-2lang_sup, 2lang_lang'. intro w. revert e f Hef. induction w; simpl lang'; intros e f Hef. - apply Hrel in Hef as [_ Hef]. rewrite 2epsilon'_eq. destruct (Hef _ (Hl a)) as [-> _]. reflexivity. - destruct (fun H => In_dec H i I) as [Hi|Hi]. decide equality. etransitivity. apply lang'_weq. apply deriv'_eq. etransitivity. 2: apply lang'_weq; symmetry; apply deriv'_eq. apply IHw. apply Hrel. assumption. apply Hl. assumption. clear IHw. revert w. apply lang'_weq. rewrite 2deriv_sup. rewrite 2sup_b. reflexivity. intros f' Hf. eapply deriv_out. 2: eassumption. etransitivity. 2: apply Hvars. 2: apply Hef. apply leq_xsup. apply in_app_iff. now right. intros e' He. eapply deriv_out. 2: eassumption. etransitivity. 2: apply Hvars. 2: apply Hef. apply leq_xsup. apply in_app_iff. now left. Qed. (** * Final algorithm, correctness *) (** the final algorithm is obtained by callign the main loop with appropriate arguments *) Definition eqb_kat (e f: ugregex) := let atoms := map (@set.mem _) (seq _) in let vars := vars (e+f) in loop vars atoms 1000 rel_empty [([e],[f])%list]. (* stated as this, the algorithm is not complete: we would need to replace 1000 with the size of [e+f]... bzzz *) (** correctness of the algorithm *) Theorem eqb_kat_correct e f: eqb_kat e f = Some true -> e ≡ f. 
Proof. unfold eqb_kat. intro H. apply prog_loop in H as [rel [Hef Hrel]]. 2: intros _ _ []. 2: simpl vars; intros ? ? [E|[]]; injection E; intros <- <-; rewrite union_app, sup_app, 2sup_singleton; reflexivity. eapply prog_correct in Hrel. 2: intro; apply in_map, in_seq. 3: apply Hef; now left. rewrite 2sup_singleton in Hrel. assumption. intros ? ? ?. now apply Hrel. Qed. End l. relation-algebra-v.1.7.9/theories/untyping.v000066400000000000000000000631421440504774100211060ustar00rootroot00000000000000(** * untyping: untyping theorem for typed structures *) (** More precisely, on the syntactic (free) models of all structures below Kleene algebra with converse and bottom elements. For more details, see "Untyping Typed Algebras and Colouring Cyclic Linear Logic", Damien Pous, Logical Methods in Computer Science 8(2:13) (2012) *) Require Import kleene syntax. Set Implicit Arguments. (** * Cleaning terms *) Section clean. Context {Sigma: cmpType}. (* used with Sigma=letter in kat_completeness *) Variables (s t: Sigma -> positive). Notation expr := (expr s t). (** more aggressive hint for level constraint resolution *) Local Hint Extern 0 (_ ≪ _) => solve_lower || solve_lower': typeclass_instances. (** induction scheme for syntax.expr (in)equality: (since (in)equality in the syntactic model (syntax.expr) is defined by impredicative encoding, we need to provide the induction scheme by ourselves). The impredicative encoding gives it almost directly *) Definition expr_ind leq weq := mk_ops _ (fun n m => lattice.mk_ops (expr n m) (leq n m) (weq n m) (@e_pls _ _ _ _ _) (@e_cap _ _ _ _ _) (@e_neg _ _ _ _ _) (@e_zer _ _ _ _ _) (@e_top _ _ _ _ _) ) (@e_dot _ s t) (@e_one _ s t) (@e_itr _ s t) (@e_str _ s t) (@e_cnv _ s t) (@e_ldv _ s t) (@e_rdv _ s t). Lemma expr_ind_eval leq weq n m (e: expr n m): eval (X:=expr_ind leq weq) (f':=id) (@e_var _ _ _) e = e. Proof. induction e; simpl; f_equal; assumption. Qed. (** we actually provide a mutual induction principle for [leq] and [weq] *) Lemma expr_leq_weq_ind l leq weq (L: laws l (expr_ind leq weq)) n m (e f: expr n m): (e <==_[l] f -> leq n m e f) /\ (e ==_[l] f -> weq n m e f). Proof. split; intro H; rewrite <-(expr_ind_eval leq weq e), <-(expr_ind_eval leq weq f); apply (H _ L id). Qed. (** The following predicate specifies when an expression is considered as "clean": it does not contain any occurrence of [0] or residuals ([0] has to be factored out for the following proof to work, and residuals cannot be handled with the following methodology) *) Fixpoint is_clean n m (x: expr n m) := match x with | e_one _ | e_var _ => True | e_pls x y | e_dot x y => is_clean x /\ is_clean y | e_cnv x | e_itr x | e_str x => is_clean x | _ => False end. (** cleaning constructors: use annihilation laws to remove all occurrences of [0] (but the last one, if the expression reduces to [0]) *) Definition e_pls' n m (x y: expr n m) := (if is_zer x then y else if is_zer y then x else x+y)%ast. Definition e_dot' n m p (x: expr n m) (y: expr m p) := (if is_zer x then 0 else if is_zer y then 0 else x ⋅ y)%ast. Definition e_itr' n (x: expr n n) := (if is_zer x then 0 else x^+)%ast. Definition e_str' n (x: expr n n) := (if is_zer x then 1 else x^*)%ast. Definition e_cnv' n m (x: expr n m) := (if is_zer x then 0 else x°)%ast. 
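(* For instance, [e_dot' x y] collapses to [0] as soon as [x] or [y] is
   syntactically [0], while [e_pls' 0 y] reduces to [y] and [e_str' 0] to [1];
   on other arguments these smart constructors behave like the plain ones. This is
   what allows [clean] below to return either [0] or an expression without any
   occurrence of [0] (cf. [is_clean] and lemma [clean_spec]). *)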
Fixpoint clean n m (x: expr n m): expr n m := match x with | e_zer _ _ => 0 | e_one _ => 1 | e_pls x y => e_pls' (clean x) (clean y) | e_dot x y => e_dot' (clean x) (clean y) | e_itr x => e_itr' (clean x) | e_str x => e_str' (clean x) | e_cnv x => e_cnv' (clean x) | e_var a => e_var a (** unused cases, ruled out later by level constraints *) | x => x end%ast. Lemma is_zer_clean n m (x: expr n m): is_clean x -> is_zer x = false. Proof. induction x; simpl; intuition. Qed. (** an expression reduces either to [0] or to a clean expression *) Lemma clean_spec n m (x: expr n m): e_level x ≪ BOT+CKA -> clean x = 0%ast \/ is_clean (clean x). Proof. induction x; simpl e_level; simpl clean; unfold e_pls', e_dot', e_itr', e_str', e_cnv'; simpl is_clean; try intuition discriminate_levels. intro Hl. destruct IHx1 as [->|IHx1]. solve_lower'. apply IHx2. solve_lower'. rewrite is_zer_clean by assumption. destruct IHx2 as [->|IHx2]. solve_lower'. now right. rewrite is_zer_clean by assumption. right; now split. intro Hl. destruct IHx1 as [->|IHx1]. solve_lower'. now left. rewrite is_zer_clean by assumption. destruct IHx2 as [->|IHx2]. solve_lower'. now left. rewrite is_zer_clean by assumption. right; now split. intro Hl. destruct IHx as [->|IHx]. solve_lower'. now left. rewrite is_zer_clean by assumption. now right. intro Hl. destruct IHx as [->|IHx]. solve_lower'. now right. rewrite is_zer_clean by assumption. simpl. tauto. intro Hl. destruct IHx as [->|IHx]. solve_lower'. now left. rewrite is_zer_clean by assumption. now right. Qed. (** simple tactic to do a case analysis on all encountered tests, in a bottom-up fashion *) Ltac destruct_tests := unfold e_pls', e_dot', e_itr', e_str', e_cnv'; simpl e_level; match goal with | |- context[is_zer ?x] => lazymatch x with context[is_zer _] => fail | _ => idtac end; case (is_zer_spec x); trivial; destruct_tests | _ => idtac end. (** the cleaning function does not increase the level of the expressions *) Lemma clean_level n m (x: expr n m): e_level (clean x) ≪ e_level x. Proof. induction x; try reflexivity; simpl clean; revert_prop; destruct_tests; simpl e_level; intros; solve_lower' || reflexivity. Qed. Lemma is_zer_level n m (x: expr n m): is_zer x -> BOT ≪ e_level x. Proof. case is_zer_spec. reflexivity. discriminate. Qed. (** if an expression reduces to [0], then [0] was appearing somewhere in that expression *) Lemma clean_0_level n m (x: expr n m): clean x = 0%ast -> BOT ≪ e_level x. Proof. rewrite <-clean_level. now intros ->. Qed. (** cleaning constructors are "correct": they correspond to their syntactic counterparts *) Lemma e_pls_weq l n m x y: `{CUP + e_level x + e_level y ≪ l} -> @e_pls' n m x y ==_[l] x+y. Proof. destruct_tests; intros; lattice. Qed. Lemma e_dot_weq l n m p x y: e_level x + e_level y ≪ l -> @e_dot' n m p x y ==_[l] x⋅y. Proof. destruct_tests; symmetry. apply dot0x. apply dotx0. Qed. Lemma e_itr_weq l n x: STR + e_level x ≪ l -> @e_itr' n x ==_[l] x^+. Proof. destruct_tests. intros. now rewrite itr0. Qed. Lemma e_str_weq l n x: STR + e_level x ≪ l -> @e_str' n x ==_[l] x^*. Proof. destruct_tests. intros. now rewrite str0. Qed. Lemma e_cnv_weq l n m x: CNV + e_level x ≪ l -> @e_cnv' n m x ==_[l] x°. Proof. destruct_tests. intros. now rewrite cnv0. Qed. (** the cleaning function thus returns an equivalent expression (at any level containing the operations appearing in that expression) *) Lemma clean_weq l n m (x: expr n m): e_level x ≪ l -> clean x ==_[l] x. Proof. 
induction x; simpl e_level; simpl clean; try reflexivity; rewrite ?merge_spec; intuition. rewrite e_pls_weq. apply cup_weq; auto. rewrite 2clean_level. solve_lower'. rewrite e_dot_weq. apply dot_weq; auto. rewrite 2clean_level. solve_lower'. rewrite e_itr_weq. apply itr_weq; auto. rewrite clean_level. solve_lower'. rewrite e_str_weq. apply str_weq; auto. rewrite clean_level. solve_lower'. rewrite e_cnv_weq. apply cnv_weq; auto. rewrite clean_level. solve_lower'. Qed. (** simple tactic to discriminate unsatisfiable constraints *) Ltac discr_levels Hl tac := repeat match goal with | |- _ ≪ _ -> _ => let Hl' := fresh "Hl" in intro Hl'; try ((rewrite Hl in Hl'; discriminate Hl') || tac Hl') | |- _ \/ _ => right end; unfold Reflexive, Transitive, Proper, respectful; simpl; unfold e_dot', e_pls', e_cnv', e_itr', e_str'. (** ** key lemma 1: equivalent expressions reduce to [0] simultaneously *) Lemma clean_leq_weq_0 l: l ≪ BOT+CKA -> forall n m (x y: expr n m), (x <==_[l] y -> clean y = 0%ast -> clean x = 0%ast) /\ (x ==_[l] y -> (clean x = 0%ast <-> clean y = 0%ast)). Proof. intros Hl. apply expr_leq_weq_ind. constructor; [constructor; [constructor|..] |..]; simpl ob in *; discr_levels Hl idtac; intro_vars; destruct_tests; intuition (congruence || discriminate). Qed. Corollary clean_leq_0 l: l ≪ BOT+CKA -> forall n m (x y: expr n m), x <==_[l] y -> clean y = 0%ast -> clean x = 0%ast. Proof. apply clean_leq_weq_0. Qed. (** the cleaning function is idempotent *) Lemma clean_idem n m (e: expr n m): clean (clean e) = clean e. Proof. induction e; simpl; trivial; destruct_tests; intros; simpl; rewrite ?IHe1, ?IHe2, ?IHe; destruct_tests; congruence. Qed. Lemma lower_bot h k: has_bot h = false -> h ≪ BOT + k -> h ≪ k. Proof. rewrite 2lower_spec. simpl. intros ->. intuition discriminate. Qed. (** ** key lemma 2: proofs with bottom elements can be factorised into a preliminary cleaning phase, followed by a "clean" proof which does not use bottom elements laws (we move from (in)equality proofs at level [BOT+l] to (in)equality proofs at level [l]) *) Lemma clean_factorise_leq_weq l: l ≪ BOT+CKA -> forall n m (x y: expr n m), (x <==_[BOT+l] y -> clean x = 0%ast \/ clean x <==_[l] clean y) /\ (x ==_[BOT+l] y -> clean x ==_[l] clean y). Proof. (* TOTHINK: on pourrait se passer de [clean_idem] en faisant une induction plus forte, où l'on garderait les hypothèses <==_[BOT+l] sous-jacentes, pour appliquer directement clean_leq_0 *) intros Hl. apply expr_leq_weq_ind. constructor; [constructor; [constructor|..] |..]; simpl ob in *; discr_levels Hl ltac:(fun Hl' => apply lower_bot in Hl'; [|reflexivity]); intro_vars; destruct_tests; try solve [intuition (reflexivity || discriminate)]. intros [Hx|Hxy]. now left. intros [Hy|Hyz]. left. revert Hy. generalize (clean_leq_0 Hl Hxy). now rewrite 2clean_idem. right. etransitivity; eassumption. rewrite weq_spec. split. intros [? ?]; split; now right. intros [[Hx|Hx] [Hy|Hy]]. rewrite Hx, Hy. now split. generalize (clean_leq_0 Hl Hy). rewrite 2clean_idem. intro Hy'. now rewrite Hx, Hy'. generalize (clean_leq_0 Hl Hx). rewrite 2clean_idem. intro Hx'. now rewrite Hx', Hy. now split. rewrite cup_spec. intuition discriminate. intros; apply dotA. intros; apply dot1x. intros; apply dotx1. intros ? ? ? [Hx|Hxy] H'. tauto. generalize (clean_leq_0 Hl Hxy). rewrite clean_idem. tauto. intros ? ? ? ? [Hx|Hxy] H'. tauto. destruct H' as [Hx'|Hxy']. tauto. generalize (clean_leq_0 Hl Hxy'). rewrite clean_idem. tauto. right. apply dot_leq; tauto. right. now rewrite dotplsx. right. 
  now rewrite dotxpls.
  right. apply cnvdot_.
  intros; apply cnv_invol.
  intros ? ? [Hx|Hxy]. tauto.
   generalize (clean_leq_0 Hl Hxy). rewrite clean_idem. tauto.
  right. apply cnv_leq. tauto.
  right. apply cnv_ext.
  right. apply str_refl.
  right. apply str_cons.
  right. now rewrite dot1x.
  right. apply str_ind_l; intuition discriminate.
  right. now rewrite dotx1.
  right. apply str_ind_r; intuition discriminate.
  intros _ _. apply itr_str_l.
Qed.

Corollary clean_factorise_leq l: l ≪ BOT+CKA -> forall n m (x y: expr n m),
  x <==_[BOT+l] y -> clean x = 0%ast \/ clean x <==_[l] clean y.
Proof. apply clean_factorise_leq_weq. Qed.

End clean.

(** * Untyping theorem for bottom-free structures *)

Section e.
Context {Sigma: cmpType}.
Variables (s t: Sigma -> positive).
Notation uexpr := (expr (fun _: Sigma => xH) (fun _: Sigma => xH) xH xH).
Notation expr := (expr s t).

(** evaluation predicate, to relate untyped expressions [uexpr] to typed ones [expr];
    this cannot be a function:
    - an untyped expression can correspond to several typed expressions
      (e.g., [1] is mapped to all identities);
    - an untyped expression can possibly be ill-typed, so that it does not
      map to any typed expression. *)
Inductive eval: forall n m, uexpr -> expr n m -> Prop :=
| ev_one: forall n, @eval n n 1 1
| ev_pls: forall x y n m x' y', @eval n m x x' -> @eval n m y y' -> eval (x+y) (x'+y')
| ev_dot: forall x y n m p x' y', @eval n m x x' -> @eval m p y y' -> eval (x⋅y) (x'⋅y')
| ev_itr: forall x n x', @eval n n x x' -> eval (x^+) (x'^+)
| ev_str: forall x n x', @eval n n x x' -> eval (x^* ) (x'^* )
| ev_cnv: forall x n m x', @eval n m x x' -> eval (x°) (x'°)
| ev_var: forall a, eval (e_var a) (e_var a).
Arguments eval : clear implicits.

(** inversion lemmas for [eval], for all operations

    although the statements look straightforward, the proofs are a bit
    technical since we avoid the JMeq/eq_dep axioms; the fact that the types
    of our expressions ([ob expr_ops]) form a decidable type ([positive]) is
    crucial for that: decidability gives us strong elimination of dependent
    equalities. *)

Lemma eval_pls n m x y z: eval n m (x+y) z ->
  exists x', eval n m x x' /\ exists y', eval n m y y' /\ z=e_pls x' y'.
Proof.
  remember (x+y)%ast as z' eqn:E. destruct 1; try discriminate.
  rewrite <- (f_equal ((fun n m (e: syntax.expr _ _ n m) =>
    match e in syntax.expr _ _ n m return syntax.expr _ _ n m with e_pls x _ => x | x => x end) _ _) E).
  rewrite <- (f_equal ((fun n m (e: syntax.expr _ _ n m) =>
    match e in syntax.expr _ _ n m return syntax.expr _ _ n m with e_pls _ y => y | x => x end) _ _) E).
  eauto.
Qed.

Lemma eval_dot n m x y z: eval n m (x⋅y) z ->
  exists p, exists x', eval n p x x' /\ exists y', eval p m y y' /\ z=e_dot x' y'.
Proof.
  remember (x⋅y)%ast as z' eqn:E. destruct 1; try discriminate.
  generalize (f_equal ((fun n m e =>
    match e in syntax.expr _ _ n m return syntax.expr _ _ n xH with
    | @e_dot _ _ _ _ p _ x _ =>
      match eqb_spec cmp_pos p xH with
      | reflect_t H => eq_rect _ (syntax.expr _ _ _) x _ H
      | _ => e_zer _ _
      end
    | _ => e_zer _ _
    end) xH xH) E).
  case eqb_spec. 2: congruence. intro. rewrite 2cmp_eq_rect_eq. intros <-.
  generalize (f_equal ((fun n m e =>
    match e in syntax.expr _ _ n m return syntax.expr _ _ xH m with
    | @e_dot _ _ _ _ p _ _ x =>
      match eqb_spec cmp_pos p xH with
      | reflect_t H => eq_rect _ (fun p => syntax.expr _ _ p _) x _ H
      | _ => e_zer _ _
      end
    | _ => e_zer _ _
    end) xH xH) E).
  case eqb_spec. 2: congruence. intro. rewrite 2cmp_eq_rect_eq. intros <-.
  eauto 6.
Qed.
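(** as a quick illustration of the [eval] predicate (a hypothetical sanity
    check, assuming the ambient notations resolve as in the constructors
    above), the untyped identity evaluates to the identity at every type:
<<
    Goal forall n, eval n n 1 1.
    Proof. intro n. apply ev_one. Qed.
>>
*)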
Lemma eval_cnv n m x z: eval n m (x°) z -> exists x', eval m n x x' /\ z=e_cnv x'.
Proof.
  remember (x°)%ast as z' eqn:E. destruct 1; try discriminate.
  rewrite <- (f_equal ((fun n m (e: syntax.expr _ _ n m) =>
    match e with e_cnv x => x | _ => e_zer _ _ end) _ _) E).
  eauto.
Qed.

Lemma eval_one' n m z: eval n m 1 z -> n=m.
Proof. inversion 1. assumption. Qed.

Lemma eval_itr' n m x z: eval n m (x^+) z -> n=m.
Proof. inversion 1. assumption. Qed.

Lemma eval_str' n m x z: eval n m (x^* ) z -> n=m.
Proof. inversion 1. assumption. Qed.

Lemma eval_var' n m a z: eval n m (e_var a) z -> n=s a /\ m=t a.
Proof. inversion 1. auto. Qed.

Lemma eval_one n z: eval n n 1 z -> z=e_one n.
Proof.
  assert (G: forall m u (z: expr n m), eval n m u z -> forall (H: n=m),
    u=e_one _ -> z = eq_rect _ (expr n) (e_one n) m H).
   intros m z' H. destruct 1; intros; try discriminate. now rewrite cmp_eq_rect_eq.
  intro Hz. apply (G _ _ _ Hz eq_refl eq_refl).
Qed.

Lemma eval_itr n x z: eval n n (x^+) z -> exists x', eval n n x x' /\ z = e_itr x'.
Proof.
  assert (G: forall m u (z: expr n m), eval n m u z -> forall (H: n=m) x,
    u=e_itr x -> exists x', eval n n x x' /\ z = eq_rect _ (expr n) (e_itr x') m H).
   intros m z' H. destruct 1; intros E v Hv; try discriminate.
   rewrite <-(f_equal ((fun n m (e: syntax.expr _ _ n m) =>
     match e in syntax.expr _ _ n m return syntax.expr _ _ n m with e_itr y => y | x => x end) _ _) Hv).
   eexists. split. eassumption. now rewrite cmp_eq_rect_eq.
  intro Hz. apply (G _ _ _ Hz eq_refl _ eq_refl).
Qed.

Lemma eval_str n x z: eval n n (x^* ) z -> exists x', eval n n x x' /\ z = e_str x'.
Proof.
  assert (G: forall m u (z: expr n m), eval n m u z -> forall (H: n=m) x,
    u=e_str x -> exists x', eval n n x x' /\ z = eq_rect _ (expr n) (e_str x') m H).
   intros m z' H. destruct 1; intros E v Hv; try discriminate.
   rewrite <-(f_equal ((fun n m (e: syntax.expr _ _ n m) =>
     match e in syntax.expr _ _ n m return syntax.expr _ _ n m with e_str y => y | x => x end) _ _) Hv).
   eexists. split. eassumption. now rewrite cmp_eq_rect_eq.
  intro Hz. apply (G _ _ _ Hz eq_refl _ eq_refl).
Qed.

Lemma eval_var a z: eval (s a) (t a) (e_var a) z -> z=e_var a.
Proof.
  assert (G: forall n m u (z: expr n m), eval n m u z -> forall a (Hs: s a=n) (Ht: t a=m),
    u=e_var a -> z = eq_rect _ (fun m => expr m _) (eq_rect _ (expr _) (e_var a) m Ht) n Hs).
   intros n m u z'. destruct 1; intros b Hs Ht E; try discriminate.
   injection E. intros ->. now rewrite 2cmp_eq_rect_eq.
  intro Hz. apply (G _ _ _ _ Hz _ eq_refl eq_refl eq_refl).
Qed.

(** ** key lemma 3: injectivity of typing derivations

    an untyped expression cannot be mapped to two terms of distinct types
    for which either the sources or the targets coincide. In other words,
    once the sources coincide, so do the targets, and conversely
    (this lemma does not hold in the presence of bottom elements). *)
Lemma eval_types n n' m m' x x' x'':
  eval n m x x' -> eval n' m' x x'' -> (n=n' <-> m=m').
Proof.
  intro H. revert n' m' x''. induction H; intros n' m' x'' H''.
  apply eval_one' in H''. subst. reflexivity.
  apply eval_pls in H'' as (?&?&?&?&?). eauto.
  apply eval_dot in H'' as (?&?&?&?&?&?). erewrite IHeval1 by eassumption. eauto.
  apply eval_itr' in H''. subst. reflexivity.
  apply eval_str' in H''. subst. reflexivity.
  apply eval_cnv in H'' as (?&?&?). symmetry. eauto.
  apply eval_var' in H'' as (?&?). subst. now split.
Qed.

Lemma eval_types_l n m m' x x' x'': eval n m x x' -> eval n m' x x'' -> m=m'.
Proof. intros H H'. now apply (eval_types H H'). Qed.
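(** for instance, if [eval] were extended with a hypothetical rule
    [ev_zer: forall n m, @eval n m 0 0], the untyped [0] would evaluate to
    terms of completely unrelated types and the equivalence [n=n' <-> m=m']
    above would fail; this is why bottom elements are handled separately,
    through the cleaning phase of the previous section *)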
Lemma eval_types_r n m m' x x' x'': eval m n x x' -> eval m' n x x'' -> m=m'.
Proof. intros H H'. now apply (eval_types H H'). Qed.

(** we deduce that the [eval] predicate is functional once the types are fixed *)
Lemma eval_fun n m x x' x'': eval n m x x' -> eval n m x x'' -> x'=x''.
Proof.
  induction 1; intro H'.
  now apply eval_one in H'.
  apply eval_pls in H' as (?&?&?&?&->). firstorder congruence.
  apply eval_dot in H' as (q&?&?&?&?&->).
   assert(m=q) by eauto using eval_types_l. subst. firstorder congruence.
  apply eval_itr in H' as (?&?&->). firstorder congruence.
  apply eval_str in H' as (?&?&->). firstorder congruence.
  apply eval_cnv in H' as (?&?&->). firstorder congruence.
  apply eval_var in H'. congruence.
Qed.

Section l.
Variable l: level.

(** ** main lemma *)

(** we use the same trick as above to perform a mutual induction on the
    [leq/weq] predicates (unfortunately, we cannot reuse the above lemmas
    since we need to do an induction on untyped equalities) *)

(* TODO: factorise code, possibly by explicitly introducing an untyped syntax
   (and noting that the above cleaning function doesn't need to be typed) *)

(* this definition is well-suited to sup-semilattices; we would need to take
   the dual one for inf-semilattices *)
Definition u_leq x y :=
  (forall n m y', eval n m y y' -> exists x', eval n m x x' /\ x' <==_[l] y').
Definition u_weq x y :=
  (forall n m x', eval n m x x' -> exists y', eval n m y y' /\ x' ==_[l] y') /\
  (forall n m y', eval n m y y' -> exists x', eval n m x x' /\ x' ==_[l] y').

Definition u_lattice_ops := {|
  car := uexpr;
  leq := u_leq;
  weq := u_weq;
  cup := @e_pls _ _ _ _ _;
  bot := e_zer _ _;
  (* below: these operations are not meaningful *)
  lattice.top := e_top _ _;
  cap := @e_cap _ _ _ _ _;
  neg := @e_neg _ _ _ _ _
|}.

Definition u_ops := {|
  ob := unit;
  mor n m := u_lattice_ops;
  dot n m p := @e_dot _ _ _ _ _ _;
  one n := e_one _;
  itr n := @e_itr _ _ _ _;
  str n := @e_str _ _ _ _;
  cnv n m := @e_cnv _ _ _ _ _;
  (* below: these operations are not meaningful *)
  ldv n m p := @e_ldv _ _ _ _ _ _;
  rdv n m p := @e_rdv _ _ _ _ _ _
|}.

Ltac eval_inv :=
  repeat
    (match goal with
     | H: eval ?n ?m ?x _, H': eval ?n ?m ?x _ |- _ => apply (eval_fun H) in H'
     | H: eval ?n _ ?x _, H': eval ?n _ ?x _ |- _ => pose proof (eval_types_l H H')
     | H: eval _ ?n ?x _, H': eval _ ?n ?x _ |- _ => pose proof (eval_types_r H H')
     | H: eval _ _ _ _ |- _ =>
       first [ apply eval_pls in H as (?&?&?&?&?)
             | apply eval_dot in H as (?&?&?&?&?&?)
             | apply eval_itr in H as (?&?&?)
             | apply eval_str in H as (?&?&?)
             | apply eval_cnv in H as (?&?&?)
             | apply eval_one in H
             | apply eval_var in H ]
     | H: eval _ _ 1 _ |- _ => pose proof (eval_one' H)
     | H: eval _ _ (_^+) _ |- _ => pose proof (eval_itr' H)
     | H: eval _ _ (_^* ) _ |- _ => pose proof (eval_str' H)
     | H: eval _ _ ?x _, H': u_leq _ ?x |- _ => pose proof H; apply H' in H as (?&?&?); clear H'
     | H: eval _ _ ?x _, H': u_weq _ ?x |- _ => pose proof H; apply H' in H as (?&?&?); clear H'
     | H: eval _ _ ?x _, H': u_weq ?x _ |- _ => pose proof H; apply H' in H as (?&?&?); clear H'
     end; subst).

Ltac exists_eval :=
  simpl; try split; repeat intro; eval_inv;
  eexists; (split; [repeat constructor; eassumption|]).

Ltac not_involved Hl :=
  let H := fresh in intro H; apply (lower_trans _ _ _ H) in Hl; discriminate Hl.

Instance u_lattice_laws {Hl: l ≪ CKA}: lattice.laws l u_lattice_ops.
Proof.
  constructor; try not_involved Hl.
  constructor. repeat intro; solve [eauto 6].
   exists_eval. etransitivity; eassumption.
  split. exists_eval; hlattice.
   simpl. intros [? ?]. exists_eval; lattice.
  split.
   exists_eval. eapply cup_spec. rewrite cupC. eassumption. eapply cup_spec; eassumption.
   simpl. intros [? ?]. exists_eval; lattice.
  (* note that [bot] would work here, but [bot] breaks the unique typing property *)
Qed.

Instance u_laws {Hl: l ≪ CKA}: laws l u_ops.
Proof.
  constructor; try not_involved Hl; repeat right.
  intros. apply u_lattice_laws.
  exists_eval; apply dotA.
  exists_eval; apply dot1x.
  exists_eval; apply dotx1.
  exists_eval. apply dot_leq; assumption.
  exists_eval. apply weq_leq. apply dotplsx.
  exists_eval. apply weq_leq. apply dotxpls.
  exists_eval. apply weq_leq. apply cnvdot.
  exists_eval; apply cnv_invol.
  exists_eval. apply cnv_leq; assumption.
  exists_eval. apply cnv_ext.
  exists_eval. apply str_refl.
  exists_eval. apply str_cons.
  exists_eval. apply str_ind_l; assumption.
  exists_eval. apply str_ind_r; assumption.
  exists_eval; apply itr_str_l.
Qed.

(** type erasing function: replace all types by a constant one (here, [xH]) *)
Definition erase: forall n m, expr n m -> uexpr :=
  @syntax.eval _ s t (expr_ops _ _ l) (fun _ => xH) (@e_var _ _ _).

Lemma eval_erase: forall n m (x: expr n m), is_clean x -> eval n m (erase x) x.
Proof.
  induction x; simpl (is_clean _); intro Hl; try discriminate Hl;
    try constructor; first [apply IHx1 | apply IHx2 | apply IHx | idtac]; tauto.
Qed.

Lemma syntax_eval_id n m (e: expr n m):
  syntax.eval (X:=u_ops) (f':=fun _ => tt) (@e_var _ _ _) (erase e) = erase e.
Proof. induction e; simpl; unfold str; simpl; repeat f_equal; assumption. Qed.

(** untyping theorem for bottom-free structures *)
Theorem erase_faithful_leq_clean n m (x y: expr n m):
  is_clean x -> is_clean y -> l ≪ CKA -> erase x <==_[l] erase y -> x <==_[l] y.
Proof.
  intros Hx Hy Hl H.
  assert (H': u_leq (erase x) (erase y)).
   rewrite <-(syntax_eval_id x), <-(syntax_eval_id y). apply (H u_ops), u_laws.
  apply eval_erase in Hx. apply eval_erase in Hy.
  apply H' in Hy as (x'&Hx'&Hxy). now rewrite (eval_fun Hx Hx').
Qed.

End l.

(** * Extension to structures with bottom element, by factorisation *)

Lemma is_zer_erase l n m (e: expr n m): is_zer (erase l e) = is_zer e.
Proof. destruct e; reflexivity. Qed.

Lemma clean_erase h k n m (e: expr n m): clean (erase h e) = erase k (clean e).
Proof.
  induction e; simpl; unfold e_pls', e_dot', e_itr', e_str', e_pls', e_cnv'; simpl in *;
    trivial; rewrite ?IHe1, ?IHe2, ?IHe, ?is_zer_erase; repeat (case is_zer; trivial).
Qed.

Lemma level_erase l n m (e: expr n m): e_level (erase l e) = e_level e.
Proof. induction e; simpl in *; congruence. Qed.

Lemma erase_0 l n m (e: expr n m): erase l e = 0%ast -> e = 0%ast.
Proof. destruct e; discriminate || reflexivity. Qed.

(** final untyping theorem, for inequalities *)
Theorem erase_faithful_leq l n m (x y: expr n m):
  e_level x + e_level y ≪ l -> l ≪ BOT+CKA ->
  erase l x <==_[l] erase l y -> x <==_[l] y.
Proof.
  (* TODO: rework this ugly proof *)
  intros Hxy Hl H. rewrite <-(clean_weq x), <-(clean_weq y) by solve_lower'.
  destruct (clean_spec x) as [Hx|Hx]. rewrite <-Hl; solve_lower'.
   rewrite Hx. rewrite <-clean_0_level in Hxy by assumption. lattice.
  destruct (clean_spec y) as [Hy|Hy]. rewrite <-Hl; solve_lower'.
   rewrite Hy. apply clean_leq_0 in H. rewrite (clean_erase l l) in H.
    apply erase_0 in H. now rewrite H.
    assumption.
    now rewrite (clean_erase l l), Hy.
  set (l' := (* l \ BOT *) mk_level (has_cup l) false (has_cap l) (has_top l)
                                    (has_str l) (has_cnv l) (has_neg l) (has_div l)).
  assert (L: l' ≪ l). rewrite lower_spec. intuition discriminate.
  assert (L': l ≪ BOT+l'). rewrite lower_spec. simpl. intuition.
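  (* the cleaned expressions are now known to be clean, so they can be compared
     at the bottom-free level [l'], using [erase_faithful_leq_clean] together
     with the factorisation lemma [clean_factorise_leq] *)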
  assert (G: clean x <==_[l'] clean y).
   apply erase_faithful_leq_clean. assumption. assumption.
    rewrite <-L in Hl. apply Hl.
   rewrite <-2(clean_erase l l').
   destruct (fun Hl => clean_factorise_leq Hl (e_leq_weaken H)) as [H'|H'].
    now rewrite L.
    rewrite (clean_erase l l) in H'. apply erase_0 in H'.
     rewrite H' in Hx. inversion Hx.
    assumption.
  apply e_leq_weaken, G.
Qed.

(** final untyping theorem, for equalities *)
Corollary erase_faithful_weq l n m (x y: expr n m):
  e_level x + e_level y ≪ l -> l ≪ BOT+CKA ->
  erase l x ==_[l] erase l y -> x ==_[l] y.
Proof.
  intros Hxy Hl. rewrite 2weq_spec.
  split; apply erase_faithful_leq; intuition solve_lower'.
Qed.

End e.
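(** typical use of these theorems (a sketch of the intended workflow; the
    reification machinery itself is not shown here): to prove a typed
    (in)equality [x ==_[l] y] with [e_level x + e_level y ≪ l] and
    [l ≪ BOT+CKA], one first establishes the corresponding untyped
    (in)equality between the erasures of [x] and [y], for instance by a
    decision procedure for Kleene algebra, and then transfers the result
    back to the typed setting with [erase_faithful_leq] or
    [erase_faithful_weq]. *)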